The Sidley Podcast

Superpowers — and Potential Perils: Deploying AI for Business

Generative artificial intelligence (AI) is taking the world by storm, promising to enhance efficiency and revolutionize business. Corporations are embracing the technology, with AI market size expected to reach $1 trillion by 2030. But companies that deploy AI for their operations are confronting risks involving litigation, privacy, and ethics. And regulators are struggling to keep pace with its rapid deployment, leading some to fear AI is out of control — that it could spawn a surveillance state and render portions of the workforce obsolete.  

How can businesses employ AI most effectively? What are its superpowers — and potential perils? And what’s in store for the future of AI technology?

Join The Sidley Podcast host and Sidley partner Sam Gandhi as he speaks with one of the firm’s thought leaders on these issues — Dave Gordon, a member of the firm’s Executive Committee, the co-leader of the firm’s Commercial Litigation and Disputes practice, the head of the Litigation group in the firm’s Chicago office, and the co-chair of the firm’s Artificial Intelligence Council. Together, they discuss who is using AI and why, the risks and best practices of deployment, and what to expect in the ever-changing legal and regulatory landscape.

Sam Gandhi:

Generative artificial intelligence is taking the world by storm, promising to enhance efficiency, create more free time, and revolutionize business. Businesses are embracing the technology, with the AI market expected to reach a trillion dollars by 2030. But companies that deploy AI for their operations are confronting risks involving litigation, privacy, and ethics, and regulators are struggling to keep pace with its rapid deployment, leading some to fear AI is out of control, that it could spawn a surveillance state, and even render human workers obsolete.

 

Dave Gordon:

The biggest risk that high-level, sophisticated people see is the risk of not using it at all. People understand that this is the future, or at least a part of the future, and while quite different in function, it would be like people saying email is too dangerous or complicated, and we're not going to use it. So, I think the more sophisticated people understand that this is not just coming, it's here, but we have to find the right way to use it.

 

Sam Gandhi:

That's Dave Gordon, member of our Executive Committee, co-leader of the firm's Commercial Litigation and Disputes practice, and the co-chair of the firm's Artificial Intelligence Council. How can businesses employ AI most effectively?

 

What are its superpowers and potential perils, and what's in store for the future of the technology? We'll find out in today's podcast. From the international law firm Sidley Austin, this is The Sidley Podcast, where we tackle cutting-edge issues in the law and put them in perspective for business people today. I'm Sam Gandhi. 

 

Hello, and welcome to this edition of The Sidley Podcast. Dave, it's great to welcome you to Episode 50 of the podcast.

 

Dave Gordon:

Thanks, Sam. Delighted to be here with you. 

 

Sam Gandhi:

It would be the understatement of the year to say artificial intelligence has infiltrated almost every area of our lives, even our beloved annual pastimes. A day after the big game, Sports Illustrated said it took less than a quarter for NFL fans to be completely sick of AI ads during Super Bowl 60.

 

The AI trend has handily replaced the glut of crypto spots that we saw from last year's Super Sunday. In the corporate world, according to Forbes, nearly three out of four companies have started using AI for at least one business function on a regular basis.

 

But in a cautionary sign, three out of four consumers are concerned about misinformation from AI. So, Dave, before we even get into the business case for AI or the issues that people may have with it, let's just set the playing field: what do we mean when we refer to AI, and what do people use it for?

 

Dave Gordon:

Sure, Sam. AI can mean a lot of things, but what people mostly mean by that term these days is generative AI. AI is a term that's been around for a long time. Generative AI is something that is relatively new, the last few years, really, and it involves the program or the platform creating data, language, and other things (what some people call thinking) to generate material for us to use.

 

Now, how does it work? This is very important to all the strategy around it. It's not magic. It's not actually thinking, although the CEO of Anthropic said this month that we don't know if the models are conscious. I think what he was getting at is that we don't really know, as human beings, or can't agree on, what is thinking versus what is very, very clever pattern recognition.

 

But what's really going on in generative AI is taking an enormous dataset, looking at how words are used together, or pictures, or music, and predicting, based on what people are asking, what the most common or right pattern of words is. That sounds very simple and not that useful, but at the scale it operates, it can create novels and legal briefs and songs that are sometimes hard to distinguish from the real thing. So, is it magic? Is it thinking? Hard to say really what it is, but it's very important to understand how it works.
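
To make the pattern-prediction idea concrete, here is a minimal, illustrative sketch of next-word prediction from co-occurrence counts. It is a toy, not how production models work: real generative AI trains neural networks over enormous corpora of tokens, but the predict-from-frequency intuition is the same.

```python
# Toy illustration of "predict the next word from patterns in data."
# Real generative models use neural networks over huge corpora; this
# bigram counter only demonstrates the prediction-from-frequency idea.
from collections import Counter, defaultdict

corpus = "the court granted the motion and the court denied the appeal".split()

# Count how often each word follows each other word in the dataset.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "court" (follows "the" twice; others once)
```

Scaled up to trillions of words and far richer statistical machinery, that same idea yields the fluent novels, briefs, and songs described above.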

 

Sam Gandhi:

Let's talk about that a little bit. When you say, is it thinking, when something's kind of predictive and it's coming up with an output, isn't that kind of thinking?

 

Dave Gordon:

It resembles a certain kind of thinking. What I would call it, more precisely, is predictive reasoning. It's very good at doing things that are routine. It's very good at doing things that are repetitive. So, for example, it's excellent at generating an email saying, thank you for the interview that you conducted with me yesterday, or thank you for inviting me to the party, or even a set of rules or policies that you might put together. For example, earlier this week, I was working on a policy for Sidley about how we give credit to people for using AI.

 

AI helped me with that, and it probably saved me an hour of writing. So, it's good at things that are easy. What's less clear, and this is in part because we don't totally know how the human brain works, is whether it can be truly creative, exercise judgment, or provide the kind of insights that experienced human beings can.

 

And when we think about the future, and I know we're going to talk about this, that's a very important distinction, because I can ask someone to write a first draft of something that is very straightforward and maybe get something pretty good, but what I can't really do is rely on someone to draft a legal brief that's going to be anywhere near the quality of what we would expect for our clients or at a firm like Sidley.

 

Sam Gandhi:

So, do you think these models, at some point, are going to fill in what's probably lacking, which is the human emotion? Do you think that at some point these predictive models are going to be capable of emotion that resembles what humans actually do, which seems to be missing from what you're talking about now?

 

Dave Gordon:

I think it's a very difficult question to answer whether they will actually experience emotion. I suspect not, although some people disagree with that, but will they be able to replicate emotion? Yes, but in a way that I think will still be less human than human, and I'll give you an example. I was using an AI model this morning to ask a question about something substantive.

 

And the AI came back to me and said, would you like me to put this in terms that a 16-year-old would understand, because I know you were on vacation with your son, smile emoji. I'm not on vacation with my son. I'm in my office here in Chicago, but I did ask a few weeks ago for some vacation plan ideas for two of my sons, and it gave them to me, and it assumed that I was on that vacation now and tried to be funny about it.

 

So, yes, it is going to try to replicate emotion, but right now, it's nowhere close, and I don't know that it will actually get to a point where it will be able to fool humans in such an intense way that we will use it instead of human beings for things that are important.

 

Sam Gandhi:

I don't know whether that's amazing or creepy. 

 

Dave Gordon:

Things can be both.

 

Sam Gandhi:

That's fair. As lawyers, we seem to hear all the time, almost every couple of weeks, about judges around the world contending with legal briefs that were generated with the help of AI, only to be submitted with errors such as citations to cases that don't exist, because somebody is just not doing the human backup work. In your work on the ground with clients, what are you seeing now as the greatest risks or liabilities in adopting generative AI, particularly in the legal arena?

 

Dave Gordon:

So, I'll start with the hallucinations, because that's where you started, and that's what people are most worried about. Drafting entire legal documents without a human in the loop is a huge error and a huge risk. You need to be able to verify the accuracy of something you're putting together, and it's one thing if it's marketing materials or an internal memo, which still should be accurate. It's quite another if you submit something to a court.

 

And several firms and several companies have already gotten caught using AI that hallucinated cases that don't even exist. It's important to understand why that is. The AI largely is trained to be sycophantic. It's not going to say, no, I can't help you, although some of the models are getting better, and you can tune them to be less so.

 

Sometimes when it doesn't have an answer, it creates one that doesn't actually exist. Now, I should say, anyone who used prior models, a year or two ago, and particularly the public versions of them rather than the enterprise closed systems that law firms use, is missing the boat a little bit, because the tech has gotten a lot better even in the last few weeks.

 

But it is very important, and judges will not excuse citing a case that doesn't exist or a fact that doesn't exist in a legal brief, because that's a representation to the court, and we take that seriously because it sounds a lot more like defrauding the court, even if it's not intentional. Now, what's super interesting about this is that judges themselves are starting to use AI, and a couple of them have been caught using it and hallucinating cases in their work.

 

So, we all should be careful, but what I'm sure of is that judges are super interested in this. I'll give you two examples. I appeared in front of a federal judge a few months ago for our first appearance in a case. Those appearances are usually pretty important for understanding what the case is about. The very first question the judge asked was, I want to understand if you're using AI in any of your submissions, and if you are, you need to tell me.

 

Now, that's a disclosure kind of rule, but it really is a deterrent because who's going to want to come in and say, yes, we're using AI without then explaining all the backstops that are there. Similarly, I ran into a judge two months ago on an airplane, who I'm friendly with, and once we got off, he waited for me because he wanted to talk for half an hour about how firms are using AI because he wanted to understand it better.

 

It's very much on their minds. So, that's the legal arena as far as my experience as a litigator. There are also lots of pre-litigation risks that we're advising clients on, and I'll just talk about a few. One is that some of the functionality of the AI has to do with summarization and recording, and there are lots of risks to that. For example, in some states, you need the consent of everybody to record something or to summarize it. Another is that the summaries aren't always accurate.

 

These are the hallucinations, and so, you're creating a record for discovery in the future that may be inaccurate, but no one's going to have the memory to dispute that. There are also privacy risks there about what the models are training on, where the data goes, and if you're not careful with terms and conditions, who's going to get access to that data?

 

And that leads to another risk, which is confidentiality and legal privilege, and there's a lot happening around that right now. And if you think about it, there's a big risk, and some courts are talking about this, of the AI itself being a stranger and therefore waiving privileges.

 

Because even if a lawyer's in the room, there's someone who's not in the privileged circle, and that, effectively, is what waives the privilege. So, there are a lot of risks to think about, and there's a lot of risk mitigation to be done long before litigation occurs.

 

Sam Gandhi:

Let's talk about the rapidly evolving technology that's being used. As you said, there seems to be a new version of these models every week or every day, and while we're used to adopting new technology on a regular basis for everyday tasks, this seems much more rapid. When you advise companies that are perhaps new to implementing AI technology, or new forms of this technology, what are the priority best practices that you share with them, given the risks you just illustrated?

 

Dave Gordon:

So, there are quite a few, and let me start by saying it's highly contextual. So, what I'm about to say is not a list that you can just run with. For example, the rapid development of the technology in the coding sphere has way outpaced a lot of the other use cases that we're seeing.

 

Many coders, for example, are out of work or having to retrain themselves because the AI is doing the coding, which, by the way, is making the AI even better. There was an article that came out last week that basically made this point: doing coding first has allowed AI to become stronger more quickly. But as a general matter, some of the things that clients should be thinking about are the following.

 

One, absolutely do not allow your people to use public versions of the AI for sensitive information or business information, because without a sandboxed enterprise system, you are giving away your corporate data to whoever the LLM company is. The terms of use say that, and anything sensitive is no longer under your control.

 

Second, have your people start to play with this and gain familiarity with the tools. It's really interesting to me: everyone has an opinion about AI, but not as many people are using it as you might think. The papers would tell you that everyone is using it all the time, but there's a wide range.

 

There are some very sophisticated users and some people who are still dipping their toes in the water. There is no substitute for using the technology to understand what it is and what it isn't and how it's going to work, whether it's on corporate information or planning your vacation, as I said.

 

Verify everything. Keep a human in the loop on everything that you do, because the AI is going to make mistakes, and the key insight here is that AI is going to make your quality better as long as you keep circling back and giving it feedback.

 

Relatedly, assume inaccurate output. The hallucination rates are going down, but at one point they were estimated to be 10% or 20%. I think they're getting to be less than that, but you can't assume it's all going to be right. The output is also going to be biased, and so, you need to check for that.

I was with a group of people a couple of years ago, and they decided to make a playlist based on the artists and the songs that they liked, kind of bringing it together in a diverse way. I put that playlist into AI at the time, and, again, it's gotten better since, but it decided that this group of people was only interested in '70s rock bands with all white musicians, and the inputs were not that. There was a bias inherent in the response.

 

Now, again, it's getting better, but you've got to be careful about that. Again, be careful with sensitive or confidential information, and reassess at all times. There are ways to do this through policies and training and otherwise, which I'm happy to get into, but those are some of the big-picture thoughts.

 

Sam Gandhi:

We're talking about using AI as if it's a choice, but in reality, it's not really a choice, right? If you Google something or you go into some search engine and you type something in, the first thing that shows up is the answer that's derived from the generative tool that the browser is using. 

 

Presumably, if you go on a music app, no matter which one you're using, recommendations are being generated by AI. Presumably, almost anything you use the internet for, or any choice you make with it, involves AI. Is there any way to avoid AI and avoid these biases, or are they already just built into our everyday life at this point?

 

Dave Gordon:

Well, they are built in, and there are a couple of interesting aspects to your question there. One is, yes, you can choose to avoid it. You just skip that part under Google and you go to the traditional results. I know people who are affirmatively doing that for reasons of convenience or distrust or ethics that we'll get to. 

 

The other thing is that using AI is actually not that different, in some of these respects, from using a search engine. The search engines themselves are not confidential. They remember what you do. They sometimes have agreements with your social media companies, and so ads pop up and things like that. That's very common, and we all expect that at this point.

 

And so, many of the dangers of AI are inherent in existing tools, but they're on steroids in some respects, because people are going to start to trust the AI output, since it's so much better. And so, we just have to remember what it is and what it isn't, so we think about the risks and don't just assume, because it looks a lot more like something we would put together, that it must be right or must have been checked.

 

Sam Gandhi:

We started the podcast with the view that three out of four companies are using AI for one task or another, and when you're advising boards, along with cybersecurity risk, I imagine generative AI is a large, large topic, at least it is for the boards that I talk to. So, when you advise a board, what do they talk about? Do they talk about AI being beneficial, the risks, etcetera, or all of the above? What are those discussions like?

 

Dave Gordon:

There's a huge range in this discussion, as you might imagine, but the biggest risk that high-level, sophisticated people see is the risk of not using it at all. People understand that this is the future, or at least a part of the future, and while quite different in function, it would be like people saying email is too dangerous or complicated, and we're not going to use it. So, I think the more sophisticated people understand that this is not just coming.

 

It's here, but we have to find the right way to use it. For the companies, the kinds of things that we're talking about with boards, and that board members and C-suite people are talking about, are, again, putting in written policies and making sure there's training. Most importantly, right now we're at an inflection point: trying to get people from where many of them are now, which is, let me use it when I've got a question, to truer adoption, putting it into our systems and finding ways to use it more systematically to advance the goals of the organization.

 

And so, those are the kinds of things that people are talking about. On the other side, as this develops, people are talking about AI-washing, basically overstating what the technology is being used for, and I think there may be quite a lot of that going on out there. When you think about the board members themselves, this is a very interesting question, too.

 

Because if you think about who's on a big public board, there are two things we know about those people for sure. One, they're very smart and accomplished, and two, they're very busy. And so, when these kinds of people are offered tools that will, for example, summarize the discussion at a two-hour board meeting, or summarize the big board packets that are posted on their internet sites, it's very tempting for them to use those, but they have duties to make sure they are fully informed before making decisions.

 

And so, thinking about making things easier for these people while they still do their jobs and actually add their value is the really critical tension there. Most importantly, and I think this is more broadly applicable, board members are there because they have experience, they have judgment, and they're bringing creativity based on their years of experience in a particular industry.

 

And that's the kind of thing that AI is not replacing and is not going to be able to replace for a long time, if ever. And so, board use of AI is a really interesting topic, right now, to make things easier so they can do things more quickly, to be a sounding board so that people can do it better, but not to substitute for their judgment.

 

Sam Gandhi:

Well, the regulators clearly are struggling to keep pace with AI, and because of the rapid advancement and adoption of AI, there's obvious tension between the allure of the technology and the concerns that you highlighted before, like the fear of surveillance.

 

Recently, for example, a congressional hearing, Building an AI-Ready America: Adopting AI at Work, revealed divisions between witnesses who argued existing employment laws are sufficient and those who said workers need stronger protections against surveillance. So, how do you see regulations and legislation taking shape in the U.S., as well as in Europe, which seems to diverge a bit from us?

 

Dave Gordon:

This is a complex question that's going to change rapidly over time. And so, I'm going to take a step back here and go back to my fundamental view, as I've said with boards, that when you think about change and what's coming, think about who the people are that are involved in that change, think about what their values are, and think about who benefits. 

 

And so, if you think through that lens, you would not be surprised to find that when you compare the EU to the U.S., at least the federal government, the EU is faster on regulation because it wants to be more regulatory and is much more concerned about privacy interests and confidentiality and other things, similar to the GDPR in that way. The federal government, on the other hand, is still working through this. There's no federal legislation that's passed. There have been some executive orders, both by President Biden and then by President Trump.

 

But in general, the direction seems to be, let's find a way to be innovation-friendly and also be dominant in the market, particularly as against China and other countries. And so, you have very different values there between Europe and the U.S. as to what is most important, and I think we will see legislation, as it progresses, adhere to those values.

 

Now, an important nuance to that is in our federalist society, not all states agree with where the federal government is going. And so, you see California, for example, again, no surprise when you think about their regulatory history in other ways, they have almost 20 AI laws on the books already, across a wide range of things. Texas is starting to get into this in a slightly different way.

 

Our own American Bar Association is getting into this as far as lawyer practice goes, and the California State Bar and the Illinois State Bar are issuing policy statements. So, I'm not going to predict what all of these entities are going to do, but I will say I will be surprised if they diverge from the core values they've shown in other instances and other contexts. I think they will just apply them to this context.

 

Sam Gandhi:

If you're interested in information on the energy industry, tune into the latest episode of Sidley's Accelerating Energy podcast, hosted by our partner, Ken Irvin. Ken was joined by Terence Healey of Sidley's Energy Practice and Todd Snitchler, President and CEO of EPSA. They discuss how regional transmission organizations are responding to large load growth, the implications of FERC's co-location order, the White House's intervention, and the expanding role of states in an increasingly stressed power market.

 

You can subscribe to Sidley's Accelerating Energy podcast wherever you get your podcasts. You're listening to The Sidley Podcast, and we're speaking with Dave Gordon, co-chair of Sidley's Artificial Intelligence Council, about who's using AI and why, the risks and best practices of deployment, and how regulation and legislation are taking shape across the U.S. and Europe.

 

We've talked about the benefits and some of the risks of AI in terms of its ordinary use, but some people view AI as nothing short of an existential threat to businesses, to people, and to our very existence. Sam Altman, CEO of OpenAI, famously said, “I think AI will sort of lead to the end of the world.” Is that overblown?

 

Dave Gordon:

It's a really dangerous question, and I suppose if I'm wrong, none of us will know about it, but I think it is overblown, and I think Sam Altman, in the article or the interview where he said that, acknowledged that it's unlikely that AI will develop human consciousness and nefariously destroy the world.

 

But his point, I think, was that it doesn't matter what its intent is. It matters how its programming runs, and so, the key insight here is we are early enough that we can put controls in place on this, and people are talking about this, such that it makes it much less likely.

 

And again, I think when Sam Altman talked about this, he put the percentages at an exceedingly low place, but as long as we keep humans in the loop and put controls in place, we can make sure that the models continue to get tuned and don't lead to these catastrophic outcomes. Part of the issue here is, I believe, that AI is not going to replace everything that we do.

 

And the people who are predicting the end of the world, I think, are predicting that you and I are going to end up on a beach somewhere, with all the other human beings, and the AI is going to do all the work for us, and then take over everything and decide we're not necessary. I don't think that's likely when you think about the value of AI. 

 

Similarly, and a little less dramatic, although still very important, is the prediction that people are going to be out of their jobs, or at least many people are going to be out of their jobs, and I think there's some truth to that, that some jobs are going to go away, but the central insight I have on that is that you have to think about the entire ecosystem. So, take law, for example.

 

I don't think senior lawyers are going to be replaced, for all the reasons we talked about with board members not being replaced, but I also think that to the extent there are lower-level tasks that AI can help with, the fact that AI can help with them and do them quickly is going to mean people are going to do more work. There are going to be more deals.

 

In the litigation space, for example, I know of clients that don't bring cases because it's not cost-effective to do it. There's going to be a revolution of pro se plaintiffs who use AI to draft complaints, and they're going to look a lot better than the handwritten complaints that I used to see when I clerked for a federal judge. And so, there's going to be a lot more work to do. Maybe an individual lawyer, for example, is going to spend less time on a particular case, but there are going to be more cases.

 

And again, as with everything and every revolution we've ever seen, there's a flight to quality: people who have a niche and know what they're doing will have things to do, as long as they keep up with the AI revolution. I do think that the people who say, I'm either going to do my job the way I've always done it, or I'm not going to do it at all, will be left behind.

 

But as with every other technological advance we've seen, AI is going to create jobs. It's going to create more product. It's going to create more services, and I think a lot of this end-of-the-world stuff is overstated, at least based on where we are right now.

 

Sam Gandhi:

We've talked about the ubiquity of AI, but there have been a lot of questions in terms of how accessible AI really is, and one of the ethical debates surrounding the use of AI, for example, is the gap between rich and poor. 

 

Senator Bernie Sanders recently said, in a talk with the godfather of AI, computer scientist Geoffrey Hinton, that it's not whether AI is good or bad, it's who controls it and who benefits from it, similar to what you said Sam Altman basically said earlier this year. But what happens to people who don't have the means, right now, to use AI?

 

Dave Gordon:

They're absolutely at a disadvantage, and this is true, again, with every other technological revolution we've had. When portable phones started to be used, they were used by the rich. They were super expensive. They were gigantic, and they sat in people's cars. Now, many, many more, and I'm not saying everyone, but many, many more people have phones and smartphones to begin with.

 

When you think about it, if you had said 50 years ago that lots of people, across all sorts of economic strata, would have very powerful computers sitting in their pockets, people would have said that's crazy, that it would always be kept just for the rich, and that turns out not to be true, in fact.

 

There is a lag, however, and that lag is what's important because AI is power. There's information to it. There's productivity to it, and I think Senator Sanders is right that the people who control it and who advance with it are going to enhance their power with it, and so, we do need to be thoughtful about that. 

 

There are a lot of other ethical questions that are coming up in this context, and people are very concerned about that, and I think as particularly this expands and people use it and see what it is, there will be a lot more debate about that, and that involves environmental considerations and also considerations about how people are using the AI as companions. There's been a lot of talk about that, and I think that's a very nuanced and potentially dangerous topic.

 

Sam Gandhi:

Given these types of ethical considerations, should we worry that AI is just unbridled, that it's poised for unlimited growth? I mean, I don't think we've ever seen a situation in the history of the world in which technology continues to evolve and for some reason we pull it back. It just continues to improve and make things more productive and more efficient, whether we like it or not.

 

Dave Gordon:

I think you're generally right. That doesn't typically happen. There are a couple of constraints here that might cause things to slow down. One is that the energy requirements, the data centers, are not unlimited, and a lot of people, particularly the next generation, are very concerned and talk quite a lot about how much energy AI uses. And so, there may be a limit to that energy, and some costs to using AI, as it rolls out to more and more people and more and more usage.

 

There also may be a political response if there is public intolerance for the errors that AI puts into a system. We talked about court filings with errors before, but what about coding errors that lead to blackouts or explosions or other things? I'm not predicting those will happen, but if one were to happen, we have seen this in the past, nuclear power being a great example.

 

Many people think nuclear power is a really useful source of energy, but when you have a couple of events, Chernobyl for example, it causes people to slow down, and that's one example where people have backed off a little bit. So, it kind of depends on responsible use. I think, at the end of the day, political will is the kind of thing that's really going to restrict or enhance this, and whether there's a backlash and people turn against it for one reason or another, we'll have to see.

 

But as long as it makes people's lives easier and causes their lives to be better, as much of the tech has been over the last 50 or 100 years, albeit net better, not better in every way, I think you're right that it's going to continue to grow and be more important.

 

Sam Gandhi:

Where do you see AI going from here? And I'm not talking about where this goes in 50 years, but like in the next year or two, where do you see AI going?

 

Dave Gordon:

I like how the question implies that two years is a short time horizon. It's actually a really long time horizon in this area. A couple of things. One, we're starting to see the move from generative AI to agentic AI, and what that means, and we've talked about this, Sam, is that people are going to have AI not only give them output, but then do something with that output, and we're starting to see that quite a lot.

 

And even in the last few weeks, a couple of the companies have released new models that allow much more agentic use of AI. And so, I think we are going to see that a lot over the next year or two, and there has to be some care with that, because when the AI gets to actually do something, you do risk problems occurring.
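
To illustrate the generative-to-agentic shift in code, here is a minimal, hypothetical sketch. Everything in it (call_model, send_email, the directive format, the action allowlist) is an illustrative stand-in rather than any real vendor API; the point is simply that an agent parses the model's output and acts on it, ideally with a human approving the action first.

```python
# Hypothetical sketch of generative vs. agentic AI: the system not only
# returns text, it parses that text and performs an action with it.
# `call_model`, `send_email`, and the directive format are stand-ins,
# not any real vendor API.
import shlex

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; pretend it returns an action directive.
    return 'send_email to="client@example.com" body="Draft attached for review."'

def send_email(to: str, body: str) -> None:
    print(f"[email to {to}] {body}")

ACTIONS = {"send_email": send_email}  # allowlist of what the agent may do

def run_agent(prompt: str) -> None:
    """Generative AI produces text; an agent also acts on it,
    here with a human approving the action first (human in the loop)."""
    output = call_model(prompt)
    name, _, arg_str = output.partition(" ")
    kwargs = dict(pair.split("=", 1) for pair in shlex.split(arg_str))
    if input(f"Approve {name} {kwargs}? [y/N] ").strip().lower() == "y":
        ACTIONS[name](**kwargs)  # only run allowlisted actions

run_agent("Email the client the draft brief.")
```

The approval prompt is the design point: agentic systems move value and risk together, so the control that matters is where a human signs off before the action runs.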

 

And my guess is the benefits will outweigh the problems by quite a lot, but people bring attention to the problems. The other thing that I think people will start to realize, and to some degree already are, is that AI is not primarily a tool for being more efficient. It does save time, that’s absolutely true.

 

But the real thing it does, as it gets better and hallucinates less, is allow us to be better at what we do. For example, I use AI quite often, not to draft a brief, because of all the risks that I've talked about before, but to draft the other side's brief, to brainstorm with it about possible counterarguments, to make sure I'm not missing something.

 

And I promise you, as of today, when thinking about a litigation matter where I have some specialization, I'm better at that than the AI. I may or may not be at the end of the day, but I'm certain that AI is going to come up with some ideas that I will not, and I will still come up with some ideas that the AI will not. So, it's the synthesis of the two that's going to yield better quality, as long as we keep a human in the loop, and the primary benefit here is going to be better work, not just faster work.

 

Sam Gandhi:

I'm going to end the podcast with one last question, which is, do you think there's an area of society that's going to be unaffected by AI?

 

Dave Gordon:

That's the easiest question you've asked. The answer is no.

 

Sam Gandhi:

But do you think AI is going to be able to fix your toilet, or come and put up drywall, or walk your dog, or things like that?

 

Dave Gordon:

So, do I think that AI alone will be able to do any of those things by itself? Probably not. But if the question is, will it affect all of those things? Absolutely, yes. And if you think about a long time horizon, there are all sorts of developments about robotics, and maybe we can do that for the 100th podcast, but Gen AI, plus robotics, plus human involvement, a lot of those things may be done in the future. I think we're a ways off from that, but I think that is coming.

 

Sam Gandhi:

We've been speaking with Sidley thought leader Dave Gordon about the ethical concerns surrounding AI, what its limitations may be, what clients are asking him, and what's in store for the future, both long term and near term. Dave, this has been a great look at the landscape. It's been a little scary as well. Thanks for sharing your thoughts and insights on the podcast.

 

Dave Gordon:

Yeah, always happy to have a conversation with you, Sam, about anything. This has been a lot of fun.

 

Sam Gandhi:

You've been listening to The Sidley Podcast. I'm Sam Gandhi. Our executive producer is John Metaxas, and our managing editor is Karen Tucker. Listen to more episodes at Sidley.com/SidleyPodcast and subscribe on Apple Podcasts, or wherever you get your podcasts.

This presentation has been prepared by Sidley Austin LLP and Affiliated Partnerships (the Firm) for informational purposes and is not legal advice. This information is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. All views and opinions expressed in this presentation are our own and you should not act upon this information without seeking advice from a lawyer licensed in your own jurisdiction. The Firm is not responsible for any errors or omissions in the content of this presentation or for damages arising from the use or performance of this presentation under any circumstances. Do not send us confidential information until you speak with one of our lawyers and receive our authorization to send that information to us. Providing information to the Firm will not create an attorney-client relationship in the absence of an express agreement by the Firm to create such a relationship, and will not prevent the Firm from representing someone else in connection with the matter in question or a related matter. The Firm makes no warranties, representations or claims of any kind concerning the information presented on or through this presentation. Attorney Advertising - Sidley Austin LLP, One South Dearborn, Chicago, IL 60603, +1 312 853 7000. Prior results do not guarantee a similar outcome.