The Sidley Podcast

The Legal Battles Taking Shape in the Clash Over Internet Content

August 21, 2024

A federal law known as Section 230 has provided a powerful legal shield for internet companies for nearly three decades. Designed to “promote the internet,” it protects platforms from civil liability for content posted to their sites by third parties. 

 But the measure is inspiring lawsuits from plaintiffs who say it allows internet companies to escape accountability for harmful content. With the Supreme Court once again refusing to rule on the section’s validity this term, and Congress on a bipartisan quest to reform it, issues involving the First Amendment, child safety and technology innovation are very much in play. 

 What’s behind the backlash aimed at online platforms? And what’s on the legal horizon for the way they control content? 

 Join The Sidley Podcast host and Sidley partner, Sam Gandhi, as he speaks with two of the firm’s thought leaders on these issues — Randi Singer and Michael Borden. Randi is a partner in Sidley’s Commercial Litigation and Disputes and IP Litigation practices. Michael is head of Sidley’s Government Strategies group and a partner in the firm’s White Collar Defense and Investigations, Global Arbitration, Trade and Advocacy, and Crisis Management and Strategic Response practices. 

 Together, they discuss the social media cases that have cropped up involving content moderation, and what has inspired those cases from a legal, business and cultural perspective.

Executive Producer: John Metaxas, WallStreetNorth Communications, Inc.

Sam Gandhi:
A federal law known as Section 230 has provided a powerful legal shield for internet companies for nearly three decades. Designed to promote the internet, it protects platforms from civil liability for content posted to their sites by third parties, but the measure is inspiring lawsuits from plaintiffs who say it allows internet companies to escape accountability for harmful content. 

With the Supreme Court this term once again refusing to rule on the section’s validity, and Congress on a bipartisan quest to reform it, issues involving the First Amendment, child safety, and technology innovation are very much in play.

Randi Singer:
These platforms are private companies with their own First Amendment interests. So, if you try to tell them what content they have to leave up or what content they have to remove, then you’re impinging on the First Amendment rights of the platform.

Sam Gandhi:
That’s Randi Singer, a partner in Sidley’s Commercial Litigation and Disputes and IP Litigation practices.

Michael Borden:
There is a consensus that Section 230 needs to be changed, but when you talk to Democrats and you talk to Republicans, their concerns about Section 230 come from very different places.

Sam Gandhi: 
And that’s Michael Borden, head of Sidley’s Government Strategies group and partner in the firm’s White Collar Defense and Investigations, Global Arbitration, Trade and Advocacy, and Crisis Management and Strategic Response practices. What’s behind the backlash aimed at online platforms, and what’s on the legal horizon for the way they control content? We’ll find out in today’s podcast.

Sam Gandhi:
From the international law firm, Sidley Austin, this is The Sidley Podcast, where we tackle cutting-edge issues in the law and put them in perspective for business people today. I’m Sam Gandhi.

Hello, and welcome to this edition of The Sidley Podcast, episode number 43. Randi, Michael, great to have you both on the podcast today.

Randi Singer:
Marvelous to be here, Sam. Thanks.

Michael Borden:
Thanks, Sam. It’s great to be here again.

Sam Gandhi:
Section 230 has been in the news a lot. Last year, the U.S. Supreme Court had the opportunity to examine the scope of Section 230 but punted, and then in June, the court had another opportunity in a dispute involving Snapchat but declined to take the case. Congressional proposals to amend or abolish it abound, and defenders of 230 say it’s allowed the internet to flourish. Randi, if you could just break this down for everybody, first of all, what is Section 230?

Randi Singer:
That’s a really important question, Sam, and I think origin stories are really critical, and I think sometimes the origin story of Section 230 gets a little bit lost here. It was enacted in 1996 as Section 230 of the Communications Decency Act. Actually, Section 230 was part of a much larger law that was originally about protecting children, and it imposed liability for transmitting obscene material to minors.

And it required filtering software in libraries, that kind of thing. At the time, the online community was mostly outraged by it, and people really hated the law, which I think is everything old is new again, because that really resonates with what we’re talking about today. 

Most of the original Communications Decency Act was struck down by the Supreme Court in a case called Reno v. ACLU in 1997, but Section 230 remained, and Section 230 is actually titled Protection for Good Samaritan Blocking and Screening of Offensive Material. 

It was added because there was a New York State case in 1995 in which Prodigy, which was one of the original online services, was held liable for defamation because one of its users had posted some defamatory material on its online bulletin board.

And Prodigy was doing some content moderation. It took down some of the defamatory posts, but not all of them, and so, the court held that Prodigy was liable for anything that it left up on its services. Section 230 was a reaction to that, and it provides immunity to interactive computer services so that they’re not treated as the publisher or speaker of content provided by somebody else.

In other words, they have immunity when they leave up content that somebody else has posted, when another speaker has posted. It also provides them immunity when they take things down in good faith, which allows them to basically moderate the content and take down hate speech, misinformation, porn, all kinds of objectionable content.

It’s important to note that it’s not carte blanche immunity. There is no immunity for criminal activity, no immunity for IP infringement, and no immunity when there’s participation in sex trafficking or a variety of other things, and importantly, also, this only applies with respect to speech by another party that’s posted on the platform.

If the platform, itself, is creating the content, then the platform can be liable for that content, and you saw that in a case involving the speed filter that Snapchat established a couple of years ago. 

The theory of the case was that users were encouraged to go really fast, and when Snapchat was sued, the court said, look, that wasn’t something that some other party posted on Snapchat. That was a filter that Snapchat, itself, implemented, and therefore, Snapchat could be liable for that.

Sam Gandhi:
Just to focus on that, when you talk about platforms, does this only apply to internet platforms? Can this apply to print, somebody putting out a newsletter and handing it out on the corner that includes other content? Where’s the limit, and where does the law end in terms of its applicability?

Randi Singer:
The law talks about interactive computer services, and it’s gotten a pretty broad definition from courts. It can cover everything from what we traditionally think of as social media platforms these days to, most recently, a case involving an emissions defeat device, where you could buy the device, connect it to the internet, and buy what they called tunes that went to your device.

And they got sued by the EPA under the Clean Air Act, and the district court found that because the device connected to the internet and was interactive, it met the definition of an interactive computer service under Section 230 of the CDA. So, it’s pretty broad, and I think we’re still exploring what the limits are.

Sam Gandhi:
Michael, we talk a lot about the fact that this is not a liberal, or conservative, or Democratic, or Republican issue. It seems to be fairly bipartisan in terms of the larger culture, but what’s behind the growing discontent with Section 230, and what inspired what we would think of as a growing bipartisan backlash around the country?

Michael Borden:
There is a consensus that Section 230 needs to be changed, but when you talk to Democrats and you talk to Republicans, their concerns about Section 230 come from very different places. The Democrats have been concerned that Section 230 has prevented significant and meaningful content moderation, and they want to see the platforms and social media spend more time engaged in meaningful content moderation targeting hate speech and misinformation.

Republicans, on the other hand, are also frustrated with Big Tech and with social media, but the Republicans’ concerns about Section 230 come from a different place. They want to apply the First Amendment to social media sites, even though they’re private actors. Donald Trump, for example, has been incredibly critical of social media for engaging in what he calls selective censorship.

Ted Cruz, in a public hearing with the CEOs of some of the largest social media platforms, called them the greatest threat to the First Amendment that the country faces. And so, Republicans are concerned that social media sites have a liberal bias, and they’re overly flagging, deplatforming, and discriminating against conservative voices.

So, when people talk about reforming Section 230, everyone agrees that Section 230 needs to be reformed. However, if you dig just a little bit deeper, you find that Republicans and Democrats want to reform very different things.

Sam Gandhi:
Randi, we understand where Congress may or may not be landing, but where did the Supreme Court land on this?

Randi Singer:
It’s interesting. I would actually argue the fact that no one’s happy means that it’s working exactly the way it’s intended to. But with the Supreme Court, you’ve seen a couple of places where it’s had the opportunity to act, and you’ve seen a number of dissents from denials of cert in Section 230 cases, particularly from Justice Thomas, and more recently, from Justice Gorsuch as well, talking about the need to reform Section 230.

But the few times it’s actually been before the Supreme Court, the Supreme Court has kind of sidestepped a little bit. In last year’s term, there was a case, Gonzalez v. Google, and it involved allegations that social media platforms were liable for terrorist attacks because the platforms allegedly surface terrorist content via their algorithms.

And so, a number of different plaintiffs in two companion cases sued under the Anti-Terrorism Act. Google actually asserted a Section 230 defense, but the Supreme Court never reached it because it decided, in a companion case, that the plaintiffs had actually failed to state a claim under the Anti-Terrorism Act because they failed to allege that the platforms substantially assisted, which is an element of the Anti-Terrorism Act.

This year, it was not squarely before the Supreme Court, but it was talked about, a lot, in the NetChoice cases. The NetChoice cases involved social media laws passed by the state of Texas and the state of Florida, and essentially, the Florida law restricted platforms’ ability to moderate content. It basically said you can’t deplatform certain people, you can’t censor, you can’t shadow ban their content, particularly journalists, and politicians, and political candidates.

And it also had a provision that you have to disclose the standards for content moderation and provide notice and an explanation when there are removal decisions. That went up, and the Eleventh Circuit upheld a preliminary injunction, saying the law likely violated the First Amendment.

Texas passed a very similar law that said you can’t censor content based on viewpoint, which sounds terrific, and it also included provisions saying platforms need to disclose and explain removal decisions and give an opportunity to appeal. The district court in Texas also enjoined that law as unconstitutional, but the Fifth Circuit reversed.

And so, the circuit split went up to the Supreme Court. The decision was issued on the very last day of the term, and the majority vacated both judgments, saying neither court had conducted the proper First Amendment analysis, and then the majority opinion went on to lay out exactly how that First Amendment analysis should go.

And as part of that analysis, the important piece is that the Supreme Court noted that ordering a party to provide a forum for somebody else’s views can be a First Amendment violation, and the important takeaway here is that these platforms are private companies with their own First Amendment interests.

So, if you try to tell them what content they have to leave up or what content they have to remove, then you’re impinging on the First Amendment rights of the platform. The majority analogized them to newspapers, cable operators, or parade organizers, parties with their own First Amendment rights.

What I think is really interesting is that came out on July 1, and the very next day, the Supreme Court denied cert in another case against Snap, called Doe v. Snap, which involved allegations that the app was used to groom teenagers. And again, you see Justice Thomas and Justice Gorsuch writing a dissent, saying it was very important for the court to consider Section 230 and that they would have taken that case so the court could look at it.

So, even though the majority has declined to look at Section 230 when it’s had the opportunity, you definitely see an ongoing appetite. And one of these days, the court is going to take one of these cases, I think.

Sam Gandhi:
Do you think they’re really going to do that, or are they just sending a signal that, hey, Congress, you’ve just got to get your act together, we’re not going to help you out here?

Randi Singer:
You know, it’s really interesting. There was a hearing a couple of weeks ago on a proposal to sunset Section 230, and the notes for that hearing say they didn’t really want to sunset Section 230. They really wanted to provide a deadline. 

So, I think the court may be coming from the same place. You have a lot of people saying something needs to be done, we’re giving you lots of opportunities, Congress, to do something about it, and if nothing gets done, then we will be forced to act.

Sam Gandhi:
If you’re interested in more information on the energy industry, tune into the next episode of Sidley’s Accelerating Energy podcast, hosted by our partner, Ken Irvin. You can subscribe to Sidley’s Accelerating Energy podcast wherever you get your podcasts.

And now, you’re listening to The Sidley Podcast, and we’re speaking with Sidley partners, Randi Singer and Michael Borden, about the social media cases that have cropped up involving content moderation and what inspired them from a legal, business, and cultural perspective. 

Michael, let’s come back to this last conversation we had about whether the Supreme Court is signaling to Congress that they need to be doing something and not the court. In May, Congress held a hearing to discuss legislation to sunset Section 230. What’s the background on that, and how do you see it playing out?

Michael Borden:
So, the leaders of the House Committee on Energy and Commerce, which is the committee with jurisdiction over the Communications Decency Act and Section 230, have wanted to send a signal to Big Tech that they’re serious about trying to engage in ways to improve the law, and what they found is that many on the other side, the people who support Section 230 as is, have no interest in engaging with a Congress that hasn’t reached a consensus on the topic.

And so, what the leaders of the committee are trying to do is force Big Tech to the table. They’re trying to force them to negotiate and figure out new contours for Section 230 by suggesting that if they don’t come and negotiate in good faith over the next year and a half, Congress will eliminate Section 230 entirely.

Unfortunately, no one really believes that’s a realistic outcome, and because no one sees it as a realistic threat, no one is yet coming to the table. And so, while Congress has announced that they want to act, as I mentioned before, it’s not clear how they want to act, and as Randi’s described, this is a very difficult question, and content moderation is an extraordinarily difficult thing to do.

And so, how do you limit Section 230 without killing a lot of the useful communication that we actually currently have and enjoy on the internet? And that’s what Congress is grappling with. Now, Congress, lately, hasn’t been great at dealing with big and complicated issues, and that’s why they’re trying, here, to impose or self-impose an artificial deadline for fixing Section 230.

I just don’t think it’s likely to work, because we have seen dozens of bills introduced over the last five to 10 years to modify or amend Section 230, and they often have two very distinct goals. One is limiting Section 230 immunity for hosting another’s content, with the goal being to incentivize sites to take down harmful content.

The other would be to limit Section 230 immunity for restricting content, seeking to incentivize the hosting of lawful content, and each of those has slightly different nuances. For example, in the former, it’s really a question about general hosting practices. Like, will you allow for some additional liability if the site promoted the challenged content through a personalized algorithm? That’s a big question that people are really grappling with. 

And on the other hand, if you’re seeking to incentivize hosting of lawful content, they’re talking about potentially removing the general otherwise objectionable category from Section 230, or limiting immunity to decisions that restrict content in a viewpoint-neutral manner.

So, these are big, tough questions, and even if you do all of those things, you’re still going to have a regime where some content is allowed and some objectionable content gets through, and other permissible content gets screened, and people continue to be upset. We have these big platforms with lots of communication happening, and it’s just a very difficult issue to get right.

Sam Gandhi:
Randi, let me come back to the topic we talked a little bit about earlier. Why is the internet different in terms of free speech? What’s the difference between speech online and the example that I used before, somebody just passing out a newsletter in the town square, holding up a sign in the town square? Why is the internet, as a platform, different for this purpose?

Randi Singer:
I’m not sure the internet as a platform is necessarily different, but what’s different in what we’re talking about with Section 230 and with all of these cases, is that you’re talking about private companies with their own First Amendment rights here, and so, you’re talking about laws that, depending on which side you’re on, abridge the free speech of these private actors.

So, you sometimes see platforms described as the new public square. I think that’s not quite accurate because there’s lots of ways you can access the internet without going through a particular social media platform. I’ve seen it described as it’s not the public square, it’s the bar down the street from the public square, where a group of like-minded individuals are gathering.

There was a Reddit comment a couple of years ago, it’s more like open-mic night at the improv, which I really like. It isn’t necessarily the internet that makes the difference. The difference is you’re trying to regulate these private companies, and they’re not common carriers. They’re not public utilities. It’s not a matter of the phone company.

You don’t think of the phone company necessarily as having a viewpoint, but I think we could all agree that you go to a different social media platform expecting something different. I mean, we’re starting to see, you know, to pick on Parler, which is mostly out of business, you went there for a different reason, expecting different content, than when you go on TikTok.

I mean, I think we can agree that there are viewpoints expressed by these platforms and by who goes there, and I don’t think you can force private companies to give a platform to viewpoints that they disagree with. And I think there’s also a problem here because, as Michael said, content moderation is really hard for these companies.

And I think one of the places where the internet is different than real life is that there’s a certain amount of anonymity on the internet, and people can hide. That anonymity can be a bad thing, because it gives people the freedom to express viewpoints that they might not be willing to express in real life, in terms of defamatory content, or hate speech, or various things.

But it can also be a really good thing. What comes up a lot when you’re talking about these laws, in particular, is LGBTQ+ speech, and teenagers who are looking for more information about things that they might actually be in danger seeking out in real life.

But the anonymity of the internet enables them to get access to information and speech and share viewpoints that they might actually not be safe in real life expressing. So, I think there’s pros and cons to it, and I’m not sure you can simplify it to just say, oh, it’s different on the internet.

Michael Borden:
Sam, this is an issue that Congress is really grappling with. It’s who is going to be targeted and who is going to be subject to the liability? Are there going to just be blanket reforms to Section 230 so that all interactive computer services are going to be impacted by this? Or are they going to do this in a more targeted way, to go after the public square, or as Randi called it, the bar next to the square? 

Is it just going to be the larger services or social media companies, or is it going to go down to the individual level? These are really tough questions, and in spite of Congress’…or in spite of the House Committee on Energy and Commerce’s self-imposed deadline, I don’t think that they’re ready or they’ve reached a conclusion at this point.

Sam Gandhi:
One thing that we didn’t talk about much was something that the court alluded to in the NetChoice cases, where they likened these platforms to newspapers. Why isn’t that the simple answer? Newspapers moderate content all the time. 

Sometimes they get sued for it, but they’ve got ostensibly journalistic standards that they adhere to when they create those parameters. Are internet platforms and social media platforms just too big to really impose those kinds of parameters on? Is that what makes it so different?

Randi Singer:
I think that’s one of the things that makes it different, but I think another is that the newspapers have their editorial standards, and most of the content that they’re publishing is content that they are sponsoring. They’re the speaker of a lot of it, whereas the social media or the online platforms, it’s content by third parties. And they’re trying to moderate it, to some extent, and even curate it, to some extent, but they are not the speakers, in most cases.

Sam Gandhi:
Given that the three of us are having a hard time pinning down what it really should apply to, what do people think about overreach? And Randi, you speak to a lot of social media companies and platforms. What do they say to those who see any kind of amendment to 230, or interpretation of 230, as overreaching?

Randi Singer:
I would say two things to those people. The first thing is be careful what you wish for, because to go back to the point we were talking about earlier and how important origin stories are, remember that 230 came up as the answer to a court case where the platform was liable for all of the content that was on the platform, if it did any sort of moderation.

So, in that world, the platform has two choices. You can do no moderation at all, which leaves up all kinds of hate speech, and pornography, and disinformation, and things that glorify gender-based violence, and you can come up with all kinds of examples there. Or you can take down anything you think might be objectionable, which is going to impinge on a lot of speech.

I think, generally, under First Amendment jurisprudence, the cure for harmful speech is usually more speech, not less speech, and so, you’re leaving platforms with a Hobson’s choice here if you take away Section 230 entirely. And when you start to play with the balance of it in terms of overreach, you’re really imposing a viewpoint on the platform, and I think you have different platforms that moderate very differently.

It’s very obvious in the differences in the moderation of Twitter versus X. I mean I think you can very clearly see, there, that there is a viewpoint to the platform and that it does change, depending on the content moderation policies. 

I think the second thing I would say is beware of the law of unintended consequences, because I think the other problem here is you have to remember that Section 230 provides immunity, which means that it is something that can be asserted as part of a motion to dismiss.

And so, a lot of the cases that are brought against platforms would probably fail anyway, whether because they can’t meet all the elements of the law they’re brought under, because there’s no harm, because of First Amendment concerns, or for various other reasons.

So, a lot of these lawsuits would not be successful, but without 230 or without kind of the breadth of 230 as it currently exists, you have to litigate these cases to get there. You can’t knock them out on a motion to dismiss, and it’s very cheap and easy to sue. 

It’s very expensive and hard to litigate something. So, if you can’t knock it out on a motion to dismiss, I think you’re disproportionately disadvantaging smaller platforms that don’t have the resources to litigate these kinds of cases.

And there’s lots of examples of companies that win cases but go out of business due to the cost of litigation, and so, if the goal, and I think the expressed goal in a lot of cases is to hobble Big Tech in some way, getting rid of 230 or curtailing 230 severely is not the right way to go about it.

Michael Borden:
Sam, in a case of no good deed going unpunished, we should look at a recent example of when Congress took action to limit the scope of Section 230 immunity. Congress passed a law called FOSTA, the Fight Online Sex Trafficking Act, and FOSTA was designed to prevent sex trafficking that was occurring online, on sites that had been defending themselves or shielding themselves from liability with Section 230.

But so far, there has been no evidence at all that FOSTA has reduced sex trafficking in any meaningful way, and some would say it’s actually perhaps made it more dangerous for sex workers and that sex trafficking has gone into even darker corners of the internet, where people are exploited even more viciously and terribly. And so, you have to be careful that even though everyone wants to prevent sex trafficking, sometimes the solutions don’t solve the problem.

Sam Gandhi:
One of the things we didn’t talk about is the international effect here, and the fact that the landscape outside the United States is different. Does that really make a difference in this world, with geo-blocking, where the First Amendment doesn’t apply outside the United States? 

There’s no reason why the EU or certain other countries…certainly, we’ve seen a lot more censorship, whether it’s the Middle East or in Asia, than we would see in the West. What’s the impact there, and does it not really matter what we end up doing to the extent that there’s applicable laws that apply there?

Randi Singer:
I think there’s a couple of things there. I mean, certainly, in some countries there is a higher degree of censorship, and you see either actions against the platforms or just flat-out bans of the platforms in those countries. In other countries, certainly, Germany, for example, has some very stringent anti-Nazi speech laws, and you know, I hesitate to call those censorship.

So, I think we have to be careful in how we frame things, but I do think it matters a lot, first, because most of these platforms are based in the U.S., a lot of the speech is in the U.S., and it is possible to geo-block outside, and second, because, in some ways, we don’t want to go to the lowest common denominator here.

I mean, I think what we’re trying to avoid in either eliminating or curtailing Section 230 too much is going to a world where you’re so scared about what’s on your platform that you take down all the speech, and I think without some protections in the U.S. and some ability to leave things up, then you do risk going to that lowest common denominator.

But again, a lot of these things disadvantage smaller platforms that don’t have the resources to geo-block, or to make these kinds of distinctions, or to moderate differently depending on which country you’re talking about or what kinds of speech you’re talking about, because to try to be a master of all law, in every single country in the world, is an impossible task.

Michael Borden:
I think that to even try to do that seemingly flies in the face of a fundamental, bedrock American value, which is the right to free speech. And so, in this entire debate and conversation, we’re talking about figuring out ways to limit speech, and that’s what we’d still have to grapple with.

And so, if we were to take on some of these practices that have been adopted by foreign countries, we are, perhaps, sacrificing what has made America unique, and so, I think that it’s very difficult for us to just say let’s make social media and online communities safe spaces where we regulate what can and cannot be said. 

The question, though, is whether or not we’ve allowed these places to become dangerous, and so dangerous that we have to protect the people who are using them and who are on them. And so, it’s not ultimately, I think, Sam, about geo-fencing or limiting speech. It’s just about making sure that people are still able to express their point of view but to do it in a way that doesn’t hurt any other people.

Randi Singer:
And I think you’re seeing a lot of state laws. In fact, last week, a bill called KOSA, the Kids Online Safety Act, passed the Senate but looks like it’s not going to make it out of the House. So, you can get to a place where people maybe agree we should be doing something more to protect children, but nobody can really agree on how to do it.

And you keep coming back to the law of unintended consequences, I think, because in order to "protect children," you have to do some sort of age verification, so you can figure out who it is, and I think that’s akin to what you’re talking about with some of the geo-fencing. You lose some of your anonymity when you do that, and especially with respect to age verification. 

You raise a whole host of other privacy issues because then you’re making platforms responsible for checking government ID, or something like that, and storing all of that, which is its own set of issues.

But anything that you do in this space to kind of curtail things really can set off waves in the ocean, right, in terms of going against the core value of free speech and curtailing things way beyond what it’s intended to protect against, whether it’s political speech against a particular candidate, in a particular country, or whether it’s something with children.

Sam Gandhi:
We’re going to wrap up the podcast. Let me start with you, Michael, in terms of what you’re hearing from clients regarding Section 230 and social media issues, more broadly, and then I’ll go to Randi.

Michael Borden:
My clients have been particularly concerned about the way that Congress is treating social media platforms. They want to blame Big Tech, and they’re using Section 230 as a blunt hammer to bash them, and it’s had the effect of forcing executives to come and testify and be berated on Capitol Hill by angry lawmakers. 

They’re concerned not just about those types of appearances, but about what the future of their platforms is really going to look like. They’re concerned that a congressional overreaction could lead to a fundamental transformation of the internet. 

Now, while they generally believe that any real, meaningful change to Section 230 is still a ways away, they are concerned that some sort of outside event could occur that creates momentum for a legislative deal that fundamentally transforms the internet, changes their way of doing business, and changes the internet as we know it.

Sam Gandhi:
Randi, you get the last word. What are your clients telling you?

Randi Singer:
I think a lot of the same things. I mean, Section 230, one of the things it was designed to do was allow the internet to flourish and allow the internet to develop and people to connect with each other and have this whole other virtual world. And I think you saw, during the pandemic, how important some of these connections were and how important some of this technology is.

And with Section 230, I don’t think its usefulness is at an end, and I agree with Michael that it is part of a larger attack on Big Tech, which I think really raises the point that there are a lot of issues you can avoid dealing with if you blame them on Big Tech and seek to curtail Big Tech, without actually solving some of the problems that are giving rise to these concerns.

Sam Gandhi:
We’ve been speaking with Sidley partners Randi Singer and Michael Borden about Section 230 and the controversy surrounding the push to limit its power to protect online platforms. Randi and Michael, this has been a great look at the landscape regarding Section 230 and social media more broadly. Thanks for sharing your insights on the podcast today.

Randi Singer:
Thanks for having us, Sam.

Michael Borden:
Thanks, Sam.

Sam Gandhi:
You’ve been listening to The Sidley Podcast. I’m Sam Gandhi. Our executive producer is John Metaxas, and our managing editor is Karen Tucker. Listen to more episodes at Sidley.com/SidleyPodcast, and subscribe on Apple Podcasts, or wherever you get your podcasts.



This presentation has been prepared by Sidley Austin LLP and Affiliated Partnerships (the Firm) for informational purposes and is not legal advice. This information is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. All views and opinions expressed in this presentation are our own and you should not act upon this information without seeking advice from a lawyer licensed in your own jurisdiction. The Firm is not responsible for any errors or omissions in the content of this presentation or for damages arising from the use or performance of this presentation under any circumstances. Do not send us confidential information until you speak with one of our lawyers and receive our authorization to send that information to us. Providing information to the Firm will not create an attorney-client relationship in the absence of an express agreement by the Firm to create such a relationship, and will not prevent the Firm from representing someone else in connection with the matter in question or a related matter. The Firm makes no warranties, representations or claims of any kind concerning the information presented on or through this presentation. Attorney Advertising - Sidley Austin LLP, One South Dearborn, Chicago, IL 60603, +1 312 853 7000. Prior results do not guarantee a similar outcome.