What is Section 230?

Section 230 of the Communications Decency Act, passed in 1996, shields internet companies from legal liability for content posted by their users.

Let’s say a user posts something defamatory in the comment section of a website. The website operator is free to take down that post or leave it up. Either way, thanks to Section 230, the target of the defamatory comment can’t successfully sue the operator.

Like so much about the internet, we simply accept this state of affairs as a given. But it wasn’t always so. At Retro Report, in collaboration with Open Sourced by Vox, we made a video about the origins of Section 230. It features a crusading anti-porn senator from Nebraska and Bill Gates explaining how “hip” the internet is (in other words, peak-90s content).

But what you need to know right now is this: In the mid-90s, courts held that internet companies that moderated user content — to keep their sites family-friendly, say — were legally liable for anything their users posted. Conversely, if websites didn’t moderate their content — if they allowed their users to say whatever they wanted, however defamatory — the websites wouldn’t be liable, because they’d never established editorial control over the content.

Congress passed Section 230 to let internet companies off the legal hook so they’d be free to moderate. An entire industry — Facebook, Twitter, YouTube, even Google — grew on that legal foundation.

So what’s the problem?

Well, for one, Section 230 provides legal immunity without requiring any particular kind of content moderation. So, internet companies can kick back, not bother with moderation, and enjoy not being sued.

Consider Pornhub. After Nicholas Kristof of The New York Times reported that the platform included videos of rape and sexual assault of minors, and Mastercard and Visa stopped allowing their cards to be used on the site, Pornhub changed its policies and removed the vast majority of its content. But it isn’t legally liable for those videos: the victims can’t successfully sue the company. (One exception would be if the platform facilitated sex trafficking, a carve-out from Section 230 immunity created by a 2018 law known as SESTA-FOSTA.)

That’s why some critics think Section 230 should be revised so that internet companies, if they want immunity, would be required to make good-faith efforts to moderate content responsibly.

But would repealing the law help Democrats, who are worried about disinformation, or Republicans, who want platforms to be politically neutral in their content moderation decisions?

1) Well, it might not help Democrats, actually.

Disinformation and hate speech — kooky conspiracy theories, racist content — are what some legal experts call “awful but lawful.” Saying, for instance, that the Covid vaccine has a government microchip in it might be foolish, but it isn’t illegal. So even if internet companies were liable for such user content, they still couldn’t be sued for much of it.

2) And it would probably help Republicans even less.

Section 230 doesn’t require, as some Republicans claim, that internet companies be politically neutral, and repealing it wouldn’t necessarily make them neutral either. It could actually lead platforms to deactivate accounts that might expose them to legal liability.

But getting too wonky about Section 230 risks missing the bigger point about why Democrats and Republicans have both set their sights on it. They seem to be acknowledging, and fighting over, the point with which we started: the internet, our public square, isn’t public. It’s dominated by a few large companies whose decisions, about content moderation and a lot else, have significant consequences for our politics and our future.

So whatever comes of the debate over Section 230, it’s not going to be the end of the debate over Big Tech.

Learn something new from history: Subscribe to our newsletter, and follow us on Twitter @RetroReport.