The debate continues over whether companies can be held responsible for third-party posts made on their internet-based platforms, only now there’s a new twist. What about content generated by artificial intelligence (AI)? How much liability, if any, should companies bear for AI-generated content that appears on their sites? The No Section 230 Immunity for AI Act, first introduced in June, may soon give us an answer.
Should Companies Be Liable for AI Content?
Section 230 is part of the Communications Decency Act of 1996 and is designed to protect companies from being sued over content posted by third parties. For example, X cannot be sued for content a user posts, nor is it liable for failing to remove harmful content. Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) want to narrow that protection with their proposal, arguing that extending Section 230 immunity to AI would shield tech companies from being held accountable for providing harmful information.
“We can’t make the same mistakes with generative AI as we did with Big Tech on Section 230,” Hawley warned. “When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality.”
“AI companies should be forced to take responsibility for business decisions as they’re developing products – without any Section 230 legal shield,” Blumenthal argued. “This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era.”
Section 230 has faced considerable controversy over the past decade or more as the internet has grown into an integral part of society. Some argue that the statute is the internet’s foundation and that altering or repealing it would stifle free speech. Lawmakers want to amend it so that Big Tech can be held accountable for user-generated content, and now for AI-generated information as well.
R Street Institute Senior Fellow Adam Thierer told the Daily Caller that conservatives should not advocate removing Section 230 because “it will come back to haunt them in the future.” He continued:
“Some conservatives keep pushing to revoke Section 230 as a way to lash out against tech companies that they feel are biased against their perspectives. But this is a misguided strategy, especially when it comes from supporters of Donald Trump, a man who owes his success to his ability to evade traditional media outlets and use new digital platforms to build a massive following and win the White House.”
One of the problems with the proposed AI bill is that it’s too vague, according to skeptics. Jared Schroeder, a professor of media law at the University of Missouri School of Journalism, said: “If this bill were passed, what would not be protected here? The entire functioning of the social media world is algorithms. Algorithms are not technically AI, but algorithms are used to create AI.”
Now Hawley is pushing for unanimous approval of his bill, which Sen. Ted Cruz (R-TX) said would bypass the committee hearings to which legislation is normally subject. “AI is an incredibly important area of innovation,” he remarked. “Simply unleashing trial lawyers to sue the living daylights out of every technology company for AI, I don’t think that’s prudent policy.”
Another concern among lawmakers is that if the US stifles artificial intelligence, the country could fall behind others, especially China, in AI advances. Cruz said:
“It would be bad for America if China became dominant in AI. Right now, the $38 billion that was invested this past year in American AI companies is more than 14 times the investment of Chinese AI companies. We need to keep that differential. We need to make sure that America is leading the AI revolution.”
“We have seen what [Big Tech companies] do with their subsidy from government when it comes to social media […] [Big Tech companies] censor the living daylights out of anyone they don’t like […] This government protects [Big Tech].” Hawley posted on his website. “[This bill] just says that these huge companies can be liable like any other company—no special protections from government […] It just breaks up the Big Government, Big Tech cartel. That’s all it does, and it says parents can go into court, same terms as anybody else, and make their case.”
AI makes what has always been something of a gray area even murkier: Is social media a publisher, distributor, or provider? As it stands, it falls under internet service provider (ISP) liability, which means Facebook, X, and other social media platforms have broad immunity for content on their sites. The Bipartisan Policy website defines the three types of legal liability for content providers as:
Publisher: Includes newspapers, books, magazines, and advertisements.
Liability: “Publishers face strict liability for illegal content or defamatory statements that they display. Publishers can be held just as responsible as the original authors because publishers make decisions over what content goes into their products and with ‘editorial control’ comes increased liability.”
Distributor: Includes newsstands, bookstores, libraries, and retailers.
Liability: “Distributors are generally not held liable for the material they deliver or transmit because they have little control over what they republish. If distributors had to screen every publication before it’s sold or displayed it would result in excessive censorship.”
ISPs: Includes search engines, websites, social media platforms, and online marketplaces.
Liability: “ISPs are broadly immune from liability for third-party content… ISPs are granted immunity regardless of whether they blocked or removed content, minus exceptions for illegal content… Given the internet’s scale and volume of content, Section 230 ensures platforms aren’t overburdened with content-takedown responsibilities.”
However, all of these definitions, and Section 230 itself, were written when the internet was much different – when the law’s chief aim was shielding companies from liability for what human users posted in chat rooms. Some argue that social media platforms like Facebook and X should fall under publisher liability because they do censor and remove posts, while others say doing so would create even more censorship. Now the question is how to account for AI – something no one considered when the rules were being written.