Frances Haugen’s recent whistleblower testimony regarding Facebook will only stoke the fires of the battle heating up in courts and legislatures over provisions originally addressed in Section 230 of the Communications Decency Act. Limits placed by private internet companies on individuals and organizations have raised an alarm on both ends of the ideological spectrum.
Justice Clarence Thomas recently stated in a concurring decision that “we will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.” A second issue addressed by the provision—the liability of internet companies for illegal content posted by third-party providers—has until now received less attention but is potentially just as dangerous and will undoubtedly receive more attention after Haugen’s testimony before Congress.
After some reflection, most who approach such issues from a foundation of individual freedom recognize that private companies’ right to choose what and what not to post is very much in line with free markets and appropriately subject to the same rules of competition relied on elsewhere. Concerns about large entities controlling what information the public has access to are alleviated by the right to enter and compete in the market. Traditional avenues for content sharing, such as newspapers and broadcasters, have a long history of competition.
As for social media, the flurry of activity following the riot at the U.S. Capitol is a telling example of what can happen. Twitter, Facebook, Apple, Google, Amazon, Twitch, Snapchat, Reddit, Shopify, and TikTok each took action to eliminate President Donald Trump’s ability to use their platforms to post information. In response to these actions, alternatives emerged. Millions are reportedly turning to alternative sites such as Gab, MeWe, Telegram, and Discord. In general, any bias shown by one competitor creates an opportunity for others to fill the gap. Competition is a powerful force.
Some have also argued that decisions by internet providers to exclude content are a violation of free speech. They are not. Freedom of speech is guaranteed by the U.S. Constitution as a way of preventing the government from interfering with the rights of individuals and private firms to say what they wish and only what they wish. It is not a tool for requiring private entities to say or to disseminate even what they wish not to.
Often overlooked, but most alarming, is the question of the alternative. If the companies themselves don’t determine what content is biased and what isn’t, what content they believe to be accurate and what isn’t, who will? What is and isn’t acceptable to publish would have to be defined by someone, and that someone would probably be in the government. Whatever the costs are of having companies in control of content, having the government in control presents its own set of dangers.
Requiring platforms to publish whatever the government insists they publish could, for example, risk having the government itself use major media platforms as spokespersons or propagandists for whatever the government happens to favor. Is that what advocates of forced publication wish to promote? When compared to the alternatives, competition comes out looking pretty good as a way of regulating bias. Competition subjects decisions of bias to the judgment of the broad population in the form of the market, as opposed to the judgment of a single individual or small group of individuals (who might themselves be biased) at a regulatory agency.
A more difficult but equally problematic issue addressed by Section 230 is the immunity provided for internet companies for all content posted on their platforms. Holding companies and individuals accountable for actions that harm others is consistent with traditional views on liberty dating back to J.S. Mill’s argument in On Liberty: “that the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” However, not all who would normally subscribe to libertarian views believe it is appropriate in this case. A recent Cato Institute article defends Section 230’s immunity for internet companies, stating that “Section 230 leaves the responsibility for online posts with the appropriate agent: the [content provider].”
The costs and benefits of holding internet companies liable for the content they post must be assessed relative to the alternatives. Holding content providers responsible, rather than the internet company that provides the platform, means holding liable parties who may be private citizens merely commenting on a web posting. That approach relies on the regulator’s ability to monitor content providers and, as mentioned above, creates the possibility that the regulator will use that authority to suppress information that is damaging to the regulator itself. The regulator is not always going to be a disinterested third party. Despotism thrives on control over information.
Holding the internet company liable would still require the regulator to monitor content; however, the regulator’s job would be easier and less easily used for nefarious purposes. The regulator’s job would be easier because the internet company has an incentive to assist in the monitoring. The internet provider is likely able to do this more efficiently than the government can (more on that below). Internet companies already develop algorithms to monitor content and can be expected to do so more earnestly as the punishment for failure increases.
Further, and importantly, any violation of the law by the internet company is likely to end up in court, where the regulator will present its case and the internet company will have the opportunity to defend itself before a branch of government independent of the regulator. An individual content provider could also go to court under the existing law, but if their resources are more limited, as in the case of a private citizen posting controversial comments, they are less likely to do so. By consolidating the incentive to challenge a regulator’s allegations of illegal content within a single internet company, as opposed to a diffuse group of content providers, a law that holds internet companies liable for content decreases the potential for a corrupt regulator to restrict content simply because it is harmful to the regulator, even when the content is not illegal.
Holding internet companies responsible may be more efficient as well. Ronald Coase’s analysis of social cost gave rise to the idea that came to be known as the “least-cost avoider”: society is better off if the liability for an action is assigned to the party that can keep the costs of that action low at the least expense. What would be the cost of enforcing the law when only the content providers are liable, and how does it compare with the cost of enforcing the law when the internet company is liable? Internet companies will almost certainly prove more efficient than government regulators at developing mechanisms for monitoring content. They’re already doing this—both in the interests of making their platforms more attractive and doubtless out of fear of government regulation—with sophisticated programming designed to identify pornography or threatening material. If they are legally responsible for ensuring that illegal content isn’t published, they can be expected to develop ever more sophisticated methods for doing so.
Internet companies will bristle at the idea of being held liable for third-party content posted on their sites. That isn’t surprising, given the current deal they have—freedom to publish anything they like and power to restrict what they dislike, with little responsibility for any harm caused by content they permit. There is no doubt that if they are held liable, any given internet provider would have to be more selective about what gets published, or about how long illegal content remains on its site, and imposing liability could therefore reduce the amount of content each provider makes available to the public.
However, as long as other internet companies are allowed to enter the market, the total content would not necessarily be less—it could just be dispersed among more outlets. The benefit is that the amount of illegal information about individuals (e.g., libel), events (e.g., threats of violence), or organizations would be lower because the internet providers would have effectively been hired to monitor content.
In short, the same basic principles of private property, freedom of choice, and competition that are relied on in other product and service markets can be used in the market for information as well. And the same basic principles of efficient liability rules can best address the posting and dissemination of illegal content. Recent events, including the decisions by internet companies to eliminate President Trump’s ability to use their platforms to post information and the competitive response of alternative outlets, suggest that the marketplace is performing just as we would expect based on years of observing competitive behavior in product markets. It is an imperfect process that is better than any realistically achievable alternative.