There is a compelling reason that the Supreme Court has repeatedly ruled that falsehoods are protected speech. The Court openly acknowledges that falsehoods can be harmful, sometimes quite harmful, but it also recognizes that government efforts to determine which information is true and which is false are far more harmful to our democracy. The line between content that can be labeled true or false and content that is simply viewpoint disagreement can be blurry and very much in the eye of the beholder, especially in political content and policy debates. This is also the fundamental premise of the First Amendment, which protects free speech and a free press.
Hillary Clinton said on CNN that Section 230 should be repealed and online platforms forced to moderate content, or "we lose total control." Barack Obama, in his speech to the Stanford Internet Observatory, said that Section 230 should be revised and replaced with a "smart" regulatory system that would slow the spread of "harmful" content. In the same speech, Obama lauded the peer-reviewed announcements by Anthony Fauci on COVID vaccines, which turned out to be false, possibly because Fauci's NIH partially funded the gain-of-function research at the Wuhan lab where the COVID virus originated. Tim Walz has repeatedly said that misinformation and hate speech should be subject to regulatory restrictions. Kamala Harris herself has a long history of calling for new government regulatory powers over the content moderation policies of online platforms. These Democratic Party leaders appear to believe that promoting government regulation of online platforms is a winning campaign issue.
Those who wish for regulatory power to ensure "politically correct" content moderation need to answer some fundamental questions. Should the political party that temporarily runs the government be allowed to act as arbiter of what is true or false, such as the effectiveness of COVID vaccines? Should that party be able to censor opinions or viewpoints that disagree with its own narrative, such as over-estimated employment numbers or the number of identified criminals allowed to cross the border from Mexico into the US? No wonder people no longer trust information from the government or the mainstream media.
How would such regulatory power work when the party occupying the White House switches every four or eight years and the rules change dramatically with each new administration? Today, private companies acting as news organizations have their own free speech rights to publish and to label their own opinions as true and opposing opinions as false. This works as long as there are multiple competing news companies such as MSNBC, ABC, CNN, Politico, NYT, The Hill, WaPo, WSJ, FoxNews, NewsMax, and more recently, Substack, The Free Press, and other sites. Opinion polls show that the public clearly understands that traditional media companies exhibit political bias, which determines which news stories they publish and drives their efforts to promote particular political viewpoints and candidates.
Similarly, online social media platforms have their own free speech rights to moderate content. This works until a platform becomes a monopoly or near-monopoly, as Google has for online search, Google's YouTube for online video sharing, and Meta's Facebook/Instagram/WhatsApp/Messenger platforms for social media. When such platforms become near-monopolies and replace all other media as the town square for opinions, viewpoints, debates, disagreements, and elections, ensuring that both online safety and viewpoint neutrality are part of their content moderation policies becomes a national policy issue.
Rather than attempting to legislate definitions of online safety and viewpoint neutrality, which seems exceedingly difficult in the deeply divided partisan environment of Washington, D.C., there is a simpler solution.
The simple solution is to mandate full and detailed transparency of:
1. All enforcement actions taken by the online platforms
a. including the specific content categories and usernames involved (limited to usernames that opt in, to address privacy concerns).
b. including reasons for such actions, such as online safety, government requests, etc.
2. All communications between government and government-funded entities and the online platforms (except for national security and law enforcement actions).
Such transparency would allow the online platforms to be compared on a peer-to-peer basis for online safety and viewpoint neutrality. It would also shine the harsh light of publicity on all government efforts to influence online platforms, whether those efforts "gently inform," "politely request," "strongly urge," or ultimately demand that the platforms make the content moderation decisions preferred by the government's narrative.
With this simple transparency mandate, all content rules and enforcement actions would be allowed, but the platforms and the then-current administration would fully understand that all decisions and actions would be published, so that the media, academics, and the public could measure and compare each platform's content moderation policies in terms of online safety and viewpoint neutrality.
Bottom line: allowing the government to act as arbiter of which online content is true or false is dangerous. The Supreme Court has evaluated this before and recognized that efforts to cleanse the information available to the unwashed masses of wrong-think threaten opposition viewpoints, unpopular opinions, and new information that changes the consensus view of "truth."
Mike Matthys, Co-Founder of Institute for a Better Internet.