Clouds float over the Supreme Court building on March 15, 2024. Celal Gunes/Anadolu via Getty Images

Supreme Court’s questions about First Amendment cases show support for ‘free trade in ideas’

This term, the U.S. Supreme Court has heard oral arguments in five cases involving questions about whether and how the First Amendment to the Constitution applies to social media platforms and their users. These cases are part of a larger effort by conservative activists to block what they claim is government censorship of people who seek to spread false information online.

The most recently heard case, on March 18, 2024, was Murthy v. Missouri, about whether the federal government’s direct communication with social media platforms, specifically about online content relating to the COVID-19 public health emergency, violated the First Amendment rights of private citizens.

The case stemmed from the Biden administration’s efforts to combat misinformation that spread online, including on social media, during the pandemic. The plaintiffs said White House officials “threatened platforms with adverse consequences” if they didn’t take down or limit the online visibility of inaccurate information – and that those threats amounted to the unconstitutional suppression of free speech by private individuals who shared content that contained debunked conspiracy theories and contradicted scientific evidence.

It is not uncommon for government officials to informally pressure private parties, like social media platforms, into limiting, censoring or moderating speech by third parties. As Justice Amy Coney Barrett seemingly implied during the Murthy v. Missouri oral arguments, “vanilla encouragement” by government officials would be constitutionally permissible. But when the informal pressure turns into bullying, threats or coercion, it may trigger First Amendment protections, as the Supreme Court ruled in another case called Bantam Books v. Sullivan, from 1963.

But the Biden administration said its effort to fight COVID misinformation was normal activity, in which the government is allowed to express its views to persuade others, especially in ways that advance the public interest.

President Joe Biden and Surgeon General Vivek Murthy attend a meeting in 2022. Kevin Dietsch/Getty Images

Several justices seemingly agreed with the Biden administration and accepted its view that ordinary pressure to persuade is permissible.

More broadly, the Supreme Court has wrestled with the application of the First Amendment to cases involving social media platforms. Earlier this term, the court heard several cases that involved content moderation – both by the platforms themselves and by public officials using their own social media accounts. As Justice Elena Kagan put it during one round of oral arguments: “That’s what makes these cases hard, is that there are First Amendment interests all over the place.”

Perhaps most fundamentally, the court seeks to evaluate the relationship between social media platforms and public officials.

A public official or a private social media user?

On March 15, the Supreme Court released its unanimous decision in Lindke v. Freed – another case involving social media platforms. The issue in that case was whether a public official can delete or block private individuals from commenting on the official’s social media profile or posts.

This case involved James Freed, the city manager of Port Huron, Michigan, and Facebook user Kevin Lindke. Freed initially created his Facebook profile before entering public office, but once he was appointed city manager, he began using the Facebook profile to communicate with the public. Freed eventually blocked Lindke from commenting on his posts after Lindke “unequivocally express(ed) his displeasure with the city’s approach to the (COVID-19) pandemic.”

The court ruled that on social media, where users, including government officials, often mix personal and professional posts, “it can be difficult to tell whether the speech is official or private.” But the court unanimously found that if an official possesses “actual authority to speak” on behalf of the government, and if the person “purported to exercise that authority when” posting online, the post is a government action. In that case, the official cannot block users’ access to view or comment on it.

The court ruled that if the poster either does not have authority to speak for the government, or is not clearly exercising that authority when posting, then the message is private. In that situation, the poster can restrict viewing and commenting because that is an exercise of their own First Amendment rights. But when a public official posts in their official capacity, the poster must respect the First Amendment’s limitations placed on government. The court sent a similar case, O'Connor-Ratcliff v. Garnier, back to a lower court for reconsideration based on the ruling in the Lindke case.

Online information can be a cacophony from which it is hard to discern truth and accuracy. Nadezhda Kurbatova/iStock / Getty Images Plus

Who controls what’s online?

At the root of the plaintiffs’ claims in both these cases is content moderation – whether a public official can moderate another user’s content by deleting their posts or blocking the user, and whether the federal government can interact with social media platforms to mitigate the spread of debunked conspiracy theories and scientifically disprovable narratives about the pandemic, for instance.

Ironically, though conservatives argue that the federal government cannot interact with the social media platforms to influence their content moderation, Florida and Texas – states governed by Republican majorities in the statehouse and Republican governors – enacted state laws that seek to restrict the platforms’ own content moderation.

While the laws in each state differ slightly, they share similar provisions. First, both laws contain “must-carry provisions,” which “prohibit social media platforms from removing or limiting the visibility of user content in certain circumstances,” according to the Knight First Amendment Institute at Columbia University.

Second, both laws require the social media platforms to provide individualized explanations to any user whose content is moderated by the platform. Both laws were passed to combat the false perception that the platforms disproportionately silence conservative speech.

The Florida and Texas laws were challenged in two cases whose oral arguments were heard by the Supreme Court in February 2024: Moody v. NetChoice and NetChoice v. Paxton, respectively. Florida and Texas argued that they can regulate the platforms’ content moderation policies and processes, but the platforms argued that these laws infringe on their editorial discretion, which is protected by well-established First Amendment precedent.

During oral argument in both cases, the justices appeared skeptical of both laws. As Chief Justice John Roberts stated, the First Amendment prohibits the government, not private entities, from censoring speech. Florida and Texas argued that they enacted these laws to protect the free speech of their citizens by limiting the platforms’ ability to moderate content.

But social media users have no First Amendment claim against the platforms themselves, because private entities, like Facebook, are free to moderate the content on their platforms as they see fit. Roberts was quick to respond to Texas and Florida: “The First Amendment restricts what the government can do, and what the government’s doing here is saying you must do this, you must carry these people.”

Where are the online boundaries of free speech?

Collectively, these cases demonstrate the Supreme Court’s interest in defining the boundaries of First Amendment protections as they relate to social media platforms and their users. Moreover, the court seems focused on establishing the limits of the relationship between government and social media platforms.

The justices’ questions during the NetChoice cases suggest that they are skeptical of government regulation that forces social media platforms to carry certain content. In this way, the justices seem poised to affirm the principle that government cannot directly or formally force an individual or, in this case, a private company, to convey a message that it does not wish to carry.

But the justices’ questions during Murthy v. Missouri seem to suggest that it is not a violation of the First Amendment for government officials to informally interact or communicate with social media platforms in an attempt to persuade them not to carry material the government dislikes.

Considering all of these cases together, the court seems poised to further promote a robust “free trade in ideas,” a theory first invoked in 1919 by Justice Oliver Wendell Holmes in Abrams v. United States. In Lindke v. Freed, the court drew a distinction between private speech on social media platforms by a public official, which is protected by the First Amendment, and official speech, which is subject to First Amendment limitations that protect others’ rights.

In the NetChoice cases, the court seems ready to limit a state’s ability to directly compel social media platforms to convey messages that they may moderate. And in Murthy v. Missouri, the justices seem ready to affirm that while indirect compulsion may be unconstitutional, ordinary pressures to persuade social media platforms are permissible.

This promotion of a robust marketplace of ideas appears to rest on a balance: neither giving the government extra powers to shape public discourse, nor excluding government from the conversation altogether.
