How much power do social media companies have over what users post? Midnight Studio/iStock/Getty Images Plus

Supreme Court kicks cases about tech companies’ First Amendment rights back to lower courts − but appears poised to block states from hampering online content moderation

The U.S. Supreme Court has sent back to lower courts the decision about whether states can block social media companies such as Facebook and X, formerly Twitter, from regulating and controlling what users can post on their platforms.

Laws in Florida and Texas sought to impose restrictions on the internal policies and algorithms of social media platforms in ways that influence which posts will be promoted and spread widely and which will be made less visible or even removed.

In the unanimous decision, issued on July 1, 2024, the high court remanded the two cases, Moody v. NetChoice and NetChoice v. Paxton, to the 11th and 5th U.S. Circuit Courts of Appeals, respectively. The court admonished the lower courts for their failure to consider the full force of the laws’ applications. It also warned the lower courts to consider the boundaries imposed by the Constitution against government interference with private speech.

Contrasting views of social media sites

In their arguments before the court in February 2024, the two sides described competing visions of how social media fits into the often overwhelming flood of information that defines modern digital society.

The states said the platforms were mere conduits of communication, or “speech hosts,” similar to legacy telephone companies that were required to carry all calls and prohibited from discriminating against users. The states said that the platforms should have to carry all posts from users without discrimination among them based on what they were saying.

The states argued that the content moderation rules the social media companies imposed were not examples of the platforms themselves speaking, or choosing not to speak. Rather, the states said, the rules governed the platforms’ conduct: by determining who may speak and on which topics, the platforms were censoring certain views, and such conduct falls outside First Amendment protections.

By contrast, the social media platforms, represented by NetChoice, a tech industry trade group, argued that the platforms’ guidelines about what is acceptable on their sites are protected by the First Amendment’s guarantee of speech free from government interference. The companies say their platforms are not public forums that may be subject to government regulation but rather private services that can exercise their own editorial judgment about what does or does not appear on their sites.

They argued that their policies were aspects of their own speech and that they should be allowed to develop and implement guidelines about what is acceptable speech on their platforms based on their own First Amendment rights.


A reframe by the Supreme Court

All the litigants – NetChoice, Texas and Florida – framed the issue around the effect of the laws on the content moderation policies of the platforms, specifically whether the platforms were engaged in protected speech. The 11th U.S. Circuit Court of Appeals upheld a lower court’s preliminary injunction against the Florida law, holding that the content moderation policies of the platforms were speech and that the law was unconstitutional.

The 5th U.S. Circuit Court of Appeals came to the opposite conclusion and held that the platforms were not engaged in speech, but rather that the platforms’ algorithms controlled platform behavior unprotected by the First Amendment. The 5th Circuit determined the behavior was censorship and reversed a lower court injunction against the Texas law.

The Supreme Court, however, reframed the inquiry. The court noted that the lower courts failed to consider the full range of activities the laws covered. Thus, while a First Amendment inquiry was in order, the decisions of the lower courts and the arguments by the parties were incomplete. The court added that neither the parties nor the lower courts engaged in a thorough analysis of whether and how the states’ laws affected other elements of the platforms’ products, such as Facebook’s direct messaging applications, or even whether the laws have any impact on email providers or online marketplaces.

The Supreme Court directed the lower courts to engage in a much more exacting analysis of the laws and their implications and provided some guidelines.

First Amendment principles

The court held that content moderation policies reflect the constitutionally protected editorial choices of the platforms, at least regarding what the court describes as “heartland applications” of the laws – such as Facebook’s News Feed and YouTube’s homepage.

The Supreme Court required the lower courts to consider two core constitutional principles of the First Amendment. One is that the amendment protects speakers from being compelled to communicate messages they would prefer to exclude. Editorial discretion by entities, including social media companies, that compile and curate the speech of others is a protected First Amendment activity.

The other principle holds that the amendment precludes the government from controlling private speech, even for the purpose of balancing the marketplace of ideas. Neither state nor federal government may manipulate that marketplace for the purposes of presenting a more balanced array of viewpoints.

The court also affirmed that these principles apply to digital media in the same way they apply to traditional or legacy media.

In the 96-page opinion, Justice Elena Kagan wrote: “The First Amendment … does not go on leave when social media are involved.” For now, it appears the social media platforms will continue to control their content.
