
When Government Nudges Become Censorship

May 5, 2026 · by Eleanor Stratton

Most Americans know the First Amendment’s basic idea: the government generally cannot punish you for political speech.

That statement comes with important, narrow exceptions, including limits on true threats, incitement, and certain time, place, and manner rules. It also depends on context. Speech rules can shift in settings like public employment, public schools, and other regulated environments. But as a starting point, it captures what people expect: in the United States, the state is not supposed to pick political winners and losers by silencing speech.

Murthy v. Missouri raised a more unsettling question: what if the government never passes a censorship law at all?

The case focused on whether federal officials crossed a constitutional line by urging or pressuring major social media companies to remove posts or reduce their visibility, particularly during the COVID era. In June 2024, the Supreme Court reversed the lower court’s preliminary injunction, holding that the private plaintiffs did not establish standing, largely because they could not show that their specific moderation injuries were fairly traceable to government conduct or likely to be redressed by a court order.

Even with that outcome, the underlying issue has not gone away. Across the country, similar disputes continue to test the same basic boundary: when officials lean on the privately owned platforms that carry modern political speech, when does that influence become state action under the First Amendment?



Why this is hard

The First Amendment restrains the government. It does not restrain private companies.

That principle drives much of today's online speech conflict. A private platform can adopt its own content rules, enforce them aggressively, and remove political content. In most cases, that is not a First Amendment violation, because private moderation, standing alone, is not government censorship.

So where does the constitutional problem enter?

It enters when government officials are not just speaking in public or sharing their views, but using leverage to push a platform to take content down, label it, downgrade it, or otherwise restrict its distribution. The legal question becomes whether the platform’s action is still “private,” or whether it has become something else: conduct fairly attributable to the government.

What “state action” means

Courts use the phrase state action to draw a boundary line. On one side is private behavior, where constitutional limits usually do not apply. On the other side is government behavior, where the First Amendment does.

Disputes like Murthy turn on whether government involvement is strong enough to convert a platform’s decision into something the Constitution treats as the government’s own.

This is not just technical doctrine. It is the difference between two very different realities:

  • Reality A: The platform makes independent decisions, and officials complain or advocate like anyone else might.
  • Reality B: Officials effectively steer outcomes, so the platform’s moderation reflects government preference rather than independent judgment.

If Reality B is true in a given situation, the constitutional stakes rise quickly, because the First Amendment is designed to stop the government from deciding which political views are acceptable.

What counts as pressure

Not every request is coercion. Government officials can constitutionally speak, criticize, and encourage platforms to act. The line gets harder when the message comes with consequences, or when the overall pattern makes the platform’s choices no longer meaningfully independent.

In these disputes, “pressure” can include things like:

  • Threats, explicit or implied, of regulatory action or investigations.
  • Hints that antitrust scrutiny, contracting decisions, or agency approvals could turn on cooperation.
  • Public condemnation paired with private escalation channels and repeated demands for action.
  • Statements that read less like advice and more like instructions, especially when followed by rapid compliance.

To make that less abstract, the Murthy record and allegations included examples of officials flagging specific posts or accounts, following up in recurring communications, and asking for reports or changes in how certain categories of content were handled. The government, for its part, argued that many of these contacts were routine coordination around public health and misinformation concerns, not censorship.

The point is not that every one of these automatically creates state action. It is that the Constitution cares about whether the government is merely trying to persuade, or whether it is using its power to induce restrictions that the platform would not have chosen on its own.


How courts draw the line

Courts use several overlapping ideas to separate lawful advocacy from unconstitutional coercion. Labels vary by case, but the recurring questions look like this:

  • Coercion vs. encouragement: Was there a threat, a command, or materially coercive leverage, or was it closer to a request?
  • Significant encouragement: Did officials materially induce the outcome, making the private decision not meaningfully independent?
  • Joint participation: Were officials and the platform acting together in a coordinated way to target specific speech?
  • Nexus and entwinement: Is the relationship so close in this context that the private action is fairly attributable to the state?
  • Independent reasons: Would the platform likely have taken the same action anyway under its own rules and incentives?

One additional framing helps keep the categories straight: government “jawboning” and government speech are often lawful, even when they are forceful, but coercion or retaliation that effectively compels a platform to restrict speech crosses the constitutional line.

These factors also tie into standing. Even if the government’s behavior looks improper in the abstract, a plaintiff still has to show that their own restriction is traceable to government conduct and that a court order would likely remedy it. That traceability and redressability problem was central to the Supreme Court’s decision in Murthy.

Why informal influence matters

When people think of censorship, they picture a statute, a regulation, or a formal order: something written down, debated, signed, and enforced.

But many modern speech fights are about influence that never becomes law. That influence can be subtle or blunt, but it has one practical advantage: it is harder for the public to see and harder for plaintiffs to prove in court.

A law produces a paper trail and a clean target for a lawsuit. Informal influence can operate through meetings, calls, and private emails, with officials framing their messages as responsible guidance or urgent public safety concerns. The platform can still say, truthfully, “We are private,” even if the surrounding context suggests the decision was not fully independent.

This is why the state action question keeps returning. It is about whether constitutional protections can be eroded in practice without the government ever passing anything that looks like a censorship law.

The question ahead

The constitutional dividing line is simple to state and difficult to police: when does government involvement in platform moderation become state action under the First Amendment?

Officials are allowed to speak and to try to persuade. The First Amendment problem begins when persuasion starts to look like control, including threats of adverse consequences or pressure that makes a platform’s moderation no longer meaningfully independent. Courts evaluate that line using the totality of circumstances, and the answer can be highly fact-specific and platform-specific.

For readers trying to track the stakes, it helps to keep two facts in view:

  • Most modern political speech runs through privately owned platforms.
  • The First Amendment limits the government, not private companies, unless private conduct is fairly attributable to the state.

The unresolved space between those facts is where the hardest cases live. The enduring question is not whether platforms can moderate. They can. The question is whether government officials can steer political speech outcomes through behind-the-scenes leverage while still claiming, legally, that they never censored anyone at all.

If you want a single takeaway, it is this: the First Amendment is tested not only by laws, but also by relationships and pressure. The hard part is proving when influence becomes state power.