In the AI age, some of our oldest constitutional questions are returning in unfamiliar clothing.
A plaintiff has asked a court to order OpenAI to cut off a particular person from ChatGPT, prevent him from creating new accounts, and notify her if he tries to get back on. The allegations behind the request are harrowing: months of stalking and harassment, threats, and AI-generated materials used to intimidate and isolate. The user was arrested on four felony counts in January 2026, including communicating a bomb threat and assault with a deadly weapon. A criminal court found him incompetent and ordered him committed. He was later ordered released because of a procedural failure involving delays in transferring him from jail to a mental health facility.
Those facts make the instinctive response easy: if a tool is being used to fuel a dangerous spiral, shut off the tool.
The constitutional response is harder, because the question is not whether OpenAI may choose to deny service. It is whether the government, through a judge, can command a private platform to block a particular individual from a general-purpose system that generates and transmits speech.
The speech issue inside a safety ask
The First Amendment is often treated as a shield for speakers against government censorship. But it also operates as a set of structural brakes on what courts can order when the remedy would suppress speech before it happens.
An order requiring OpenAI to block someone from ChatGPT is not a fine after the fact or a damages award after a trial. It is a restriction on future communication. That is why the concept of prior restraint is at least in the neighborhood here. Not because the label automatically settles anything, but because the practical effect of the remedy would be to preemptively cut off a channel of expression based on feared future misuse.
Not just about newspapers
Prior restraint doctrine grew up around licensing schemes and injunctions against publication. But the principle is broader: the state generally should not get a veto over speech in advance unless it meets a very high constitutional bar.
ChatGPT is not a printing press, and it is not a public park. But it is a communications tool. Ordering a company to deny that tool to a particular person can look less like a routine tort remedy and more like the state picking which citizens get access to a modern channel of expression.
State action through a private lever
OpenAI can choose to suspend accounts, ban users, and enforce policies. That is private discretion. The First Amendment typically does not stop private actors from moderating their own services.
But once a court order enters the picture, the posture changes. A judicial command to block a user is the government acting. And government action that restricts speech triggers constitutional scrutiny even when carried out by a private intermediary.
Danger and the First Amendment
The strongest argument for a court-ordered cutoff is also the most intuitive: the user is not accused of controversial opinion. He is accused of harassing conduct and threats, with alleged AI assistance. The plaintiff describes communications including a text message saying, "Who is going to kill you?" and alleges that a death threat was encoded through ChatGPT and sent to her family.
Threats and stalking are not protected speech in the same way that political advocacy is. If a person is using communications tools to commit crimes, courts can impose restrictions tied to criminal proceedings, probation, protective orders, and detention.
But that does not automatically solve this case. It shifts the question to a narrower one: what process and what forum are required before the government can impose a communications ban?
Criminal restraints vs civil shortcuts
If a criminal court finds that someone poses a danger, it can impose conditions on release. That can include limits on contacting specific people, using specific devices, or accessing certain platforms. Those restrictions ride along with a deprivation of liberty that has its own procedural protections.
A separate civil proceeding that seeks an ex parte temporary restraining order can be a much thinner process. The user whose speech is being restricted may not be present. He may not even be notified in time to contest the request. That raises due process alarms, because speech restrictions are most constitutionally suspect when imposed without the targeted speaker being heard.
Due process and the missing party
It is easy to focus only on the plaintiff and the platform. But a cutoff order is not really aimed at OpenAI. It is aimed at the user. The company is just the lever.
That creates a basic constitutional tension: can the state deprive a person of access to a communications system through an order in a case where that person is not a party and has no meaningful opportunity to respond?
Even if you believe the person is dangerous, due process asks the same stubborn questions:
- Notice: did the person receive timely notice of the restriction sought?
- Opportunity to be heard: could he contest the factual claims and proposed remedy?
- Narrow tailoring: is the restriction limited to preventing illegal conduct, or does it sweep in lawful speech too?
- Duration and review: how long does the order last, and what mechanism exists to challenge it?
Those questions do not disappear because the speech tool is new.
Is ChatGPT a "place" for speech?
There is a reason the Supreme Court has treated access to major communications platforms as constitutionally significant in recent years. The modern public square is not one physical location. It is an ecosystem of privately owned services where ordinary people speak, organize, and gather information.
That does not mean everyone has a constitutional right to any particular private service. But it does mean that broad government restrictions on access to major communication channels can trigger heightened scrutiny, especially when they are not limited to a specific victim or a specific illegal act.
A court order that says, in effect, "you cannot use a general-purpose speech system at all," can start to look less like a targeted protective measure and more like a communications ban.
What OpenAI says in response
OpenAI opposes the emergency request on several grounds that matter for both law and practicality.
- It has already done what it can: OpenAI says it has suspended the user's accounts and argues that a TRO is therefore unnecessary.
- A full cutoff may not be technically possible: OpenAI says that because a limited version of ChatGPT can be accessed without an account, it cannot block the user from every form of the ChatGPT services.
- The TRO standard still applies: OpenAI argues that a TRO requires a showing of a likelihood of success on the merits of the underlying claims, and that such a showing has not been made and cannot be made in this abbreviated proceeding.
- Unsettled, novel questions: OpenAI argues that the claims pose difficult questions, including around causation and the application of the First Amendment and Section 230 of the Communications Decency Act, and that the application does not grapple with those complexities.
Even if you end up believing the plaintiff should win, these points sharpen the underlying tension: courts are being asked to issue speech-adjacent emergency orders in a setting where the technology is not fully controllable and the merits are not fully litigated.
The transcript demand and privacy
The TRO application also seeks more than a service cutoff. It demands that OpenAI provide all the information in its possession about the absent third party user, including his ChatGPT transcripts, to plaintiff's counsel.
OpenAI argues that the application does not even identify irreparable harm tied to getting these materials immediately. It contends the requested materials are stale, at least three months old if not older, and that the plaintiff has already been able to obtain law enforcement and court protection without them, including an outstanding warrant for misdemeanor electronic harassment and stalking and an Emergency Protective Order.
OpenAI also argues that producing such private materials now would cause irreparable harm to the absent third party, especially where he has not been added as a party and has not, as far as OpenAI is aware, been given notice and an opportunity to be heard before the materials are released to his former romantic partner. It points to potential statutory protections, including the Stored Communications Act, and argues that these issues should be addressed through ordinary discovery and in the ongoing Judicial Council Coordinated Proceeding that was created to provide consistent answers across cases raising similar, difficult questions.
This part of the dispute underscores a broader theme: emergency proceedings tempt courts into making high-impact decisions about speech, access, and privacy on a thin record, with the most affected person not in the room.
Liability in the AI age
The plaintiff’s civil claims include negligent entrustment, negligence, product design defect, failure to warn, and unlicensed psychological counseling. In the emergency request, the focus is on negligence theories: design choices that allegedly validated delusions, a failure to warn about flagged activity, and the reinstatement of access after internal safety flags.
Even setting constitutional questions aside, these cases run directly into two hard liability problems for general-purpose AI systems.
1) Causation
Courts are comfortable with "but for" causation when a defendant hands someone a defective product that physically malfunctions. They are far less comfortable when the alleged harm flows through a user’s independent choices, especially choices that involve speech, interpretation, and intent.
The legal system will have to decide what it means to say an AI "caused" harassment when a human being used it as a tool in a broader campaign.
2) Intermediary immunity and speaker responsibility
Federal law has long limited platform liability for user-generated content in many contexts. But AI complicates the familiar categories: is the system merely transmitting user content, or is it generating new content as its own output? What happens when a user prompts a system to draft a threat, a defamatory report, or a message designed to intimidate?
Those questions matter because the stronger the civil liability case becomes, the more likely courts will be asked to impose forward-looking remedies like service cutoffs. And forward-looking remedies are where free speech and prior restraint doctrine can reappear with force.
Less fraught options
There are ways to pursue safety that can fit more cleanly inside constitutional guardrails than a blanket court-ordered denial of access to a speech tool.
- Victim-specific no-contact orders: Orders barring the user from contacting the plaintiff and her family, including through third parties, are common and can be enforced without banning all speech.
- Device and account conditions in criminal proceedings: If the user is under supervision, conditions can be tailored and enforced with clearer procedural protections.
- Platform voluntary enforcement: A company can suspend accounts and add safeguards based on its own policies without invoking state action.
- Narrowly tailored injunctions: If an injunction is sought, it can focus on specific unlawful conduct, not general access to a broad communications tool.
None of these options are perfect. But the Constitution often pushes us toward imperfect tools that respect process over perfect tools that skip it.
The real question
This is not a referendum on whether harassment is serious. It is. It is not a referendum on whether AI can intensify delusion, obsession, or violence. It can.
The constitutional question is more structural: do we want courts to have a new kind of lever, one that can be pulled in emergency proceedings, aimed at cutting an individual off from a general-purpose speech system, without the individual present?
Once that lever exists for one frightening case, it will be requested again in less clear ones. That is why the First Amendment treats prior restraints as presumptively suspect, and why due process insists on hearing from the person whose liberty, speech, privacy, or access is being restricted.
The AI age is forcing an old choice back onto the table: we can build safety through individualized, procedurally grounded restraints on unlawful conduct, or we can build it through broad preemptive access bans that feel efficient in emergencies. Our constitutional tradition has long warned that the second option can age badly.