Executive Summary
The question of whether AI chatbot output qualifies as protected speech under the First Amendment remains unresolved but is now being actively litigated. The scholarly debate presents three distinct positions, while the first significant judicial treatment—Judge Conway's May 2025 order in Garcia v. Character Technologies—declined to hold that AI output constitutes speech, applying the framework from Justice Barrett's concurrence in Moody v. NetChoice.1
Part I: The Scholarly Debate
A. Pro-Protection Position (Volokh, Lemley & Henderson)
Core Thesis: AI output should receive First Amendment protection, primarily through listener rights rather than speaker rights.2
Three Arguments for Protection:
- Creator Rights (Weakest): AI creators make expressive choices through training data selection, fine-tuning, and alignment.
📄 "Humans are less directly involved in communicating via foundation models, which are generating text that the humans have not necessarily written or even seen."
- Listener Rights (Strongest): "The First Amendment protects 'speech' and not just speakers."2 Courts have long recognized rights "to hear" and "to receive information and ideas."
Key Precedents: Kleindienst v. Mandel; Stanley v. Georgia; Lamont v. Postmaster General.
- User Rights to Speak: AI programs are "tools for creating user speech, and therefore as protected by the First Amendment as are cameras and voice and video recorders."2
B. Anti-Protection Position (Salib)
Core Thesis: AI outputs are NOT protected speech because no one with First Amendment rights communicates when AI generates output.3
📄 "When a generative AI system—like ChatGPT—outputs some text, image, or sound, no one thereby communicates."
Why Not Creator Speech? AI creators are not trying to make software that says anything in particular. Nearly all outputs will be on topics creators know nothing about.
Why Not User Speech? ChatGPT's outputs are not copies of a user's message; users prompt LLMs precisely so the AI will output something new.
C. Constructionist Position (Kaminski & Jones)
Core Thesis: The question "Does AI speech disrupt the law?" is the wrong question. Law and technology co-construct each other.4
Key Insight: "No one feature of speech-generating AI systems is driving 'disruption' of the legal system."
Part II: The Barrett Framework
Justice Barrett's Moody v. NetChoice Concurrence (July 1, 2024)
The Expressive Choice Test:5 When a platform creates an algorithm to remove posts supporting a particular position, "the algorithm simply implements human beings' inherently expressive choice."
📄 "What if a platform's owners hand the reins to an AI tool and ask it simply to remove 'hateful' content? If the AI relies on large language models to determine what is 'hateful' and should be removed, has a human being with First Amendment rights made an inherently expressive choice?"
The Attenuation Concern: "Technology may attenuate the connection between content-moderation actions and human beings' constitutionally protected right to 'decide for [themselves] the ideas and beliefs deserving of expression.'"
Part III: The Conway Order—First Judicial Application
Garcia v. Character Technologies, Inc.
M.D. Fla., Case No. 6:24-cv-1903-ACC-UAM, Order on Motion to Dismiss, May 21, 2025.6
Conway's Reasoning
1. Application of Barrett Framework: Conway found Barrett's Moody concurrence "instructive" and concluded: "Character A.I.'s output appears more akin to the latter"—meaning output lacking human expressive choice.
2. Expressive Conduct Analysis: "The purpose of the expressive conduct test is to determine whether conduct is sufficiently similar to speech so as to warrant First Amendment protections."7
3. Third-Party Standing: Despite refusing to call AI output "speech," Conway held Character Technologies CAN assert the First Amendment rights of its users.
4. Product vs. Service: Character A.I. IS a product for product liability purposes.
Part IV: Addressing Research Concerns
Concern 1: Did Conway Reason Backwards?
Conway's reasoning follows Barrett's framework rather than starting from a predetermined conclusion.
Concern 2: Downstream Interpretation Risk
Mitigating: Conway's holding is narrow ("not prepared to hold," not "never speech"). Amplifying: Conway's application of Barrett may become a template for later courts.
Concern 3: Foreseeability-Liability Connection
Concern 4: Barrett Concurrence as Interpretive Key
Confirmed: Barrett's framework proved determinative in Conway's analysis.
Concern 5: Brandenburg Test Application
Status: ✔ Addressed in Part VI below.
Part V: Open Questions
- What if AI implements specific human editorial choices?
- Does listener rights theory provide alternative path to protection?
- How does product/speech distinction apply to other AI systems?
- What degree of human involvement crosses the threshold for an "expressive choice"?
- What are the foreign-ownership implications flagged in Barrett's concurrence?
Part VI: The Brandenburg Problem—Incitement and AI Speech
A. Brandenburg Test Overview
The Supreme Court's 1969 decision in Brandenburg v. Ohio established the modern incitement standard. Under Brandenburg, speech loses First Amendment protection only when:8
- Intent: The speech is "directed to inciting" lawless action
- Imminence: The lawless action must be "imminent"
- Likelihood: The speech must be "likely to produce such action"
📄 "Brandenburg v. Ohio introduced a new paradigm for evaluating incitement by looking at whether the speech was intended to induce imminent lawless action by third parties and whether such action was likely to occur."
B. AI Application Challenges
The Intent Problem
Brandenburg requires that speech be "directed to inciting"—implying specific intent. AI systems present a fundamental challenge: a non-sentient model has no intent of its own.9
📄 "We sometimes use ordinary English terms that generally connote intent, as we do when we say AI 'lies' or 'hallucinates.' But AI is not sentient."
The Imminence Problem
Brandenburg's imminence requirement assumes a close temporal link between speech and action; AI complicates this because an output generated on demand may be consumed and acted on at any later time.8
📄 "Incitement is speech advocating for the audience to engage in imminent lawless conduct against a third party."
The Likelihood/Scale Problem
Uniquely among the three prongs, AI may increase the likelihood prong's applicability, because scale raises the aggregate probability that some output produces harm.9
📄 "Large Language Models Can Be Used to Effectively Scale Spear Phishing Campaigns" and "when used by malicious actors, these models can also be used to generate and disseminate disinformation at scale."
C. True Threats Doctrine After Counterman
The Supreme Court's 2023 decision in Counterman v. Colorado, 143 S. Ct. 2106 (2023), established that the First Amendment's "true threats" exception requires at least a recklessness standard.10
📄 "In Counterman v. Colorado, the Supreme Court made clear that when a person is criminally prosecuted for making online threats, the state must show that the defendant subjectively understood that they were putting other people in fear for their safety."
Application to AI: AI cannot "subjectively understand" anything. Recklessness requires awareness of risk—can that awareness be imputed to developers? The Counterman standard may effectively immunize AI-generated threats unless courts impute a mental state to human actors.
D. Liability Frameworks
📄 "It is not obvious that existing liability doctrines are currently capable of easily dealing with harmful speech from AI, nor are all designs for generative AI created equal in the immunity or liability analyses."
Section 230 Complexity
AI raises thorny questions about Section 230 immunity. Whether Section 230 immunizes AI companies depends on whether AI-generated content is treated as content "provided by another information content provider" (immune) or content created by the platform itself (not immune).
First Amendment Exceptions
Even within recognized exceptions, "the First Amendment and internal limits within tort law are likely to present barriers to liability for injury based on inaccurate information."
E. Implications for AI Developers
The Protection Gap
📄 "While these examples may be undesirable from a product or ethical perspective, some of these examples are not necessarily legally actionable, whether because of the First Amendment or tort liability rules."
Coded Speech and Plausible Deniability
📄 "Coded speech—including dog whistles, or speech designed to elicit a specific reaction from certain listeners—may escape the requisite scrutiny under Brandenburg by giving the speaker plausible deniability."
Recommendations
- Intent Documentation: Maintain clear records of training choices
- Output Monitoring: Implement systems to detect potential incitement content at scale (a minimal sketch follows this list)
- User Context: Consider whether user prompts create liability exposure
- Scale Awareness: Recognize that low per-output risk may create aggregate liability
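To make the output-monitoring recommendation concrete, the following is a minimal sketch of a pre-release screening hook. All names here, including the placeholder classifier incitement_score and the threshold value, are hypothetical illustrations of the pattern, not an actual product design:

```python
# Minimal sketch of a pre-release screening hook. The classifier,
# threshold, and keyword heuristic are hypothetical placeholders.
import logging

logging.basicConfig(level=logging.INFO)
FLAG_THRESHOLD = 0.8  # assumed risk cutoff; tuned per deployment

def incitement_score(text: str) -> float:
    """Stand-in for a trained safety classifier (toy heuristic here)."""
    risky_terms = ("attack them", "burn down", "kill")
    return 1.0 if any(t in text.lower() for t in risky_terms) else 0.0

def release_output(text: str) -> str | None:
    """Screen a generation; withhold and log it if it scores as risky."""
    score = incitement_score(text)
    if score >= FLAG_THRESHOLD:
        logging.warning("Withheld output (score=%.2f): %.60s", score, text)
        return None  # route to human review instead of releasing
    return text
```

In practice the heuristic would be replaced by a trained classifier, and each flagged output would be logged with its prompt and model version, which also serves the intent-documentation recommendation above.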
Brandenburg Summary
| Prong | Traditional Application | AI Challenge |
|---|---|---|
| Intent | Speaker directs toward harm | AI lacks intent; imputation unclear |
| Imminence | Temporal proximity required | AI timing unpredictable |
| Likelihood | Case-by-case assessment | Scale may increase aggregate likelihood |
Conclusion
The current state:
- Pro-protection scholars emphasize listener rights
- Anti-protection scholars argue no human communicates through AI
- Constructionist scholars urge choosing frameworks based on values
- Barrett questioned whether AI makes "inherently expressive choices"
- Conway applied Barrett and found Character A.I. lacks such choice
- Brandenburg's three-prong test may create structural immunity for AI incitement
For Steven: Conway represents the first significant judicial data point. Its application of Barrett's framework will likely influence subsequent litigation. However, the holding is narrow, leaving room for different outcomes. The Brandenburg analysis reveals additional complexity: even if AI output were deemed "speech," the incitement and true threats doctrines may not effectively regulate harmful AI-generated content due to the impossibility of proving AI intent.