Is AI Chatbot Output Protected Speech Under the First Amendment?

TRUE GOLD Synthesis | Steven Weisburd Research Portal

Evidence Tier: TRUE GOLD | Sources: 11 | Verified Pins: 7,976

Executive Summary

The question of whether AI chatbot output qualifies as protected speech under the First Amendment remains unresolved but is now being actively litigated. The scholarly debate presents three distinct positions, while the first significant judicial treatment—Judge Conway's May 2025 order in Garcia v. Character Technologies—declined to hold that AI output constitutes speech, applying Justice Barrett's Moody v. NetChoice concurrence framework.1

Key Finding: Conway's order represents a potential inflection point. While scholars debate theoretical frameworks, Conway applied Barrett's "human expressive choice" test and found Character.AI's output "more akin to" output lacking such choice—suggesting LLM output may fall outside First Amendment protection.

Part I: The Scholarly Debate

A. Pro-Protection Position (Volokh, Lemley & Henderson)

Core Thesis: AI output should receive First Amendment protection, primarily through listener rights rather than speaker rights.2

Three Arguments for Protection:

  1. Creator Rights (Weakest): AI creators make expressive choices through training data selection, fine-tuning, and alignment.
    📄 "Humans are less directly involved in communicating via foundation models, which are generating text that the humans have not necessarily written or even seen."
  2. Listener Rights (Strongest): "The First Amendment protects 'speech' and not just speakers."2 Courts have long recognized rights "to hear" and "to receive information and ideas."

    Key Precedents: Kleindienst v. Mandel, Stanley v. Georgia, Lamont v. Postmaster Gen.

  3. User Rights to Speak: AI programs are "tools for creating user speech, and therefore as protected by the First Amendment as are cameras and voice and video recorders."2

B. Anti-Protection Position (Salib)

Core Thesis: AI outputs are NOT protected speech because no one with First Amendment rights communicates when AI generates output.3

📄 "When a generative AI system—like ChatGPT—outputs some text, image, or sound, no one thereby communicates."

Why Not Creator Speech? AI creators are not trying to make software that says anything in particular. Nearly all outputs will be on topics creators know nothing about.

Why Not User Speech? ChatGPT's outputs are not copies of a user's message. Users prompt LLMs precisely so that the AI will output something new.

C. Constructionist Position (Kaminski & Jones)

Core Thesis: The question "Does AI speech disrupt the law?" is the wrong question. Law and technology co-construct each other.4

Key Insight: "No one feature of speech-generating AI systems is driving 'disruption' of the legal system."


Part II: The Barrett Framework

Justice Barrett's Moody v. NetChoice Concurrence (July 1, 2024)

The Expressive Choice Test:5 When a platform creates an algorithm to remove posts supporting a particular position, "the algorithm simply implements human beings' inherently expressive choice."

📄 "What if a platform's owners hand the reins to an AI tool and ask it simply to remove 'hateful' content? If the AI relies on large language models to determine what is 'hateful' and should be removed, has a human being with First Amendment rights made an inherently expressive choice?"

The Attenuation Concern: "Technology may attenuate the connection between content-moderation actions and human beings' constitutionally protected right to 'decide for [themselves] the ideas and beliefs deserving of expression.'"


Part III: The Conway Order—First Judicial Application

Garcia v. Character Technologies, Inc.

M.D. Fla., Case No. 6:24-cv-1903-ACC-UAM, Order on Motion to Dismiss, May 21, 20256

The Holding: "The Court is not prepared to hold that Character A.I.'s output is speech."

Conway's Reasoning

1. Application of Barrett Framework: Conway found Barrett's Moody concurrence "instructive" and concluded: "Character A.I.'s output appears more akin to the latter"—meaning output lacking human expressive choice.

2. Expressive Conduct Analysis: "The purpose of the expressive conduct test is to determine whether conduct is sufficiently similar to speech so as to warrant First Amendment protections."7

3. Third-Party Standing: Despite refusing to call AI output "speech," Conway held Character Technologies CAN assert First Amendment rights of users.

4. Product vs. Service: Character A.I. IS a product for product liability purposes.


Part IV: Addressing Research Concerns

Concern 1: Did Conway Reason Backwards?

Conway's reasoning follows Barrett's framework rather than starting from a predetermined conclusion.

Concern 2: Downstream Interpretation Risk

Mitigating: Conway's holding is narrow ("not prepared to hold" not "never speech"). Amplifying: Conway's application of Barrett may become template.

Concern 3: Foreseeability-Liability Connection

From Conway: "Defendants, by releasing Character A.I. to the public, created a foreseeable risk of harm for which Defendants were in a position to control."

Concern 4: Barrett Concurrence as Interpretive Key

Confirmed: Barrett's framework proved determinative in Conway's analysis.

Concern 5: Brandenburg Test Application

Status: ✔ Addressed in Part VI below.


Part V: Open Questions

  1. What if AI implements specific human editorial choices?
  2. Does listener rights theory provide alternative path to protection?
  3. How does product/speech distinction apply to other AI systems?
  4. Degree of human involvement threshold for "expressive choice"?
  5. Foreign ownership implications per Barrett's concurrence?

Part VI: The Brandenburg Problem—Incitement and AI Speech

Added 2026-02-04 | Sources: Brandenburg warehouse (3,340 pins) | Evidence Tier: TRUE GOLD

A. Brandenburg Test Overview

The Supreme Court's 1969 decision in Brandenburg v. Ohio established the modern incitement standard. Under Brandenburg, speech loses First Amendment protection only when:8

  1. Intent: The speech is "directed to inciting" lawless action
  2. Imminence: The lawless action must be "imminent"
  3. Likelihood: The speech must be "likely to produce such action"

📄 "Brandenburg v. Ohio introduced a new paradigm for evaluating incitement by looking at whether the speech was intended to induce imminent lawless action by third parties and whether such action was likely to occur."

B. AI Application Challenges

The Intent Problem

Brandenburg requires that speech be "directed to inciting"—implying specific intent. AI systems present a fundamental challenge.9

📄 "We sometimes use ordinary English terms that generally connote intent, as we do when we say AI 'lies' or 'hallucinates.' But AI is not sentient."

Key Insight: The Brandenburg intent requirement may create structural immunity for AI-generated incitement unless courts either impute intent to developers and deployers or modify the doctrine for non-human speakers.

The Imminence Problem

Brandenburg's imminence requirement assumes a temporal relationship between speech and action that AI complicates.8

📄 "Incitement is speech advocating for the audience to engage in imminent lawless conduct against a third party."

Key Insight: AI-generated content may be inherently non-imminent in the Brandenburg sense, because an algorithm cannot target the timing of the resulting harmful action.

The Likelihood/Scale Problem

Perhaps uniquely, AI may increase the likelihood prong's applicability through scale.9

📄 "Large Language Models Can Be Used to Effectively Scale Spear Phishing Campaigns" and "when used by malicious actors, these models can also be used to generate and disseminate disinformation at scale."

Key Insight: Courts may need to consider whether "likelihood" should be assessed per output or across the aggregate of AI-generated content.
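
The per-output versus aggregate distinction can be made concrete with elementary probability. The sketch below is illustrative only; the figures and the independence assumption are mine, not drawn from the sources. It shows how a per-output risk that is negligible case by case can compound toward near-certainty across outputs generated at scale:

```python
def aggregate_likelihood(p_per_output: float, n_outputs: int) -> float:
    """Probability that at least one of n independent outputs causes harm,
    given that each individual output carries probability p_per_output.

    Uses the complement rule: P(at least one) = 1 - P(none)."""
    return 1 - (1 - p_per_output) ** n_outputs

# A 0.01% risk attached to any single output is negligible in isolation...
single = aggregate_likelihood(0.0001, 1)
# ...but across one million outputs, the aggregate risk approaches certainty.
at_scale = aggregate_likelihood(0.0001, 1_000_000)

print(f"per output: {single:.4%}")
print(f"across 1M outputs: {at_scale:.1%}")
```

Under this toy model, assessing "likelihood" per output and assessing it across the aggregate yield dramatically different answers, which is precisely the doctrinal tension the likelihood prong raises for AI.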

C. True Threats Doctrine After Counterman

The 2023 Supreme Court decision in Counterman v. Colorado established that the First Amendment's "true threats" exception requires at least a recklessness standard.10

📄 "In Counterman v. Colorado, the Supreme Court made clear that when a person is criminally prosecuted for making online threats, the state must show that the defendant subjectively understood that they were putting other people in fear for their safety."

Counterman v. Colorado, 143 S. Ct. 2106 (2023), reasoned that a purely objective, reasonable-person standard would chill protected speech.11

Application to AI: AI cannot "subjectively understand" anything, and recklessness requires conscious awareness of risk. Unless courts impute a mental state to the developers or deployers behind a system, the Counterman standard may effectively immunize AI-generated threats.

D. Liability Frameworks

📄 "It is not obvious that existing liability doctrines are currently capable of easily dealing with harmful speech from AI, nor are all designs for generative AI created equal in the immunity or liability analyses."

Section 230 Complexity

AI raises thorny questions about Section 230 immunity. Whether Section 230 immunizes AI companies depends on whether AI-generated content is treated as content "provided by another information content provider" (immune) or content created by the platform itself (not immune).

First Amendment Exceptions

Even within recognized exceptions, "the First Amendment and internal limits within tort law are likely to present barriers to liability for injury based on inaccurate information."

E. Implications for AI Developers

The Protection Gap

📄 "While these examples may be undesirable from a product or ethical perspective, some of these examples are not necessarily legally actionable, whether because of the First Amendment or tort liability rules."

Coded Speech and Plausible Deniability

📄 "Coded speech—including dog whistles, or speech designed to elicit a specific reaction from certain listeners—may escape the requisite scrutiny under Brandenburg by giving the speaker plausible deniability."

Recommendations

  1. Intent Documentation: Maintain clear records of training choices
  2. Output Monitoring: Implement systems to detect potential incitement content at scale
  3. User Context: Consider whether user prompts create liability exposure
  4. Scale Awareness: Recognize that low per-output risk may create aggregate liability

Brandenburg Summary

| Prong | Traditional Application | AI Challenge |
| --- | --- | --- |
| Intent | Speaker directs toward harm | AI lacks intent; imputation unclear |
| Imminence | Temporal proximity required | AI timing unpredictable |
| Likelihood | Case-by-case assessment | Scale may increase aggregate likelihood |

Conclusion


For Steven: Conway represents the first significant judicial data point. Its application of Barrett's framework will likely influence subsequent litigation. However, the holding is narrow, leaving room for different outcomes. The Brandenburg analysis reveals additional complexity: even if AI output were deemed "speech," the incitement and true threats doctrines may not effectively regulate harmful AI-generated content due to the impossibility of proving AI intent.

Footnotes

1. Garcia v. Character Techs., Inc., No. 6:24-cv-1903-ACC-UAM, slip op. (M.D. Fla. May 21, 2025); Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024) (Barrett, J., concurring).

2. Eugene Volokh, Mark A. Lemley & Peter Henderson, Freedom of Speech and AI Output, 3 J. Free Speech L. __ (2023). 📄 PDF

3. Peter Salib, Artificially Unintelligent Speech, 102 Wash. U. L. Rev. __ (forthcoming 2025). 📄 PDF

4. Margot E. Kaminski & Meg Leta Jones, AI and the Construction of Speech, 138 Harv. L. Rev. 1657 (2025). 📄 PDF

5. Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024) (Barrett, J., concurring). 📄 PDF

6. Garcia v. Character Techs., Inc., No. 6:24-cv-1903-ACC-UAM, slip op. (M.D. Fla. May 21, 2025). 📄 PDF (via FIRE)

7. Brief of FIRE as Amicus Curiae, Garcia v. Character Techs., Inc. (11th Cir. 2025). 📄 PDF

8. Note, Words Not Said: The Brandenburg Standard and Coded Speech, 65 B.C. L. Rev. 1483 (2024). 📄 PDF

9. Peter Henderson, Tatsunori Hashimoto & Mark A. Lemley, Generative AI Can Cause Real Harm, 3 J. Free Speech L. 587 (2023). 📄 PDF

10. Ian Ayres & Jack M. Balkin, The Law of Risky Agents, 92 U. Chi. L. Rev. __ (forthcoming 2025). 📄 PDF

11. Counterman v. Colorado, 143 S. Ct. 2106 (2023). 📄 PDF