
AI-generated text isn’t free speech - yet

I read the Judge’s Order, so you don’t have to.


Another 49 pages were added to the heart-wrenching tragedy of Sewell Setzer III in May, as Judge Conway ruled against Character A.I. and Google on a number of motions in the case brought against them by Sewell’s mother, Megan Garcia.


Of these, three are profound in their impact on AI ethics.



  1. Google knows that AI has the potential to harm.


When the Google team who later formed Character A.I. asked to release a version of their LLM designed for text dialogues (LaMDA), Google denied their request.  Notably because:


Google employees raised concerns that users might “ascribe too much meaning to the text [output by LLMs], because ‘humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual or group of individuals who have accountability for what is said.’”


This is vital, because it acknowledges that human users instinctively anthropomorphise LLM text, and in doing so they assume that accountability comes along for the ride. That assumption carries trust with it: if someone is accountable, the user reasons, then the system they built can be trusted.


The disconnect between tech authors at one end (who may think they are just creating uncaring machines that generate text) and users at the other (who assume the text inherently carries the moral weight of an author) creates a gap in understanding that has already demonstrated the potential to generate the most extreme harms.


Importantly, these harms fall only on the users, a long-neglected participant group in the tech-bro version of software development and a worrying parallel to the “negative externalities” and “privatised gains, socialised harms” of other predatory industries in the physical world.


  2. AI-generated text isn’t free speech


It was an audacious argument: “all Plaintiff’s claims are categorically barred by the First Amendment because Character A.I. is speech which … users have a right to receive.”


In a case dealing with teen tragedy of the highest order, it takes a certain legal mind to present this defence.


The response?


Defendants fail to articulate why words strung together by an LLM are speech


This mic-drop moment from the Judge drives right to the heart of AI.


Some may be surprised to hear that other digitally-generated text, in video games for example, has been ruled as free speech.  Google and Character A.I. claimed LLM-generated text should be treated similarly.


However, the judge demonstrated incredible insight in her comments that “Defendants miss the operative question”.  It is not “whether” this text is similar to other mediums, but “how”.


Judge Conway made the very astute point that the text must “communicate ideas” - it must be “expressive, such that it is speech”.  That is to say: Game NPCs are expressing the ideas of the human authors of their dialogue in a computer game, but token-generating algorithms are not.


Imagine those thousands of monkeys with their typewriters had been hard at work in the 1500s, and that they happened to produce the exact text of Shakespeare’s Hamlet at the very moment the Bard completed his work.

You could pick up each volume in turn, identical in content, and one would be a worthless collection of random letters, while the other would be a sublime work of creative thought.


  3. The humans who create AI are accountable for the risks it poses.


As a warning to AI creators everywhere, accountability was clearly addressed in this order.


Judge Conway quoted a number of precedents:


a legal duty will arise whenever a human endeavor creates a generalized and foreseeable risk of harming others


releasing Character A.I. to the public, created a foreseeable risk of harm for which Defendants were in a position to control


Character A.I. had a responsibility “either to lessen the risk or see that sufficient precautions are taken to protect others from the harm that the risk poses.”


The dissolving of human accountability in the solvent of digital currents is a core issue, often missed, in discussions of AI and digital ethics.


It is refreshing to see a legal ruling which puts digital creators, and in particular A.I. creators, on notice: you cannot hide behind the algorithm, and you cannot disavow the impacts your product has.


There is a particular place for “duty” in ethics, underpinning as it does the deontological sense that actions themselves can be “right” or “wrong”, regardless of the consequences.


Let us encourage more AI creators to do the “right” thing.

 
