I read the Judge’s Order, so you don’t have to.
Another 49 pages were added to the heart-wrenching tragedy of Sewell Setzer in May, as Judge Conway ruled against Character A.I. and Google on a number of motions in the case brought against them by Sewell’s mother, Megan Garcia.
Of these rulings, three are profound in their implications for AI ethics.
Google knows that AI has the potential to harm.
When the Google team who later founded Character A.I. asked to release a version of their LLM designed for text dialogues (LaMDA), Google denied the request, notably because:
“Google employees raised concerns that users might ‘ascribe too much meaning to the text [output by LLMs], because “humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual or group of individuals who have accountability for what is said.”’”
This is vital because it acknowledges that human users instinctively anthropomorphise LLM text, and in doing so assume that accountability comes along for the ride. That assumption carries trust with it: if someone is accountable, the user reasons, then the system they built can be trusted.
The disconnect between tech authors at one end (who may think they are merely creating uncaring machines that generate text) and users at the other (who assume the text inherently carries the moral valence of an author) creates a gap in understanding which has already demonstrated the potential to generate the most extreme harms.
Importantly, these harms fall only on the users, a long-neglected participant group in the tech-bro version of software development and a worrying parallel to the “negative externalities” and “privatised gains, socialised harms” of other predatory industries in the physical world.
AI-generated text isn’t free speech
It was an audacious argument: “all Plaintiff’s claims are categorically barred by the First Amendment because Character A.I. is speech which … users have a right to receive”.
In a case dealing with teen tragedy of the highest order, it takes a certain legal mind to present this defence.
The response?
“Defendants fail to articulate why words strung together by an LLM are speech”
This mic-drop moment from the Judge drives right to the heart of AI.
Some may be surprised to hear that other digitally generated text, in video games for example, has been ruled to be free speech. Google and Character A.I. claimed that LLM-generated text should be treated similarly.
However, the Judge demonstrated incredible insight in her comment that “Defendants miss the operative question”. It is not “whether” this text is similar to other mediums, but “how”.
Judge Conway made the astute point that the text must “communicate ideas”: it must be “expressive, such that it is speech”. That is to say: game NPCs express the ideas of the human authors who wrote their dialogue, but token-generating algorithms are not expressing anyone’s ideas.
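To see concretely what “words strung together by an LLM” means, here is a minimal Python sketch of the core generation loop. The toy vocabulary and probabilities below are entirely invented for illustration (a real LLM learns billions of weights from training data), but the mechanism is the same in spirit: text is produced by repeatedly sampling the next token from a probability distribution, with no author expressing an idea at any step.

```python
import random

# A toy "language model": for each context word, a probability
# distribution over possible next words. Every word and weight here
# is hypothetical, chosen purely for illustration.
TOY_MODEL = {
    "<start>": {"I": 0.5, "You": 0.3, "We": 0.2},
    "I": {"feel": 0.6, "am": 0.4},
    "You": {"are": 1.0},
    "We": {"are": 1.0},
    "feel": {"happy.": 0.5, "alone.": 0.5},
    "am": {"here.": 1.0},
    "are": {"kind.": 0.5, "waiting.": 0.5},
}

def generate(model, max_tokens=10):
    """String words together by repeatedly sampling the next token."""
    context = "<start>"
    output = []
    for _ in range(max_tokens):
        choices = model.get(context)
        if not choices:  # no known continuation: stop generating
            break
        # A weighted random draw: no author, no communicative
        # intent, just probability.
        context = random.choices(
            list(choices), weights=list(choices.values())
        )[0]
        output.append(context)
    return " ".join(output)

print(generate(TOY_MODEL))  # e.g. "I feel alone." or "You are waiting."
```

Whether the output reads as comforting or as devastating, the loop above has no idea which; that, in essence, is the Judge’s point.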
Imagine those thousands of monkeys with their typewriters had been hard at work in Shakespeare’s day, and happened to produce the exact text of Hamlet at the same moment the Bard completed his work. You could pick up each volume in turn, identical in content, and one would be a worthless collection of random letters while the other would be a sublime work of creative thought.
The humans who create AI are accountable for the risks it poses.
In a warning to AI creators everywhere, the order addresses accountability head-on.
Judge Conway quoted a number of precedents:
“a legal duty will arise whenever a human endeavor creates a generalized and foreseeable risk of harming others”
“releasing Character A.I. to the public, created a foreseeable risk of harm for which Defendants were in a position to control”
Character A.I. had a responsibility “either to lessen the risk or see that sufficient precautions are taken to protect others from the harm that the risk poses.”
The dissolving of human accountability in the solvent of digital currents is a core issue, often missed, in discussions of AI and digital ethics.
It is refreshing to see a legal ruling that puts digital creators, and A.I. creators in particular, on notice: you cannot hide behind the algorithm, and you cannot disavow the impacts your product has.
There is a particular place for “duty” in ethics, underpinning as it does the deontological sense that actions themselves can be “right” or “wrong”, regardless of their consequences.
Let us encourage more AI creators to do the “right” thing.