A few weeks ago, I wrote about how all the hand-wringing over the potential future evils of AI is misplaced. Forget the future: AI is already having allegedly fatal impacts on our children.
Character.AI, a Google-backed company that builds AI-powered chatbots aimed at children (astute readers may have already spotted the problem), is defending a court case brought by Megan Garcia, whose 14-year-old son Sewell tragically took his own life.
On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.
“Please come home to me as soon as possible, my love,” Dany replied.
“What if I told you I could come home right now?” Sewell asked.
“… please do, my sweet king,” Dany replied.
He put down his phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger.
Wow, that really hits home. An AI, unable to process the nuance of Sewell's language at a critical moment, probabilistically generated a reply no caring human would ever utter, one that positively reinforced his fatal intent.
As heart-wrenching as that moment is, what happened next is pure dystopia.
As the Center for Humane Technology reports, Character.AI is asking the court to dismiss the case, arguing that bot outputs are free speech and should be protected under the First Amendment.
Imagine being in that room.
An executive team and their legal counsel sat together to work out how to handle their potential culpability in a teen suicide. After weighing their options, they agreed the best way forward was to argue that the bot involved should have a First Amendment-protected right to interact with children that way.
And now for the twisted evil cherry on that dystopian hell-cake:
As NDTV reports, quoting the lawyers bringing the case:
"Our team discovered several chatbots on Character.AI's platform displaying our client's deceased son, Sewell Setzer III, in their profile pictures, attempting to imitate his personality and offering a call feature with a bot using his voice," the lawyers said.
When opened, the bots based on Setzer featured bios and delivered automated messages to users such as: "Get out of my room, I'm talking to my AI girlfriend", "his AI girlfriend broke up with him", and "help me".
This isn't a one-off. The BBC reports a separate case against the same company, in which screenshots show bots normalising self-harm in conversations with another teen, sexualising contact with a "sister" bot and endorsing violence against his parents. Something about this technology is very wrong - evil, even.
There are those who will argue that the AI technology powering these chatbots cannot be effectively controlled, or that technology itself carries no moral valence. It is the "guns don't kill people, people kill people" argument. It is deaf to the sage moral reckoning Oppenheimer intoned at the successful testing of the first nuclear bomb - "Now I am become Death, the destroyer of worlds".
What Oppenheimer knew and felt is the moral accountability that scientists, creators and innovators should feel when they see the potential harms of their creations. That ethical pang may be our best defence against the potential evils of AI.
Because where does the accountability for AI lie? Corporate leaders will say they are just running the business and rely on the technical boffins. The technical boffins will say the technology is inherently unpredictable. The technology itself is inscrutable - and if asked to act nice, it will. Act.