If we should

Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.
