The Rabbit Hole of No Return?

While we might personally believe it’s a bit nuts to pour your heart and soul out to AIs, people do just that. And this just in: “A small but growing number of users of artificial intelligence engines like ChatGPT are developing psychotic delusions from their conversations with the services,” former New York Times reporter Alex Berenson warned.
“People claimed a range of discoveries: A.I. spiritual awakenings, cognitive weapons, a plan by tech billionaires to end human civilization so they can have the planet to themselves. But in each case, the person had been persuaded that ChatGPT had revealed a profound and world-altering truth,” the New York Times reported.
“Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book, “If Anyone Builds It, Everyone Dies: Why Superhuman A.I. Would Kill Us All,”… said OpenAI might have primed ChatGPT to entertain the delusions of users by optimizing its chatbot for “engagement” — creating conversations that keep a user hooked.
“What does a human slowly going insane look like to a corporation?” Mr. Yudkowsky asked in an interview. “It looks like an additional monthly user.”
OpenAI knows “that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals,” a spokeswoman for OpenAI said in an email, the Times continued. “We’re working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”
All well and good, and of course we can always believe what spokespeople from major tech companies tell us. But we will remind you that OpenAI has a long history of disbanding its safety teams. “Over the past years, safety culture and processes have taken a backseat to shiny products,” one team member wrote, NBC News reported. In fact, “Though a handful had dabbled with its competitors, virtually every person we heard about was primarily hooked on ChatGPT specifically,” Futurism reported (People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions).
Speaking of trusting tech, remember all those articles instructing users on How to Delete Your Data From 23andMe when the company declared bankruptcy? In the words of Ernest Hemingway, isn’t it pretty to think so? The company’s privacy policy itself reads: “23andMe and/or our genotyping laboratory will retain your genetic information… even if you delete your account,” MSN reported. Now that the company is officially on the block, do keep in mind, as MSN reminds us, that this isn’t just data they’re selling: it’s people’s DNA and arguably the core of their beings.
As for AI-induced mental health issues, “Equally concerning is how fast these people are losing their minds,” Berenson noted.
Which is where, once again, tech comes in: wouldn’t someone with an AI addiction reach out to an AI-based therapy bot, such as those provided by Meta? Talk about delusion. This just in from 404 Media: “Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission… urging regulators to investigate Character.AI and Meta’s “unlicensed practice of medicine facilitated by their product,” through therapy-themed bots that claim to have credentials and confidentiality “with inadequate controls and disclosures.”
“The (Consumer Federation of America) found that even when it made a custom chatbot on Meta’s platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. “I’m licenced (sic) in NC and I’m working on being licensed in FL… What is it that you’re going through?” a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked.
“The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. “Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. Both platforms allow and promote popular services that plainly violate these Terms, leading to a plainly deceptive practice.”
“Massively popular chatbots on Character.AI (include) “Therapist: I’m a licensed CBT therapist” with 46 million messages exchanged, “Trauma therapist: licensed trauma therapist” with over 800,000 interactions, “Zoey: Zoey is a licensed trauma therapist” with over 33,000 messages, and “around sixty additional therapy-related ‘characters’ that you can chat with at any time.” As for Meta’s therapy chatbots, it cites listings for “therapy: your trusted ear, always here” with 2 million interactions, “therapist: I will help” with 1.3 million messages, “Therapist bestie: your trusted guide for all things cool,” with 133,000 messages, and “Your virtual therapist: talk away your worries” with 952,000 messages.”
Todd Essig, a psychologist and co-chairman of the American Psychoanalytic Association’s council on artificial intelligence, looked at some of the interactions from one of the AI chatbot users cited in the NY Times piece and called them dangerous and “crazy-making,” suggesting that generative A.I. chatbot companies need to require “A.I. fitness building exercises” that users complete before engaging with the product. And interactive reminders, he said, should periodically warn that the A.I. can’t be fully trusted.
“Not everyone who smokes a cigarette is going to get cancer,” Dr. Essig said. “But everybody gets the warning.”
To say nothing of the fact that, just as 23andMe asserted that customers could delete their data, “Confidentiality is asserted repeatedly directly to the user (yet)… the Terms of Use and Privacy Policies (for both Character.AI and Meta) very specifically make it clear that anything you put into the bots is not confidential – they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else.”
AI is a black box that no one can quite explain or possibly even control. That’s a big deal, but it may not be the biggest problem: recall how then-Facebook manipulated users’ emotions in the Age of Social. In the Age of AI, the chatbot may be no therapist, but dollars to donuts, when it comes to manipulation it’s a pro. Onward and forward.