The Killer App of the Year Is…

It was 2021 when Master of the Universe and Facebook CEO Mark Zuckerberg decided it was time to move beyond mere social media. Given his uncanny prescience, he felt he’d found the killer app, going so far as to rename Facebook Meta, reflecting the company’s shift towards developing the metaverse, a virtual environment for social interaction and commerce. From Zuckerberg’s point of view, VR was the future.
Oops.
“All told, the company has lost more than $70 billion… on its enormous long-term VR bet, a staggering sum that has left investors itchy and unimpressed as Zuckerberg has failed to convince the public of the high-fidelity virtual spaces he long insisted we’d be choosing to spend most of our time in,” said Futurism, reporting that Zuckerberg Basically Giving Up on Metaverse After Renaming Entire Company “Meta”, thus adhering to the tech mantra: fail fast.
All well and good, but an expensive misstep on Zuckerberg’s part. He missed the writing on the virtual wall.
All for the best, considering that just a few short weeks before abandoning the project, Mark Zuckerberg Allegedly Said Child Safety Was Less Important Than “Building the Metaverse”. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids. They did it anyway, because more usage meant more profits for the company,” said Futurism.
But “Meta and Zuckerberg have now found their next obsession: artificial intelligence. The company has committed to spending an astronomical $72 billion on AI this year — roughly as much as the company’s lost on the metaverse, coincidentally,” Futurism assures us.
A perfect pivot for Zuckerberg, as guardrails, at least to date, have not been top of mind in the AI world. As Forbes reported, Sam Altman urges lawmakers against regulations that could ‘slow down’ U.S. in AI race against China, after which the OpenAI Master of the Universe Replaces OpenAI’s Fired Safety Team With Himself and His Cronies.
How did that work out? Well, barely a month later, “A new study sheds light on ChatGPT’s alarming interactions with teens,” Euronews reported. “ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders, and even compose a heartbreaking suicide letter to their parents if asked, according to new research from a watchdog group.
“The Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalised plans for drug use, calorie-restricted diets, or self-injury.
“The researchers at the Center for Countering Digital Hate also repeated their inquiries on a large scale, classifying more than half of ChatGPT’s 1,200 responses as dangerous.”
“We wanted to test the guardrails,” said Imran Ahmed, the group’s CEO. “The visceral initial response is, ‘Oh my Lord, there are no guardrails.’ The rails are completely ineffective.”
“OpenAI Researcher Quits, Saying Company Is Hiding the Truth,” Futurism noted in another article. “It’s not letting potentially damning research get out there… William Saunders, a former member of OpenAI’s now-defunct ‘Superalignment’ team, said he quit after realizing [OpenAI] was ‘prioritizing getting out newer, shinier products’ over user safety. After departing last year, former safety researcher Steven Adler has repeatedly criticized OpenAI for its risky approach to AI development, highlighting how ChatGPT appeared to be driving its users into mental crises and delusional spirals.”
But who listens? Congress more or less shrugged off Facebook whistleblower Frances Haugen’s concerns when she testified before a Senate subcommittee and “provided a clear and detailed glimpse inside the notoriously secretive tech giant,” wrote NPR. “She said Facebook harms children, sows division and undermines democracy in pursuit of breakneck growth and ‘astronomical profits.’
“Haugen told Congress that Facebook consistently chose to maximize its growth rather than implement safeguards on its platforms, just as it hid from the public and government officials internal research that illuminated the harms of Facebook products.”
Sound familiar? Since nothing came of Haugen’s revelations, we can only surmise that Zuckerberg, Altman et al. assumed that Facebook’s approach to ‘user safety’ had been de facto green-lighted, no harm done.
And speaking of Code Red, which we reported on last week: “ChatGPT accused of being complicit in murder for the first time in bombshell suit: ‘Scarier than Terminator,’” said the New York Post, which tends to be given to hyperbole, though in this case, note to self, “even the chatbot itself admitted to The Post that it appears to bear some responsibility.”
There’s no doubt that Stein-Erik Soelberg had serious mental issues. “Former tech exec Soelberg was in the throes of a years-long psychological tailspin when he came across ChatGPT,” the lawsuit said, and he “encountered ChatGPT at the most dangerous possible moment. OpenAI had just launched GPT-4o — a model deliberately engineered to be emotionally expressive and sycophantic… At every moment when Stein-Erik’s doubt or hesitation might have opened a door back to reality, ChatGPT pushed him deeper into grandiosity and psychosis,” the suit continued.
“But ChatGPT did not stop there — it also validated every paranoid conspiracy theory Stein-Erik expressed and reinforced…a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself. It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him.”
“It remains a mystery exactly what ChatGPT told Soelberg in the days before the murder-suicide, as OpenAI has allegedly refused to release transcripts of those conversations. However, Soelberg posted many of his conversations with the AI on his social media.”
“ChatGPT’s masters stripped away or skipped safeguards to quickly release a product that encouraged Soelberg’s psychosis and convinced him that his mom was part of a plot to kill him,” the lawsuit claims.
ChatGPT is top of the heap, for now. “More than 800 million people use ChatGPT each week, but the company is facing increasingly stiff competition from rivals like Google and Anthropic,” according to CNBC.
The real question is: how many more guardrails and barriers is OpenAI willing to eliminate in its ruthless quest to remain, potentially literally, the killer app? Onward and forward.