GenAIs and the Safety Dance

The reference is to a song by Men Without Hats, and we couldn’t resist it given the soft-shoe reaction from the OpenAI co-founders following the resignation of its safety oversight team – and talk about everyone taking a chance…
“In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power,” Wired reported.
All good – or at least a start, especially in light of all of the doomsday warnings reported on the potential dangers of unbridled, unchecked AI. A fifth of the company’s computing power as well as a crack team of researchers were being devoted to that danger. Sure, and how long did OpenAI’s not-for-profit status last?
Now, some ten months later, it seems that every member of the Superalignment team has either resigned or transferred to another department.
Sutskever’s departure from the company might have come as no surprise, given that he participated in the failed coup against OpenAI co-founder and CEO Sam Altman in November (Altman was restored as CEO some five days later, and then-director Sutskever ‘left’ the board).
Hours later, “Jan Leike, the former DeepMind researcher who was the superalignment team’s other colead, posted on X that he had resigned.”
“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he tweeted. “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity… But over the past years, safety culture and processes have taken a backseat to shiny products.”
It was time for damage control – or soft shoe. Altman and OpenAI co-founder Greg Brockman said “OpenAI has established the foundations for safely deploying AI systems more capable than GPT-4,” Business Insider (via yahoo!tech) reported.
“As we build in this direction, we’re not sure yet when we’ll reach our safety bar for releases, and it’s ok if that pushes out release timelines,” Brockman wrote.
“Brockman and Altman added in their post that the best way to anticipate threats is through a “very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities,” as well as collaborating with “governments and many stakeholders on safety.”
How does one have world-class security when one has disbanded the team charged with addressing just that?
Speaking of things about OpenAI that should concern you, as Vox reported, ChatGPT can talk, but OpenAI employees sure can’t. “Questions arose immediately (regarding the Superalignment team co-leaders’ departure): Were they forced out? Is this delayed fallout of Altman’s brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.
“It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.”
In perpetuity? Really, Gracie?
Cause for concern #2: there’s no doubt AI needs an oversight committee, and luckily for the world, OpenAI’s Super Sam himself is literally on the job, as we reported just a few weeks ago (Big Wins for Big Tech) and as Engadget put it: “OpenAI’s Sam Altman and other tech leaders join the federal AI safety board.” It’s like turkeys being appointed to the Christmas oversight board.
We came across an article a while back in Fast Company postulating the ‘Dead Internet Theory.’
“There’s been a popular theory floating around conspiracy circles for about seven or eight years now. It’s called the “Dead Internet” theory, and its main argument is that the organic, human-created content that powered the early web in the 1990s and 2000s has been usurped by artificially created content, which now dominates what people see online. Hence, the internet is “dead” because the content most of us consume is no longer created by living beings (humans).
“But there’s another component to the theory—and this is where the conspiracy part comes into play. The Dead Internet theory states that this move from human-created content to artificially generated content was purposeful, spearheaded by governments and corporations in order to exploit control over the public’s perception.
“Lately, the Dead Internet theory is starting to look less conspiracy and more prophetic—well, at least in part… As The New York Times reports, Meta’s Instagram has begun testing a program that would allow its most popular influencers to turn themselves into AI-powered chatbots so they can engage with users without, you know, actually having to engage with users themselves.”
Which gives new meaning to the term ‘phoning it in.’
We hate to project the worst-case scenario, but consider: all of the more or less unmonitored AIs out there in the wild; the lack of cybersecurity practiced by, we’d say, at least 90% of startups (and the big players are no better, given the multiple hacks Facebook/Meta et al. have experienced); the proliferation of deepfakes, both visual and auditory; and now the guardrails gone from OpenAI. So whenever we see the latest and greatest being launched and touted far and wide, our question is not so much does it have legs, but does it have arms.
Onward and forward.