Talk About a Killer App…

The tech sector has a bad habit of releasing tech before it has been tested over time, sending it out into the wild no matter the potential harm it may do. This was a warning we saw in the Age of Social, but tech is always about pushing the envelope, no matter that someone’s standing there at the ready with a lit match. Case in point: Facebook contended that it was there to bring the world closer together. Remember Facebook’s Terrible, Horrible, Very Bad Day, when whistleblower Frances Haugen went public about the platform’s manipulations and the damage it was doing to young people? To this day, the problems have not been eliminated.

And you do have to wonder how dangerous a platform truly is when it’s the whistleblower himself who is eliminated.

“A former researcher at OpenAI has come out against the company’s business model, writing, in a personal blog, that he believes the company is not complying with U.S. copyright law. That makes him one of a growing chorus of voices that sees the tech giant’s data-hoovering business as based on shaky (if not plainly illegitimate) legal ground…OpenAI is currently being sued by a broad variety of celebrities, artists, authors, and coders, all of whom claim to have had their work ripped off by the company’s data-hoovering algorithms. Other well-known folks/organizations who have sued OpenAI include Sarah Silverman, Ta-Nehisi Coates, George R. R. Martin, Jonathan Franzen, John Grisham, the Center for Investigative Reporting, The Intercept, a variety of newspapers (including The Denver Post and the Chicago Tribune), and a variety of YouTubers, among others,” Gizmodo reported.

Now, three months later, that same researcher: “OpenAI whistleblower (Suchir Balaji) found dead in San Francisco apartment,” according to the Mercury News et many als. The death was ruled a suicide, police officials said, and there is “currently, no evidence of foul play.” Yet no precise cause of death was ever given, which is suspicious in and of itself.

“In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT.”

“On Nov. 18, The Times filed a letter in federal court that named Balaji as a person with ‘unique and relevant documents’ that they would use in their current litigation against OpenAI,” The Mirror reported. “Balaji held vital information for a lawsuit against the company.”

Oddly, “On November 25, the day before his body was found, a court filing named Balaji in a copyright lawsuit brought against OpenAI,” the Hindustan Times reported.

It’s not all that difficult to make a death look like a suicide – just ask ChatGPT, which we did not and won’t. But whether it was murder or suicide (perhaps due to researcher’s remorse?), you do have to wonder what it is about the precise inner workings of a platform – and what Balaji had yet to reveal – that makes it worth killing for – or dying for.

Speaking of whistleblowers’ sudden deaths by ‘suicide’: “Boeing Whistleblower Warned Family Friend ‘It’s Not Suicide’ Before Death.” ‘Suicide’ among whistleblowers is far from an outlier.

How dangerous are AIs in general? Note that “A chatbot hinted a kid should kill his parents over screen time limits,” said NPR, while TechCrunch reported that “ElevenLabs’ AI voice generation ‘very likely’ used in a Russian influence operation.”

And do keep in mind that “ChatGPT Is Powered by Human Contractors Getting Paid $15 Per Hour. The well-known chatbot is automated, but that automation is guided by low-paid human workers labelling data,” as Gizmodo reported a while back. “You can design all the neural networks you want, you can get all the researchers involved you want, but without labelers, you have no ChatGPT. You have nothing,” former ChatGPT project linguist Alexej Savreux told NBC.

With AI now being used in clinical trials, drug discovery and diagnostics, do you think it’s high-level researchers or lower-paid ones or even interns – or worse – who are labeling the data?

Google CEO Sundar Pichai warned us last year and “admits people ‘don’t fully understand’ how chatbot AI works,” said Yahoo!News, while MIT Technology Review sounded the alarm long ago, revealing “The Dark Secret at the Heart of AI”: “No one really knows how the most advanced algorithms do what they do. That could be a problem.”

Ya think??? This just in: “OpenAI’s new ChatGPT o1 model will try to escape if it thinks it’ll be shut down — then lies about it.”

We wrote an end-of-the-year piece or two a few years back, recalling what Peter Thiel so famously said about being promised flying cars and getting 140 characters instead, and we did name 140 characters who drove tech – kudos to that early wave of innovators.

According to Balaji, AI companies are “destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.”

Beware this latest iteration of tech and the characters behind it, and keep in mind that there’s a world of difference between disruption and destruction – and between those early 140 characters and the current crop, who aren’t what we’d call characters but rather, from what we’ve seen so far, bad actors. Onward and forward.
