All the News that’s Fit to Spin

The tech community was taken by storm last week by DeepSeek, which we've taken to calling DeepSix, a name that seems far more accurate, all things considered: all the news it deems fit to 'print' and bugger all to the rest. The technology has the same problem all LLMs have: it is trained on information scraped from the open web, then picks and chooses what you get to see.
From a technical point of view: “What’s clever about what DeepSeek has done is that they’ve figured out a way to squeeze out more performance from Nvidia’s chips by going a level deeper and tinkering with how the chips work. In short, this is better engineering, and it has allowed them to overcome the constraints imposed on them due to US chip controls. In doing so, they have shown the world a new approach to building AI models much more cheaply,” wrote Om Malik in Crazy Stupid Tech. “I think the hysteria is hugely overblown…If you read the original paper, two things are clear: DeepSeek has done something clever that will help lower the cost of the AI revolution for everyone, and they’ve shared how they’ve done it.”
Then there’s this: “OpenAI Furious – DeepSeek Might Have Stolen All the Data OpenAI Stole From Us,” 404 Media reported. “OpenAI shocked that an AI company would train on someone else’s data without permission or compensation.”
And OpenAI has how many lawsuits pending due to this very issue? “Millions of articles from The New York Times were used to train chatbots that now compete with it, the (New York Times) lawsuit said,” the Times itself reported.
Disregard for copyright is an issue with LLMs of any stripe or country of origin, and need we remind you that the US is looking at a total TikTok ban due to its links to China and the CCP, which is also DeepSeek's country of origin. Nothing to see here?
“We tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan,” wrote The Guardian. And if you think it's only the CCP that poses a danger, note “Meta Fesses Up to Leading Censorship Cartel,” reportedly “due to pressure from (the last US administration).”
Ask Sage, a company focused on providing generative AI to the government, recently added the “Chinese-developed DeepSeek R1, but includes a disclaimer,” 404 Media reported. “‘WARNING. DO NOT USE THIS MODEL WITH SENSITIVE DATA. THIS MODEL IS BIASED, WITH TIES TO THE CCP [Chinese Communist Party],’ it reads. Ask Sage is a way for government employees to access and use AI models in a more secure way. But only some of the models in the tool are listed by Ask Sage as being ‘compliant’ with or ‘capable’ of handling sensitive data.”
So, what's it doing there? Do all government workers read the fine print?
Our problem with many, if not all, of the generative AIs is the information they're scraping, primarily from news organizations, many of which use NewsGuard as an ‘unbiased’ third-party fact checker. “DeepSeek’s chatbot achieves 17% accuracy, trails Western rivals in NewsGuard audit,” Reuters reported. “The chatbot repeated false claims 30% of the time and gave vague or not useful answers 53% of the time in response to news-related prompts, resulting in an 83% fail rate, according to a report published by trustworthiness rating service NewsGuard.”
‘Trustworthy?’
“New Thought Police ‘NewsGuard’ Is Owned by Big Pharma,” Organic Consumers reported. Truth be told, there's a lot of hidden corporate money behind NewsGuard, so again, the devil is (hidden) in the details.
While generative AIs might be useful tools, it's important to keep in mind that they are just that: tools, the latest iteration of the pencil, save that they don't come with an eraser on the end to correct their mistakes. Or hallucinations.
Lest we forget President Eisenhower's warning in his farewell address that, while “holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”
And there you have it. In case you missed it: “China Just Tried to Overthrow a Major Western Nation’s Government.” “Disguising themselves as Safeguard Defenders and allegedly based in Spain‘s capital of Madrid, the Chinese agents were active from November through January on the Facebook, X (formerly Twitter), BlueSky, and TikTok social media platforms. The social media accounts—comprised of Chinese operatives posing as Westerners—posted and amplified content critical of both Spain’s central government in Madrid and of Carlos Mazon, Valencia’s regional governor.”
Politico also reported on the Spamouflage operation.
Generative AIs are very much capable of being weaponized: they have the potential to manipulate, or conveniently forget, events and facts, as is the case with Tiananmen Square, as well as to manipulate hearts and minds. There's no doubt an AI arms race is on, especially considering that generative AIs are potentially governments' latest addition to their arsenals of weapons of mass deception. And lest you underestimate their power, need we remind you, especially in this day and age, that the pen is mightier than the sword. Onward and forward.