Silicon Valley Goes to Washington…Why?

News stories have a way of quickly becoming yesterday's news. Which, for the most part, they are. But what if there's more to the story?
Ever wonder why the tech C-suite has suddenly turned its attention to participating in the G qua Government suite (their armies of lobbyists aside, of course) and backed the current administration? We were curious, and there's a reason why you need to look outside of the mainstream media for answers, or to connect dots that might be a bit obscured.
"The Shocking Reason Marc Andreessen Had to Endorse Donald Trump," the Independent Sentinel reported, a story not covered in the mainstream media. According to Andreessen, said the Sentinel, "The Biden Administration planned to control AI and only allow three companies to create it. The administration would crush all competing companies and classify the physics needed to run AI models." As Andreessen tells it, he was warned: "AI is a technology, basically, that the government is gonna completely control… Don't fund AI startups… That's not something that we're gonna allow to happen… We're gonna control them, um, and we're gonna dictate what they do."
You can watch the Andreessen interview here.
The Suchir Balaji ‘Suicide’
Speaking of yesterday's news, it may be time to take a closer look at the purported suicide of Suchir Balaji. To refresh your memory, Balaji "worked as an engineer for Sam Altman building AI, until he decided that Altman was committing crimes. Balaji became a whistleblower, and soon after, was found dead in his apartment. California authorities claim it was suicide. Crime scene photos clearly show a murder. Balaji's mother, Poornima Ramarao, tells the most shocking story we've heard in a long time," said Tucker Carlson (Tucker Carlson explores murder/suicide mystery at OpenAI). The piece is long, but basically, according to Balaji's mother, the crime scene evidence does not support the coroner's conclusions. There were signs that an altercation had taken place, and the blood spatter does not match the coroner's findings. For those who are more interested in what she has to say about her son's findings, start at the fifty-minute mark.
Interestingly, Ilya Sutskever, OpenAI’s safety expert who was fired after attempting to oust Sam Altman, hired armed security guards to accompany him following Balaji’s death.
What was of particular concern to Balaji regarding OpenAI was that data in and data out were not the same: data accuracy was not maintained. Could be the reason why "OpenAI staffers who are supposed to make sure AI doesn't go rogue are jumping ship fast," Quartz reported. And note to self: OpenAI disbands safety team focused on risk of artificial intelligence causing 'human extinction,' said the New York Post.
Which casts a different light on Balaji's death, all things considered. And speaking of the tech sector in Washington: since he was a whistleblower, the federal government has the authority to conduct its own investigation, an avenue not pursued during the last administration, and we do hope that the new administration will look into it.
So just how insidious is OpenAI?
Here's a must-read: "How OpenAI's bot crushed this seven-person company's website 'like a DDoS attack,'" TechCrunch reported. "Triplegangers CEO Oleksandr Tomchuk was alerted that his company's e-commerce site was down. It looked to be some kind of distributed denial-of-service attack. He soon discovered the culprit was a bot from OpenAI that was relentlessly attempting to scrape his entire, enormous site. OpenAI was sending 'tens of thousands' of server requests trying to download all of it.
"Triplegangers… has spent over a decade assembling what it calls the largest database of 'human digital doubles' on the web, meaning 3D image files scanned from actual human models… If a site isn't properly using robots.txt, OpenAI and others take that to mean they can scrape to their hearts' content. It's not an opt-in system. Robots.txt also isn't a failsafe. AI companies voluntarily comply with it. Another AI startup, Perplexity, pretty famously got called out last summer by a Wired investigation when some evidence implied Perplexity wasn't honoring it."
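To see why robots.txt offers so little protection, here is a minimal sketch using Python's standard-library `urllib.robotparser`. The rules below are a hypothetical site configuration (the `GPTBot` user-agent string is the name OpenAI publishes for its crawler); the key point is that the check is something the *crawler* chooses to run on itself. Nothing on the server side enforces it:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a site that asks OpenAI's crawler to stay out
# while allowing everyone else. Normally this file lives at /robots.txt and
# is fetched with rp.set_url(...) + rp.read(); we parse it inline here.
rp = RobotFileParser()
rp.parse([
    "User-agent: GPTBot",   # OpenAI's published crawler user-agent
    "Disallow: /",          # request: do not crawl any path
    "User-agent: *",
    "Allow: /",
])

# A *compliant* bot runs this check before each fetch and backs off on False.
# A non-compliant bot simply never calls it -- compliance is voluntary.
print(rp.can_fetch("GPTBot", "https://example.com/models/scan-001"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/"))           # True
```

Which is the whole problem Triplegangers ran into: robots.txt is a polite request honored client-side, not a security measure enforced server-side.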
Ten years' worth of work, scraped in a day, and thanks for playing, Triplegangers! Seems OpenAI could kill a business, too, with a work-around for standard security measures, and they don't care about your steenking TOS.
If you thought the Age of Social was bad, leaving us bereft of privacy, AI is potentially even more dangerous. And there are concerns out there: Will AI Destroy the Internet Sooner Than We Think? Airpressroom asked. "According to a 2024 study from MIT, AI-generated misinformation spreads 40% faster than human-written content, making it increasingly challenging for users to discern fact from fiction. Social media platforms are particularly vulnerable. The surge of AI-generated fake news, deepfake videos, and fraudulent accounts has made maintaining trust online more difficult. Experts warn that without effective interventions, the internet's credibility could erode rapidly."
Or worse, and this just in: "Major Breaking Bill Gates Scandal! Gates Foundation Dark Money Group, Arabella, Caught Secretly Running Fake Medical Petition With Over 17K Signatures Of Fake Doctors" to stop RFK, Jr. from being confirmed as Secretary of Health & Human Services.
AI is being incorporated into everything, everywhere, all at once and on steroids, and unless we're vigilant, it'll only escalate, well beyond attempting to divine the next brand of sneakers you're most likely to purchase. This is political interference at scale, and let's not forget how embedded Gates is in pharma.
Many people tend to jump feet first into whatever shiny new thing appears on the tech landscape, heedless of what the long-term ramifications might be and of whether guardrails are properly in place. Or considered at all. As it was with the Age of Social, which might more appropriately have been named the Age of Surveillance, given the wholesale erosion of our privacy at its core.
AI is being worshipped as a god, Carlson observed, and it seems that someone at OpenAI, at least, has something of a god complex (not naming names). But you know how it goes: the Lord giveth and, like it or not and ready or not, the Lord taketh away. From what we're only just beginning to witness, all we can say is heaven help us all. Onward and forward.