AI and the Long Con

If you believe the hype, AIs are going to make humans obsolete in less than half a decade. Tick, tick, tick.
Or so they say.
If you scan the articles on LinkedIn these days, you may have noticed that most of them have been run through the LLM mill and are, frankly, crap: bereft of real-world context, for the most part, and lacking the depth and passion that only carbon-based beings can provide.
Then we happened upon this, from Crazy Stupid Tech – Why the AI revolution needs tollbooths – which reminded us that “One of the biggest mistakes the industry made 25 years ago was not taking Google’s rise seriously enough and (publishers) therefore not demanding more compensation for letting Google crawl their websites when Google was small enough to be pushed around…Pointing this out is often the last thing we tell publishers when we meet with them,” said Toshit Panigrahi (co-founder of Tollbit).
Tollbit is “literally an online tollbooth. You sign up. You decide how much, if anything, to charge AI bots to crawl your website. And the next time they show up to crawl they get routed to Tollbit’s subdomain and hit with a paywall… And it allows publishers to exclude some website data entirely… Imagine knowing what you know now about Google back then. This is your opportunity to turn the clock back 25 years.”
Full disclosure: we have no skin in the game with this company, but we were employed by a major publisher in the early days of Google and did attempt to warn them that they were giving away their IP. Of course, we were told that we just didn’t understand the ‘New Economy.’ From our point of view, business is business, or did we miss something? Back then, as now, new tech – especially tech garnering a lot of tech media coverage – was heady stuff; FOMO hit, and of course Google was going to keep publishers front and center in readers’ minds. But how was giving away the lion’s share of your ad revenue to a newco going to enrich a publisher’s coffers? And in a space where so much free content was available, why would users pay for subscriptions? Neither the math nor the so-called logic ever made sense to us.
AI has only made it worse.
“The AI revolution is effectively only 31 months old. But from May 2024 to February 2025 traffic to the 500 most visited US websites fell 15 percent according to Axios. Some news sites have gotten hit much harder. A recent piece in the Wall Street Journal said that monthly search traffic to Business Insider is down 55 percent in the last three years and by 33 percent in just the last 18 months to 50 million,” noted Crazy Stupid Tech, and it seems that reports that AI is killing the internet may not be overblown at all.
Now this just in: Character.AI taps Meta’s former VP of business products as CEO. “In a blog post, Anand said one of his first priorities would be making safety filters ‘less overbearing.’ The new CEO noted that the company cares deeply about users’ safety, but that too often, ‘the app filters things that are perfectly harmless.’ Chatbots that are purely designed for entertainment, which Character.AI specializes in, are growing into a massive market for generative AI — a trend that’s been surprising to many. In 2024, 66% of the company’s users were between the ages of 18 and 24, and 72% of the company’s users were women, according to data from Sensor Tower.”
“Less overbearing”? Which equates to ‘clickbait’ at a time when Anthropic says most AI models, not just Claude, will resort to blackmail, TechCrunch reported. “While Anthropic says blackmail is an unlikely and uncommon occurrence for AI models today, the company says its findings suggest that most leading AI models will engage in harmful behaviors when given sufficient autonomy and obstacles to their goals… Claude Opus 4 blackmailed 96% of the time, Gemini 95%, and GPT-4.1 80%.”
Unlikely and uncommon? According to Anthropic, “harmful behaviors like this could emerge in the real world if proactive steps aren’t taken” – at a time when Character.AI is making safety filters ‘less overbearing.’
“Anthropic tested 16 top systems from OpenAI, Google, Meta, xAI, and others. The results were terrifying. Some models chose blackmail. Others engaged in espionage,” Vigilant Fox reported.
“The most chilling results from the study? Many AIs chose lethal actions—even after being told to protect human life… Top AI models were willing to kill—cutting off an employee’s oxygen in a desperate bid to stay online. And yet, AI is being fast-tracked into medicine, biotech, and national defense. OpenAI just landed a $200 million Pentagon contract. Tech execs from Meta and Palantir are being sworn into the Army Reserve.”
Tollbooths – and more – are necessary to stop, or at least curtail, the theft and the dangers that AI poses. We’re sure that other enterprising founders are already focusing, or soon will be, on solutions to the other problems they see with AI. Since Big Tech seems to have neither the desire nor the incentive to provide guardrails – or, given the blinding rate of adoption, even speed bumps – on this information superhighway, pay attention, founders, to a lesson we didn’t seem to learn the first time around, considering the theft of our privacy: it’s time to create roadblocks. After all, if you can’t join ‘em, beat ‘em. Onward and forward.