AI in the Age of Social Media Blowback

Elon Musk has been warning us about the dangers of AI for quite some time now, saying that we need to regulate AI before it becomes a danger to humanity. As The Verge put it, he has famously compared work on AI to “summoning the demon” and warned time and time again that the technology poses an existential risk to humanity.

The tech community has a bad habit of shooting first and asking questions later (also known as asking forgiveness, not permission), which has led to rampant data collection and invasions of privacy. And in case you missed it, if you’re using an Android phone, Google may be tracking every move you make.

The tech cartel was certainly taken to task at the World Economic Forum in Davos last week, with Salesforce founder Marc Benioff suggesting that the seemingly addictive Googles, Facebooks, and Twitters of the world (social media, in short) be treated like a health issue, similar to tobacco and sugar.

“I think that for sure, technology has addictive qualities that we have to address, and that product designers are working to make those products more addictive and we need to rein that back,” Benioff told “Squawk Alley,” according to CNBC.

The Fox Guarding the Hen House, in Internet Time

There’s no doubt that some sort of regulation is coming, but consider the forest, not just the trees: Facebook’s first president, Sean Parker, admitted to Axios that Facebook was designed to exploit human “vulnerability.” For the record, while the company was founded in 2004, as of 2006 anyone over the age of 13 was allowed a presence on the platform, and it wasn’t until some 11 years later that Parker went public with Facebook’s manipulations.

We’re still in the relatively early days of artificial intelligence, and among its biggest advocates are…the tech cartel, including Facebook, Google, Twitter, et al. You know, those guys who are currently under the microscope for their various manipulations, and certainly on the radar of various governments for their sometimes questionable business practices, the most notable example being the $2.7B fine the EU levied on Google for antitrust violations.

As the Wall Street Journal noted, “Most arguments about ‘net neutrality’ neglect an important reality: The internet most of us use is already far from neutral, thanks to the profit-focused algorithms and opaque content guidelines by which social-media companies such as Facebook, Twitter and Instagram govern their sites…We do know that Facebook’s experiments with shifting content guidelines sometimes hurt free speech and small businesses’ finances.”

Tech and the Law of Unintended Consequences

We create technologies without foreseeing their dangers. Case in point, and who’d have thought it: the fitness tracking app Strava gave away the locations of secret US army bases. “Data about exercise routes shared online by soldiers can be used to pinpoint overseas facilities,” according to The Guardian.

The idea of bigger, better, faster has certainly served the tech community well, and it has no doubt created monolithic, if not monopolistic, platforms and companies that are spiraling beyond our control. AI is a whole different beast and potentially far more dangerous, especially once it’s out in the wild. People, parents, and governments are having a difficult enough time addressing the clear and present dangers that social media platforms pose; those dangers will become exponentially greater with AI in the mix, as Musk warned.

Facebook: Bringing the World Closer to…What?

If a platform like Facebook, with its mere algorithms and a modicum of human intervention, can’t even control so-called Russian hackers or bots, or be trusted to properly vet news, given its particular biases, how can it be trusted with AI, which can be hacked or potentially go rogue? We certainly can’t afford to wait eleven years before finding out, in a worst-case scenario, the true dangers that have been unleashed. And let’s not forget that AI is already being embedded into robotics, killer robots, and autonomous cars without mechanisms for human intervention, and the tech community has never been big on making its platforms and products bulletproof when it comes to security.

Time to be far more mindful and circumspect. We’ve witnessed how effective self-policing has been. It may be time for oversight from a council composed of representatives from the business community and government. It’s one thing when these companies have their finger on the pulse. Quite another if they could potentially have their finger on the button.

Onward and forward.