Meta’s assault on privacy should serve as a warning against AI
Opinion by Callum Kennard

Facebook was the worst thing to happen to user privacy over the last two decades. Artificial intelligence could be the worst thing to happen in the days ahead.

In an increasingly AI-driven world, blockchain could play a critical role in preventing the sins committed by apps like Facebook from becoming widespread and normalized.
Artificial intelligence platforms such as ChatGPT and Google’s Bard have entered the mainstream and have already been accused of inflaming the political divide with their biases. As foretold in popular films such as The Terminator, The Matrix and, most recently, Mission: Impossible — Dead Reckoning Part One, it’s already become evident that AI is a wild animal we’ll likely struggle to tame.
From democracy-killing disinformation campaigns and killer drones to the total destruction of individual privacy, AI has the potential to transform the global economy and likely civilization itself. In May 2023, global tech leaders penned an open letter that made headlines, warning that the dangers of AI technology may be on par with those of nuclear weapons.
Related: Girlfriends, murdered kids, assassin androids — Is AI cursed?
One of the most significant fears surrounding AI is the lack of transparency in how it is trained and programmed, particularly in deep learning models, which can be difficult to interpret. Because sensitive data is used to train AI models, those models can be manipulated if that data becomes compromised.
In the years ahead, blockchain will be widely utilized alongside AI to enhance the transparency, accountability and auditability of its decision-making processes.

“Chat GPT will make fun of Jesus but not Muhammad” — E (@ElijahSchaffer), September 2, 2023 (pic.twitter.com/LzMXBcdCmw)
For instance, when training an AI model using data stored on a blockchain, the data’s provenance and integrity can be ensured, preventing unauthorized modifications. Stakeholders can track and verify the decision-making process by recording the model’s training parameters, updates and validation results on the blockchain.
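To make that concrete, here is a minimal sketch of how such provenance records could work, using a simple append-only hash chain in Python as a stand-in for an actual blockchain. The `ProvenanceLedger` class, its record fields and the sample events are illustrative assumptions rather than any specific platform’s API; the point is that each record commits to the one before it, so later tampering with a dataset hash, a training parameter or a validation result is detectable.

```python
import hashlib
import json
import time


def sha256_hex(data: bytes) -> str:
    """Tamper-evident fingerprint of arbitrary bytes."""
    return hashlib.sha256(data).hexdigest()


class ProvenanceLedger:
    """Append-only hash chain standing in for an on-chain audit log.

    Each record commits to the hash of the previous record, so altering
    any earlier entry (a dataset hash, a hyperparameter, a validation
    score) breaks every hash that follows it.
    """

    def __init__(self) -> None:
        self.records: list[dict] = []

    def append(self, payload: dict) -> str:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {"timestamp": time.time(), "prev_hash": prev_hash, "payload": payload}
        record_hash = sha256_hex(json.dumps(body, sort_keys=True).encode())
        self.records.append({"hash": record_hash, **body})
        return record_hash

    def verify(self) -> bool:
        """Re-walk the chain; returns False if any record was modified."""
        prev_hash = "0" * 64
        for rec in self.records:
            body = {k: rec[k] for k in ("timestamp", "prev_hash", "payload")}
            if rec["prev_hash"] != prev_hash:
                return False
            if rec["hash"] != sha256_hex(json.dumps(body, sort_keys=True).encode()):
                return False
            prev_hash = rec["hash"]
        return True


# Commit a training run's inputs and results as they happen.
ledger = ProvenanceLedger()
dataset = b"...training corpus bytes..."  # placeholder data
ledger.append({"event": "dataset_registered", "dataset_sha256": sha256_hex(dataset)})
ledger.append({"event": "training_run", "hyperparameters": {"lr": 3e-4, "epochs": 10}})
ledger.append({"event": "validation", "accuracy": 0.91})
assert ledger.verify()  # fails if any earlier record has been tampered with
```

On a real chain, the same records would be written as transactions or smart-contract events, and verification would be performed by the network rather than a local loop.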
In use cases like this, blockchain can play a leading role in preventing the unintentional misuse of AI. But what about intentional misuse? That’s a much more dangerous scenario, and unfortunately one we’ll likely face in the coming years.
Even without AI, centralized Big Tech has a history of aiding and abetting business models that profit by selling the manipulation of individuals and democratic processes to the highest bidder, as Facebook’s Cambridge Analytica scandal made famous. In 2014, the “Thisisyourdigitallife” app offered to pay users for taking personality tests, which required permission to access their Facebook profiles and those of their friends. Essentially, Facebook allowed Cambridge Analytica to spy on users without their permission.
The result? Two historic, mass-targeted psychological public relations campaigns that had a relatively strong influence on the outcomes of both the 2016 United States presidential election and the United Kingdom’s European Union membership referendum the same year. Has Meta (formerly Facebook) learned from its mistakes? It doesn’t look like it.
In July, Meta unveiled its latest app, Threads. Touted as a rival to Elon Musk’s Twitter, it harvests the usual data that Facebook and Instagram collect. But, similar to TikTok, when Threads users signed up, they unwittingly gave Meta access to their GPS location, camera, photos, IP information, device type and device signals. It’s standard Web2 practice to justify this kind of collection by pointing out that “users agreed to the terms and conditions.” In reality, it would take the average internet user an estimated 76 working days to read the privacy policy of every app they use. The point? Meta now has access to almost everything on the phones of over 150 million users.
Enter AI. If the aftereffects of the Cambridge Analytica scandal warranted concern, can we even begin to comprehend the impact of a marriage between this invasive surveillance and the godlike intelligence of AI?
The unsurprising remedy here is blockchain, but the solution isn’t as straightforward as it might seem.
Related: The absurd AI mania is coming to an end
One of the main dangers of AI lies in the data it can collect and then weaponize. In social media, blockchain technology can potentially enhance data privacy and user control, which could help mitigate Big Tech’s data harvesting. However, it’s unlikely to “stop” Big Tech from taking sensitive data.
To truly safeguard against the intentional dangers of AI and ward off future Cambridge Analytica-like scenarios, decentralized, preferably blockchain-based, social media platforms are required. By design, they reduce the concentration of user data in one central entity, minimizing the potential for mass surveillance and AI disinformation campaigns.
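As a rough illustration of that design principle, the sketch below shows posts being encrypted on the user’s device before they ever reach storage, so no single operator accumulates readable user data. It assumes the third-party `cryptography` package, and the `storage` dict along with the `publish` and `read` helpers are hypothetical stand-ins for a decentralized, content-addressed storage layer, not any existing platform’s API.

```python
# Requires the third-party "cryptography" package: pip install cryptography
import hashlib

from cryptography.fernet import Fernet

# Hypothetical stand-in for decentralized, content-addressed storage
# (an IPFS-like network in a real system); it only ever sees ciphertext.
storage: dict[str, bytes] = {}


def publish(author_key: bytes, post: str) -> str:
    """Encrypt on the user's device, then store under a content address."""
    ciphertext = Fernet(author_key).encrypt(post.encode())
    content_id = hashlib.sha256(ciphertext).hexdigest()
    storage[content_id] = ciphertext
    return content_id


def read(key: bytes, content_id: str) -> str:
    """Only holders of the key -- the author and whoever they share it
    with -- can recover the plaintext; storage operators cannot."""
    return Fernet(key).decrypt(storage[content_id]).decode()


key = Fernet.generate_key()  # generated and kept on the user's device
cid = publish(key, "Hello from a platform that never sees my data.")
print(read(key, cid))
```

In a production system, key distribution and discovery would be the hard part; the sketch only shows why a design like this leaves no central trove of readable data for an AI to harvest.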
Put simply, through blockchain technology, we already have the tools needed to safeguard our independence from AI at both the individual and national levels.
Shortly after signing the open letter to governments on the dangers of AI in May, OpenAI CEO Sam Altman published a blog post proposing several strategies for the responsible management of powerful AI systems. They included collaboration among the major AI developers, deeper technical study of large language models and the establishment of a global organization for AI safety.
While these measures are a good start, they fail to address the systems that make us vulnerable to AI — namely, the centralized Web2 entities such as Meta. To truly safeguard against AI, more development is urgently required in blockchain-based technologies, particularly in cybersecurity, along with a genuinely competitive ecosystem of decentralized social media apps.

Callum Kennard is the content manager at Storm Partners, a Web3 solutions provider based in Switzerland. He’s a graduate of the University of Brighton in England.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.