'Fake Text' Introduces Completely Unprecedented Risks…
Following the invention of deepfakes, which many fear will cause damage on a global scale if used by the wrong hands, a terrifying new AI-powered technology called “Fake Text” has emerged with the potential to wreak even more havoc.
This alarming new technology uses AI to analyze text and then generate incredibly detailed and realistic written responses to it, giving the impression that an exchange between humans is taking place. The AI analyzes text patterns to put together disturbingly lucid text, typified by this Reddit thread.
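The real system behind "Fake Text" (OpenAI's GPT-2) is a large neural network trained on web text, but the underlying idea of learning "what tends to follow what" can be loosely illustrated with a toy next-word sampler. The sketch below is a deliberately simplified stand-in, not OpenAI's method; the function names and corpus are invented for illustration.

```python
import random
from collections import defaultdict

def build_model(text):
    """Record, for each word, the words observed to follow it in the text."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length=12, rng=None):
    """Walk from a seed word, sampling a statistically likely next word each step."""
    rng = rng or random.Random(42)
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:          # dead end: no word ever followed this one
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical miniature corpus; a real model trains on billions of words.
corpus = ("the market is rising and the market is falling and "
          "analysts say the market is uncertain")
model = build_model(corpus)
print(generate(model, "the"))
```

Even this crude word-pair model produces locally plausible phrases; scaling the same statistical intuition up to a deep network trained on a web-scale corpus is what makes the output "disturbingly lucid."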
Launched by leading global AI research lab OpenAI, Fake Text is already recognized as so potentially dangerous that even its inventors have publicly warned about it. Speaking to the House Intelligence Committee recently, OpenAI policy director Jack Clark said that the organization expects Fake Text to significantly aid the production of fake news, the impersonation of personalities online, and the generation of “troll-grade propaganda for social networks.”
The existing threats posed by fake news and deepfakes are indeed severe, but not insurmountable. Soon, it may be possible for ordinary people to use detection tools to differentiate between videos captured by a camera and digitally stitched deepfakes. Similarly, fake news can be combated through careful curation of credible news media and clear signposting of news sources by Internet giants like Google, Facebook, and WhatsApp.
Fake Text, on the other hand, is an entirely different animal. Instead of merely impersonating trusted sources and creating a public discourse based on lies and misdirection, it can also impersonate the audience having the discussion. This means, for example, that it is now possible to game the SEO systems that rank websites and promote fake news using the very tools designed to fight it.
Instead of going through the hard work of creating content and building credibility with a growing audience, a website can be artificially grown and boosted using AI-generated content that clears the human-quality bar. Almost overnight, a fake news website headquartered somewhere in China could achieve top-level ranking in the U.S. if enough Fake Text bots create content that links back to it.
Another implication of Fake Text is that the scourge of troll bots spreading propaganda and carrying out “astroturfing” on social media will go positively meta.
Presently, it is reasonably easy to spot a bot account because the quality of its engagement is predictably low. Fake Text will change that almost overnight.
Once the technology is fully released, the bot accounts shilling for exit scam ICOs, dodgy health products, and political campaigns will graduate from being crude weapons targeting only the most gullible demographic to being sophisticated, well-spoken, intelligent-sounding accounts across all social media.
We are talking fake news on steroids and crystal meth.
In the movie Jurassic Park, Ian Malcolm delivered the iconic line:
“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
At some point, we are going to have to ask the question, “Should we?” Is there any compelling and inevitable reason why this technology should exist? This is a technology that not only promises to make it impossible to believe anything ever again, but also promises to replace core human activities that have never been automated before, such as writing poetry, product reviews, and short stories.
Is this what the human species really wants for itself in 2019 – the power to disrupt itself for no reason? Now more than ever, it is time to start asking those hard questions.
Watch CCN’s Take on Deepfake Technology:
Disclaimer: The views expressed in the article are solely those of the author and do not represent those of, nor should they be attributed to, CCN.
Last modified: July 7, 2019 5:28 PM UTC