In the 2001 Futurama episode “I Dated a Robot,” the animated series’ protagonist Fry is warned that robot girlfriends threaten human civilization.
More than 20 years later, Artificial Intelligence (AI) has made what was once a far-off sci-fi concept a reality. And just as Fry was cautioned against the dangers of dating a robot, researchers are increasingly concerned about the potential social consequences of AI companionship.
In a recent article on the risks of social AI, researchers at the MIT Media Lab highlighted the rising prevalence of AI companionship.
Their analysis of a million ChatGPT interaction logs revealed that sexual role-playing was the second most popular use of AI. Meanwhile, increasingly customizable AI character creators are giving rise to a host of novel AI relationship options. These range from AI companions that replicate the personalities of well-known celebrities, à la Futurama, to chatbots designed to talk like deceased friends. But although the phenomenon is accelerating rapidly, no one knows what the long-term consequences will be.
“We’re seeing a giant, real-world experiment unfold, uncertain what impact these AI companions will have either on us individually or on society as a whole,” Robert Mahari and Pat Pataranutaporn noted.
Their concerns echo Futurama’s tongue-in-cheek warning that robot partners could lead to the breakdown of real human relationships. As they observe:
“AI wields the collective charm of all human history and culture with infinite seductive mimicry. [Its allure] lies in its ability to identify our desires and serve them up to us whenever and however we wish.”
Just as there is growing recognition of the dangers of the social media dopamine loop, AI developers are becoming concerned about the addictive potential of a technology designed to fulfill users’ desires.
In the dystopian world depicted by Pataranutaporn and Mahari, AI creates “an echo chamber of affection that threatens to be extremely addictive.” Nor are they alone in highlighting the threat of AI addiction.
During an interview at The Atlantic Festival last year, OpenAI CTO Mira Murati cautioned that increasingly sophisticated AI models risk becoming “even more addictive” than the technology that exists today, “and we sort of become enslaved to them.”
Unlike traditional chatbots that communicate exclusively through text, the new generation of lifelike voice generators adds a human element that makes it easier than ever to forge relationships with AI.
For instance, in a report outlining the safety assessment carried out prior to releasing GPT-4o, OpenAI flags “emotional perception and anthropomorphism risks” as a unique concern of its latest voice-enabled AI model.
Two decades after the launch of Facebook, regulators are starting to hold social media companies accountable for their products’ addictive qualities, starting with their impact on children and young people. Now, as teenagers’ access to AI expands, chatbot developers could become their next target.