
Teens Gain Access to Google Gemini AI: What Could Go Wrong?

Published August 2, 2024 3:17 PM
James Morales

Key Takeaways

  • Google is expanding Gemini access to more teens.
  • Only Google accounts registered to users 13 or over may use Gemini.
  • There are ongoing debates over the risks and benefits of chatbots in areas such as education and online safety.

In an update on Thursday, Aug. 1, Google said it would expand Gemini access to more teens globally and introduce a teen-specific onboarding process.

The announcement highlights the growing normalization of chatbot usage among young people, which has sparked a debate over the risks and benefits of the technology in areas such as education and online safety.

Pros and Cons of AI in School

Chatbots can transform the educational experience, making it more efficient, engaging, and supportive for students.

As learning tools, chatbots can help answer questions outside of school hours, tailor educational resources to individual needs, and provide real-time feedback to students.

Of course, the technology is not without its detractors. After the initial success of ChatGPT in 2022, educational institutions around the world wrestled with the implications of the new tools.

Early responses often centered on the charge that using chatbots for assignments undermined the need for original thinking. Teachers also worried that AI outputs might not be appropriate for minors.

Initially, school boards moved to restrict the technology. However, facing the reality that chatbot bans are nearly impossible to enforce, many educators have since softened their stance. 

Reconsidering Chatbot Bans 

In the US, the movement to widely ban ChatGPT began when the two largest school districts in the nation, New York City Public Schools (NYCPS) and Los Angeles Unified, blocked access to the service from school Wi-Fi networks and devices.

Other districts soon followed, but within months, the tide started to turn.

The first US school system to block access to ChatGPT, NYCPS, reversed its ban after four months. Drawing on consultations with tech industry representatives and experts in the field, the district changed course, introducing guidelines for chatbot use in the classroom instead.

Today, fears that chatbots could be detrimental to the quality of young people’s education haven’t completely subsided. But the initial crackdown has increasingly given way to an emphasis on responsible and appropriate use.

That being said, both in and outside of education, the question of AI safety remains in play.

Is AI Safe for Young People?

Because the widespread use of conversational AI is still so new, no one knows what its long-term consequences will be. Nevertheless, early research has already highlighted potential health and safety concerns.

For young people especially, mental and emotional well-being is paramount. For instance, one study suggested that interactions with AI that generates human-like responses could undermine important social ties, potentially leading to isolation and loneliness.

[Screenshot of the Wysa app] AI-powered mental health apps like Wysa could help young people.

Proponents of mental health chatbots argue they can help mitigate an epidemic of teen depression and anxiety. But there is currently limited evidence to support the efficacy of AI therapists.

There is also the worry that unchecked AI bias could exacerbate existing inequalities, compounding problems that already have a negative effect on many teens.

Teens' AI Use Subject to Regulation

Given the concerns about chatbots' impact on minors, regulators are wrestling with how to fit the new technology into existing online safety and child protection frameworks.

In the EU, Big Tech companies have been scrutinized for failing to protect teens from social media addiction. As the potential dangers of AI emerge, the same legislation could be used to regulate chatbot providers.

Of course, conversational AI has come a long way since some of the first models notoriously went rogue, generating a torrent of hateful and dangerous content. (Remember Microsoft’s Tay?)

Machine hallucinations are increasingly rare, and extracting dangerous information from mainstream models like Google's Gemini is harder than simply searching for it online the old-fashioned way.

Nevertheless, considering the rapid and sometimes unpredictable way AI evolves, it is important to keep an eye on how teens use it and what the consequences might be.
