
Meta Expands Teen Safety Features on Instagram in Response to Online Safety Act

By James Morales
Key Takeaways

  • Meta is expanding Teen Accounts to Facebook and Messenger.
  • It is also introducing new features for Instagram Teen Accounts.
  • Regulations like the U.K.’s Online Safety Act have prompted Meta to improve its child protection measures.

Six months after Meta introduced dedicated Instagram accounts for teens, the company has expanded the offering to Facebook and Messenger.

With built-in protections for young users, Teen Accounts are a key part of Meta’s strategy to comply with regulations like the U.K.’s Online Safety Act.

Teen Accounts Expansion

Across its social media platforms, Meta’s Teen Accounts offer several features to protect under-16s.

Teen Accounts are private by default and restrict users’ exposure to sensitive content, such as posts that show people fighting or promote cosmetic procedures. Teen users can also only be messaged by people they follow or are already connected to.
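
In practice, this amounts to applying stricter default settings based on the age a user reports at sign-up. The Python sketch below illustrates the idea only; the class, field names, and setting values are assumptions for illustration, not Meta’s actual code.

```python
# Illustrative sketch of age-based default settings at sign-up.
# All names and values here are assumptions, not Meta's implementation.
from dataclasses import dataclass

@dataclass
class AccountSettings:
    is_private: bool                # whether the profile is private by default
    sensitive_content_filter: str   # strictness of the content filter
    messaging: str                  # who is allowed to start a conversation

def default_settings(age: int) -> AccountSettings:
    """Return stricter defaults for teen users (hypothetical logic)."""
    if age < 18:
        return AccountSettings(
            is_private=True,                     # private by default
            sensitive_content_filter="strict",   # limit sensitive content
            messaging="followed_or_connected",   # existing connections only
        )
    return AccountSettings(
        is_private=False,
        sensitive_content_filter="standard",
        messaging="anyone",
    )
```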

The latest update will bar teens from Instagram Live unless their parents grant them permission. Meanwhile, a new feature will automatically blur images containing suspected nudity in direct messages.
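
The blurring feature boils down to a classify-then-filter step applied to incoming images before they are displayed. The Python sketch below illustrates that shape; the classifier placeholder, threshold, and function names are assumptions, not a description of Meta’s actual pipeline.

```python
# Minimal sketch of a blur-on-suspected-nudity step for DM images.
# nudity_score stands in for a real ML classifier; the threshold and
# blur radius are illustrative assumptions.
from PIL import Image, ImageFilter

BLUR_THRESHOLD = 0.8  # assumed confidence cutoff

def nudity_score(image: Image.Image) -> float:
    """Placeholder for a classifier returning P(image contains nudity)."""
    raise NotImplementedError("stand-in for a real detection model")

def prepare_dm_image(image: Image.Image) -> Image.Image:
    """Blur the image before display if the classifier flags it."""
    if nudity_score(image) >= BLUR_THRESHOLD:
        return image.filter(ImageFilter.GaussianBlur(radius=24))
    return image  # below threshold: shown as-is
```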

Responding to Online Safety Concerns

Meta initially launched Teen Accounts after coming under fire for not doing enough to protect young people from social media harms like addiction, online bullying and sexual exploitation.

In the U.S., Congressional scrutiny and a rising tide of litigation have prompted the firm to enhance its child protection measures.

Meanwhile, in the U.K., the 2023 Online Safety Act codified platforms’ responsibility to provide age-appropriate services.

Social Media Age Verification

By one measure, Teen Accounts have been broadly successful, with 97% of users aged 13–15 keeping default restrictions in place.

However, the parental control system relies on users being truthful about their age when they sign up.

The Online Safety Act requires platform operators to use some form of “age verification or age estimation.”

Age verification tools generally require a user to show some form of identification to access features reserved for adults. Meanwhile, age estimation relies on artificial intelligence to assess each user’s age based on an uploaded video.
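
The difference between the two approaches can be expressed as a simple gating check. Everything in the sketch below, from the estimation placeholder to the two-year safety margin, is an assumption for illustration rather than any platform’s actual logic.

```python
# Illustrative contrast between age verification (ID check) and age
# estimation (AI model). All names and thresholds are assumptions.
from dataclasses import dataclass
from typing import Optional

ADULT_AGE = 18
ESTIMATION_MARGIN = 2  # assumed buffer: estimates near 18 aren't trusted

@dataclass
class User:
    id_document_verified: bool = False       # passed an ID-based check
    selfie_video_path: Optional[str] = None  # video provided for estimation

def estimate_age_from_video(video_path: str) -> float:
    """Placeholder for an AI model estimating age from a video selfie."""
    raise NotImplementedError("stand-in for a real estimation service")

def may_access_adult_features(user: User) -> bool:
    # Age verification: an identity document has already been checked.
    if user.id_document_verified:
        return True
    # Age estimation: only accept estimates comfortably above the threshold.
    if user.selfie_video_path is not None:
        estimate = estimate_age_from_video(user.selfie_video_path)
        return estimate >= ADULT_AGE + ESTIMATION_MARGIN
    return False
```

A real system would also need a fallback path, such as offering ID verification when an estimate falls inside the margin.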

With key compliance deadlines for the Online Safety Act looming, the U.K.’s media regulator, Ofcom, has warned social media companies that they must provide “highly effective age assurance” measures or face penalties.

As more jurisdictions enact similar legislation, debates over the best way to verify age are ongoing.

How Meta and its peers respond to the Online Safety Act will therefore be watched closely in countries like Australia that are pursuing their own online safety agendas.

Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.