Meta Expands Teen Account Features to Facebook and Messenger
Meta has expanded its Teen Accounts feature to include Facebook and Messenger, building on the privacy safeguards already available on Instagram.
The move, outlined in a recent update, aims to create a safer online experience for younger users across its platforms.
Teen Accounts will feature enhanced privacy settings, restricting exposure to inappropriate content and limiting unwanted interactions.
This expansion is part of Meta's ongoing efforts to address criticism over its protection of young users from online risks.
The new features, initially launching in the US, UK, Australia, and Canada, will soon be available in additional countries.
Key updates include a parental-approval requirement for users under 16 to host live video, parental approval before teens can disable the tool that blurs images suspected of containing nudity in direct messages, and stronger content filters.
For users under 18, the platform limits exposure to sensitive content, restricts who can message them, and keeps their profiles private by default.
While teens aged 16 and 17 can adjust some settings, younger users will need parental consent to make changes.
Among the new features are:
Messaging Limits: Teens can only receive messages from accounts they follow or are connected with.
Sensitive Content Filters: The most restrictive filters are enabled by default.
Interaction Restrictions: Teens can only be tagged or mentioned by accounts they follow, and offensive language is automatically filtered.
Time Limits: Teens will be prompted to take breaks after 60 minutes of daily use.
Sleep Mode: Automatically activates from 10 pm to 7 am, silencing notifications and sending auto-replies.
Parental Controls: Guardians can monitor recent contacts, set usage limits, and block access during specific hours.
Meta reports that 97% of teens aged 13 to 15 have opted to keep these protections in place, and more than 54 million users currently benefit from Teen Accounts.
Importantly, Meta’s teen-specific content filters will override the company’s recent policy changes that relaxed restrictions on certain forms of hate speech.
For users under 18, content targeting transgender or non-binary individuals with derogatory language will remain blocked.
A Meta spokesperson said:
“There is no change to how we treat content that exploits children or content that encourages suicide, self-injury or eating disorders, nor do our bullying and harassment policies change for people under 18.”
This expansion reflects Meta's ongoing response to growing scrutiny from lawmakers, parents, and regulators regarding its responsibility to safeguard young users online.
New Safety Features for Teen Accounts on Instagram
Meta is enhancing its Teen Accounts with additional safeguards, particularly for Instagram Live and direct messages (DMs).
Starting soon, teens under 16 will need parental approval to broadcast live, ensuring more oversight for younger users.
Additionally, the company is introducing a new requirement for parental consent before teens can disable the feature that automatically blurs images suspected of containing nudity in DMs.
These updates, set to roll out in the coming months, aim to further protect teens from potential online risks while promoting safer engagement on the platform.
Judge Greenlights Key Claims in Landmark Legal Case
US District Judge Yvonne Gonzalez Rogers recently issued a 102-page decision allowing many consumer protection claims from 34 state attorneys general to proceed against Meta.
These claims focus on violations of the Children’s Online Privacy Protection Act (COPPA), which mandates parental consent before collecting data from users under 13.
Meta had moved to dismiss these claims, arguing that Facebook and Instagram are not directed at children.
However, Judge Gonzalez Rogers disagreed, ruling that content hosted on platforms—even if posted by third parties—can be considered directed at children.
The judge also found that Meta’s design and implementation of certain product features, such as infinite scroll and autoplay, could reasonably be seen as unfair or unconscionable under both federal and state laws.
However, she acknowledged the protection afforded to Meta under Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content.
As a result, some of the features cited by the states, like those encouraging prolonged engagement, remain protected under Section 230.
Nonetheless, the judge noted that other features, such as appearance-altering filters, time-restriction tools, and Instagram’s multi-account function, do not involve the publication of third-party content and therefore fall outside Section 230’s protections, a point she had already clarified in a 2023 ruling.