Instagram (owned by Meta, which is designated as extremist and banned in Russia) is introducing new restrictions on teen accounts as part of its efforts to protect young users from inappropriate content, TechCrunch reports.

By default, users under 18 will only see content consistent with a PG-13 rating, which excludes themes such as excessive violence, explicit nudity, and drug use. This setting cannot be changed without the explicit consent of a parent or guardian.
The social network is also introducing a stricter content filter called Restricted Content. Teens with this setting enabled will be unable to view posts covered by it or leave comments on them. Starting next year, the company plans to place tighter limits on the conversations these teens can have with AI chatbots; teen-account settings already apply to AI chats.
The move comes amid lawsuits against chatbot developers such as OpenAI and Character.AI over alleged harm to users. OpenAI previously introduced new restrictions for ChatGPT users under 18 and trained its chatbot to limit flirtatious conversations. Earlier this year, Character.AI also added new limits and parental controls.
Instagram is also expanding its teen safety controls across other areas of the app, including private messages, search, and the content feed. The service will bar teens from following accounts that share age-inappropriate content. If teens already follow such accounts, they will not be able to view or interact with that content, and those accounts will not be able to interact with teens either. In addition, such accounts will be excluded from recommendations, making them harder to discover.
The platform will also block inappropriate content that may be sent to teens in private messages. Meta (designated as extremist and banned in Russia) previously restricted teens' ability to find content related to eating disorders and self-harm. The company now also blocks search terms such as "alcohol" and "blood", preventing teens from finding content in these categories even when the words are misspelled.
The company is also testing a feature that lets parents use parental controls to flag content they believe should not be shown to teens. Flagged posts are sent to a dedicated team for review.
The changes are rolling out first in the US, UK, Australia, and Canada, with a global rollout planned over the next year.
Earlier, Russians were told how to recognize a scammer posing as a child in an attempt to extort money.