- By Alex David
- Sat, 26 Jul 2025 06:51 PM (IST)
- Source: JND
Instagram has introduced a new layer of safety features for users aged 13–16 in India, aiming to foster a safer environment for teenagers in the country. As part of Meta’s safety initiative rolled out from July 2025, these features automatically set teen accounts to private, limit who is allowed to contact them, and give guardians new oversight controls.
Given rising concerns about online harassment and the exploitation of teens, and the fact that India is one of Instagram’s largest markets, the change is significant. It is designed to give both Indian adolescents and their parents a more hands-on way to manage the teenager’s online experience.
Key Privacy Updates for Teen Accounts
1. Private by Default
All accounts created by users under 16 are now private by default, meaning only approved followers can view their posts, tag them, or share their content.
2. Messaging Restrictions
Teen users cannot be contacted through Direct Message by strangers.
Chats now include safety notices that display details about the other account, such as its creation date and location, so teens can spot suspicious contacts.
3. One-Tap Block and Report
The ability to block and report accounts that interact maliciously has now been simplified to one tap.
4. Parental Approval for Certain Features
Going live, or turning off the filter that screens sensitive images in messages, requires a parent’s approval.
5. Screen Time and Sleep Mode
A new "Sleep Mode" silences notifications from 10 PM to 7 AM.
Teens receive a reminder notification once they have used the app for an hour in a day.
Parental Dashboard for Monitoring
Instagram’s new parental dashboard supports informed supervision of a teen’s online activity.
Parents can see who their child is chatting with, adjust safety restrictions, and monitor how much time is spent in the app.
Meta says the dashboard aims to balance safety and oversight with the teen’s privacy, without being overly invasive.
Tackling Harmful Behaviour on Meta Platforms
Meta reported taking action on more than 630,000 Instagram and Facebook accounts that violated child-safety rules between June 2020 and June 2025.
Teenagers also voluntarily blocked over 1 million accounts flagged as suspicious, and reported another 1 million accounts that were flagged as harmful.
Teenagers are now automatically warned when a suspicious or newly created account attempts to interact with them.
Why This Matters
These changes mark a shift toward proactive safety on Instagram’s part. By building protections into the app by default, the onus no longer falls on parents or young users to set up complicated controls themselves.
With millions of young Indian users on Instagram, these new safety features have the potential to greatly improve online wellbeing—but, experts warn, ongoing parental supervision remains critical.
Final Thoughts
Instagram’s newly introduced teen protections and parental supervision tools are a commendable step toward shielding adolescents from harmful content and online predators. While they cannot eliminate every risk, the measures strengthen default privacy settings, expand parental oversight, and surface warnings about potentially dangerous interactions.
As safety on the internet takes the front seat for most tech companies, Instagram’s initiative may redefine industry benchmarks for the protection of minors on social networks.