
Parents and lawmakers have long questioned whether social media platforms like Meta's are suitable for young users. Numerous lawsuits allege that harmful content, addictive features, and interactions between children and adult strangers posed a risk to millions of young users while the company publicly downplayed the danger. Internal research, testimony from former executives and company documents paint an alarming picture of the gap between what was known inside Meta and what was said outside. Now this case, involving over 1,800 plaintiffs, may reshape how children worldwide use social media.

What the lawsuit alleges

This case forms part of a larger multidistrict litigation against Meta, YouTube, Snapchat and TikTok, brought by plaintiffs ranging from children and parents to schools and state authorities, who allege that Meta pursued growth at any cost, endangering the mental and physical safety of its users.


1. Harmful content was tolerated

Court documents claim Instagram allowed accounts linked to sexual exploitation to remain online far longer than accounts committing simple spam violations. A former safety executive described the strike threshold as “very high”, suggesting serious threats to minors were not removed quickly enough.

Meta maintains it uses strict policies, AI detection tools, and human moderators to remove such material.

2. Meta misled the public about teen mental health

Internal findings reportedly showed that reducing time on Instagram lowered anxiety and depression in teenagers. That data was not disclosed publicly, and lawmakers were reportedly given incomplete or minimising responses.

Meta says it continues to share research and has launched tools like parental controls and content limits.

3. Teens were exposed to adult strangers

The filings allege that millions of teenagers received inappropriate messages or interactions from unknown adults. Recommendations to make teen accounts private by default were delayed over growth concerns. The introduction of Reels supposedly amplified the problem.

Meta now defaults teen accounts to private and restricts unsolicited adult messaging.

4. Young children were targeted to boost engagement

Documents suggest Meta explored ways to attract preteens, comparing the strategy to historical cigarette marketing. Teams reportedly discussed encouraging earlier smartphone use by working through schools.

Meta says it blocks users under 13 and designs products with safety at the core.

5. Safety features were delayed or blocked

Initiatives such as hiding likes, limiting beauty filters, and reducing addictive usage patterns were allegedly stalled because they risked lowering engagement metrics. Internal researchers reportedly warned that these changes would help teen well-being.

Meta says it routinely updates safety features and points to Instagram Teen Accounts as proof.

6. Harmful self-harm and eating-disorder content stayed online

The lawsuit asserts that AI systems were effective only at removing obvious violations, leaving large volumes of sensitive content visible to young users. Employees had reportedly asked the company for greater transparency about these risks.

Meta says AI is paired with human reviewers and is constantly improving.

7. Meta knew its platforms could be addictive

Internal notes reportedly described Instagram as “a drug”, acknowledging its compulsive design. Despite this, Meta publicly dismissed the scale of problematic use and delayed features intended to curb overuse because they might reduce time spent on the platform.

Meta says it focuses on “problematic use”, not addiction, and offers tools like Quiet Mode and parental supervision.


Why this case matters

This lawsuit offers a rare glimpse of how global social media companies balance user safety against engagement. For families in India, where teenage social media usage has skyrocketed, it raises real concerns about exposure to harmful content and underlines the importance of supervision tools, privacy settings and time-management features in protecting young users.

Meta’s response

Meta says it is committed to keeping its teen users safe, pointing to AI moderation tools, dedicated teen accounts and parental controls, while denying any attempt to harm users or mislead the public.

Final thoughts

This lawsuit could set a global precedent for how social platforms must protect minors. While its outcome will ultimately rest with the courts, its significance for families is already clear: online safety requires transparency from platforms as well as active supervision from parents and regulators alike.
