- By Prateek Levi
- Fri, 20 Jun 2025 06:18 PM (IST)
- Source: JND
Social Media Ban: Australia has taken a major step toward protecting children who use social media, and it could become the first country to enforce a nationwide ban on social media access for children under 16. The ban now looks likely to go ahead, after a government-backed trial found that age verification technology can work privately and effectively.
The Age Assurance Technology Trial has now concluded, and its results could pave the way for sweeping new regulations aimed at keeping kids off adult social media platforms.
Over 1,000 school students and several hundred adults took part in the trial, which was led by the UK-based nonprofit Age Check Certification Scheme (ACCS). The goal? To find out whether modern age verification tools can accurately determine someone’s age without harvesting too much personal data.
According to ACCS CEO Tony Allen, the answer is yes—at least in principle.
“There’s no significant tech barrier to age assurance in Australia,” Allen said during an online briefing. While he acknowledged that “no system is perfect”, he added that “age assurance can be done in Australia privately, efficiently and effectively.”
That emphasis on privacy is key. Although some tools may gather more data than necessary, Allen cautioned against going too far: “There’s a risk some solutions over-collect data that won’t even be used. That’s something to watch.”
So how would the system actually work in practice?
The proposed model relies on multiple layers to verify a user’s age. First, traditional ID-based checks—like using a passport or driver’s licence—are verified through independent services, ensuring platforms themselves never handle the documents directly.
Next, biometric estimation comes into play. A selfie or short video is analysed by AI to estimate the user’s age. This process is quick and doesn't store biometric data, adding an extra layer of verification without creating a privacy risk.
A third element, known as contextual inference, uses indirect signals like email domains, language, and online behaviour to further gauge age. It’s not accurate enough to stand alone, but it strengthens the overall framework when combined with other methods.
Put together, the system is designed to make it much harder for underage users to slip through while maintaining respect for individual privacy.
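The layered decision logic described above can be sketched in code. This is purely illustrative: the class names, thresholds, and combination rules below are assumptions for the sake of the example, not part of any published Australian specification.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the three-layer age-assurance model described
# in the article. All names and thresholds here are assumptions.

@dataclass
class AgeSignals:
    verified_age: Optional[int]      # layer 1: independent ID-check service
    estimated_age: Optional[float]   # layer 2: biometric (selfie/video) estimate
    contextual_age: Optional[float]  # layer 3: weak contextual inference

def is_likely_over_16(signals: AgeSignals) -> bool:
    """Combine the layers: a verified ID decides outright; otherwise
    require the biometric estimate, with a safety margin, and let the
    contextual signal only corroborate, never stand alone."""
    if signals.verified_age is not None:
        return signals.verified_age >= 16
    # Contextual inference alone is too weak to grant access.
    if signals.estimated_age is None:
        return False
    estimates = [a for a in (signals.estimated_age, signals.contextual_age)
                 if a is not None]
    # Assumed safety margin: estimated ages must clear 18, not 16,
    # to absorb estimation error.
    return all(a >= 18 for a in estimates)
```

Note the asymmetry in this sketch: a verified ID is trusted at face value, while the softer estimation layers must agree and clear a higher bar, mirroring the article's point that contextual inference "strengthens the overall framework" but cannot stand alone.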
Starting in December 2025, major platforms like Instagram, TikTok, Snapchat, and X will be legally required to take "reasonable steps" to keep children off their services. Failing to comply could be costly, with penalties of up to A$49.5 million (about US$32 million) per breach.
Some platforms, such as YouTube, WhatsApp, and Google Classroom, are currently exempt from these requirements.
The rest of the world is watching closely. Countries including the UK, New Zealand, and EU members are keeping tabs on how Australia rolls this out, as they explore similar safeguards for children online.
The trial is already being hailed by the Australian government as a significant milestone. A spokesperson for the eSafety Commissioner’s office called the findings “a useful indication of the likely outcomes from the trial” and noted that when implemented well, the technologies “can be private, robust and effective”.
Still, it’s not all smooth sailing. Kids are known for finding workarounds—VPNs, shared devices, or borrowing someone else’s login could all undermine these systems. It’ll now be up to the platforms to find ways to detect and stop those tactics, a level of responsibility many have never been held to before.