Australian Social Media Ban for Teens: Censorship vs Safety

Cassandra from Mollymook had a problem.
And I’m pretty sure this one isn’t covered in any parenting book.
Her 14-year-old son was supposed to lose access to Snapchat on December 10. That’s when Australia’s world-first social media ban kicked in. No accounts for anyone under 16, period [13].
So her kid did what I probably would’ve done at that age. He went into his Snapchat settings, changed his birthday to make himself older, and took the k-ID selfie to verify his age.
The system looked at his 14-year-old face and decided he was 23.
Cassandra found out and tried to fix it. She logged into his account to change his birthday back. But Snapchat wouldn’t let her. Turns out you can only change your date of birth a limited number of times.
Her son had beaten the ban. And now he was locked in as a verified adult.
Look, I get why Australia did this. Their government research found that 96% of kids aged 10 to 15 were on social media, and 70% of them were seeing genuinely harmful stuff. Content promoting eating disorders and suicide. Violent videos. Grooming from adults. Plus, more than half had experienced cyberbullying.
We’re talking about real harms happening to millions of kids.
So Australia built a digital wall. They passed a law with $33 million fines for platforms that don’t keep kids out. They told Meta, TikTok, Snapchat, and the rest: figure out how to enforce age, or pay up.
But here’s what nobody wants to say out loud: enforcing biological age on the internet might be fundamentally impossible without turning the whole thing into a surveillance state.
Because what happened to Cassandra’s son? That was just the facial recognition tech working exactly as designed. It looked at a teenager and made its best guess. It was wrong by nearly a decade, but it did its job. The platform took “reasonable steps.” The law was satisfied.
And the kid stayed on Snapchat anyway.
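To make that concrete, here’s a toy sketch of what a selfie age gate boils down to. Everything in it is illustrative: the function names are mine, and the actual models, thresholds, and error margins used by vendors like k-ID aren’t public.

```python
# Toy sketch of how a selfie-based age gate can pass an underage user.
# Hypothetical names throughout: estimate_age stands in for whatever
# face-age model the vendor actually runs.

from dataclasses import dataclass

@dataclass
class AgeEstimate:
    point: float   # model's best single guess, in years
    stddev: float  # typical error band for this age group

def estimate_age(selfie: bytes) -> AgeEstimate:
    """Placeholder for a vendor face-age model. Such models are
    routinely off by several years, which is exactly the gap a
    14-year-old needs to clear a 16+ threshold."""
    return AgeEstimate(point=23.0, stddev=4.0)  # the guess Cassandra's son got

def passes_age_gate(selfie: bytes, minimum_age: int = 16) -> bool:
    est = estimate_age(selfie)
    # The gate only sees the point estimate. A stricter policy could
    # require point - k * stddev >= minimum_age and route borderline
    # cases to a document check, but that means rejecting real adults.
    return est.point >= minimum_age

print(passes_age_gate(b"a 14-year-old's selfie"))  # True: "reasonable steps" taken, kid still in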
I’ve been watching this closely since December 10 because Australia is first, but they won’t be the last.
Denmark wants to ban under-15s by mid-2026. Norway and Malaysia are considering similar moves. If Australia figures this out, the rest of the world gets a blueprint. If they don’t, we’ll all have watched a country pour resources into surveillance infrastructure that doesn’t actually protect kids.
So let me break down how platforms are trying to enforce this thing and why the technology keeps failing in spectacular ways. Then we’ll dig into what this mess tells us about the impossible problem of putting age limits on digital spaces.
The Law: “Reasonable Steps” Meets Reality
The Online Safety Amendment Act doesn’t ban kids from using social media. It bans platforms from letting them have accounts. That distinction matters.
If a 15-year-old figures out how to get on Instagram, they won’t face any penalties. Neither will their parents. The entire legal weight falls on the platforms themselves, which must take “reasonable steps” to prevent anyone under 16 from holding an account [1].
That phrase, “reasonable steps,” is doing a lot of heavy lifting. The law doesn’t specify what technology platforms must use or how accurate it needs to be. It just says they have to try, and they have to prove they tried if the eSafety Commissioner comes knocking with a $33 million fine [2].
So what counts as reasonable? Turns out, not much. Facial recognition that mistakes a 14-year-old for someone in their twenties? Reasonable. Algorithmic guesses based on user behavior that get it wrong half the time? Also reasonable. The bar isn’t “keep all kids out.” It’s “make a documented effort.”
The platforms subject to the ban are the usual suspects: Instagram, Facebook, TikTok, Snapchat, X, Reddit, YouTube, and live-streaming services Twitch and Kick [3]. But, weirdly enough, WhatsApp and Messenger are exempt because they’re “private messaging.” Discord is exempt because it’s categorized as a “gaming communication tool.” Roblox, despite being a massive social platform where millions of kids interact, is exempt because it successfully argued it’s a game [4].
The logic behind these exemptions is tortured at best. A teen can’t have an Instagram account to follow their school’s announcements, but they can spend 12 hours a day on Discord servers with strangers. They’re locked out of YouTube (though they can still watch videos without an account), but they can chat with random people in Roblox. The government is essentially saying: social media bad, but only specific kinds of social media.
So, how did the ban actually roll out? Meta started deactivating accounts for users it identified as under 16 in early December, using what it calls “constructive knowledge.” That’s tech-speak for “we’ve been watching what you do and we think you’re a kid” [5]. TikTok reported deactivating over 200,000 Australian accounts in the first week [6]. Snapchat took a different approach, introducing a “frozen state” where underage accounts are locked but not deleted, preserving users’ photos and memories until they turn 16 [7].
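For a sense of what “constructive knowledge” enforcement might look like under the hood, here’s a hypothetical sketch. Meta hasn’t published its actual signals, weights, or model, so every feature name below is invented for illustration.

```python
# Hypothetical sketch of "constructive knowledge": inferring that an
# account belongs to a minor from behavior rather than a declared
# birthday. All signal names and weights here are made up.

MINOR_SIGNALS = {
    "birthday_posts_mention_teen_age": 0.4,   # friends comment "happy 14th!"
    "follows_mostly_school_accounts": 0.2,
    "active_hours_match_school_schedule": 0.2,
    "network_is_predominantly_under_16": 0.2,
}

def likely_under_16(observed: set[str], threshold: float = 0.5) -> bool:
    """Sum the weights of observed signals and compare to a cutoff.
    A real system would use a trained classifier, not a hand-weighted
    checklist, but the enforcement shape is the same: score, then act."""
    score = sum(w for signal, w in MINOR_SIGNALS.items() if signal in observed)
    return score >= threshold

# A declared age of 23 doesn't help if behavior says otherwise:
print(likely_under_16({"birthday_posts_mention_teen_age",
                       "active_hours_match_school_schedule"}))  # True -> deactivate or freeze
```

The point of the sketch is the asymmetry: this approach catches kids who lied on sign-up, which is precisely the route around the selfie gate, but it only works because the platform is continuously profiling everyone’s behavior.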
But the question everyone’s asking is: how do you actually verify someone’s age online?
When Facial Recognition Meets Teenage Ingenuity
The government spent $6.5 million on an Age Assurance Technology Trial to figure out which verification methods actually work [8]. The results? Every option sucks in its own special way.