Nothing says “competent government” like rolling out a national surveillance system that can be defeated by a $2 Halloween mask, yet here we are!
On 10 December 2025, Anthony Albanese’s Online Safety Amendment (Social Media Minimum Age) Act 2024 will be forced onto the entire country, requiring many Australians to verify their age just to keep using ordinary social media platforms. This is being sold as child protection, but a Senate Committee has already warned the technology isn’t fit for purpose and recommended delaying the rollout until mid-2026. Instead of fixing the failures, the Government is sprinting ahead with a system so flimsy that children overseas have bypassed identical checks using VPNs, AI filters and, yes, Halloween costumes. But Albo wants you to think this is “world-class policy.”
And if this really were about protecting children, someone in Canberra might like to explain why Bluesky, created and backed by Jack Dorsey, the former CEO of Twitter before Elon Musk took over, is exempt from the rules entirely. Bluesky is widely recognised by users and commentators as a platform dominated by “woke” subcultures, hyper-politicised communities and loose moderation standards, yet it somehow escapes the Government’s safety net. Even Pornhub, one of the world’s largest adult-content platforms, has been carved out of the legislation, while everyday Australians may be required to hand over biometric data just to log into Facebook.
As Australian lawyer Peter Fam and legal academic Dr Alexander Hatzikalimnios point out, the Commonwealth has once again chosen to hide behind private companies rather than enforce its own policy. As they wrote, the Federal Government is “forcing social media companies to enforce a ban on children as opposed to enforcing one themselves. This approach is cowardly.” It’s the same tactic used during Covid: outsourcing coercive rules to businesses to avoid legal accountability. They further explained the motive behind this outsourcing: “If the Government imposed these limits directly, they would risk complaints for breaching human rights obligations. By making companies carry the burden, the Commonwealth shields itself from liability.”
The pattern is clear: when a policy is unpopular, risky, or legally questionable, the Australian Government hides behind a middleman.
The underlying technology is so weak that experts are openly mocking it.
Senior Lecturer Dr Shaanan Cohney of the University of Melbourne delivered the most damning assessment, noting that every age-verification vendor they tested had at least one trivial workaround:
“Each vendor had one bypass that was easily accomplished with things you could buy at your local $2 shop,” he said.
That’s the level of competence we’re dealing with: Australia’s newest national online safety framework can be defeated by a child in a discount store.
And it isn’t theoretical: the UK has already proved it. Under its Online Safety Act rollout, children were able to bypass age verification almost instantly using VPNs, AI filters, scanned images and even video-game character faces. The British regulator itself was forced to acknowledge widespread workarounds and enforcement challenges within weeks of implementation.
The Senate’s suggested delay wasn’t a courtesy; it was a warning that the infrastructure cannot reliably perform the function Parliament expects of it.
Despite this, the Government is rolling ahead, not because the system works, but because child safety has become a convenient pretext for expanding digital control. When a policy gives Canberra more power over how Australians access the internet, the flaws suddenly stop mattering.
And that power grab becomes even more obvious when you see who is forced to comply, and who quietly gets a free pass.
Under Section 63C of the Act, major platforms like Facebook, Instagram, TikTok, YouTube, Snapchat, Reddit, X (formerly Twitter), Threads, Kick and Twitch must enforce biometric age checks.
But through Section 63C(3), the Minister is allowed to exempt any platform from compliance. And they have.
Pornhub – Exempt.
4chan – Exempt.
Bluesky – Exempt.
Telegram and Roblox – Exempt too.
If the objective were genuinely to protect children, the platforms hosting the most explicit or harmful content would not be the first ones exempted. But they were. Because this isn’t child protection; it’s selective regulation with a political flavour.
The Real Danger is Normalising Biometric Gatekeeping
Fam and Dr Hatzikalimnios warn that the social media ban is not just a broken policy; it is a gateway to something far more concerning. The legislation doesn’t introduce a Digital ID, but it does introduce the behaviour required to make Digital ID normal. As they explain:
“While the ‘social media ban’ doesn’t explicitly purport to implement Digital ID on any broad level, it does seek to normalize an expectation that a third party entity (in this case, corporations) will restrict a ‘class’ (in this case, children) from accessing their services on the basis that the State has decided that it is ‘unsafe’ for them to do so.”
Because Section 63DB of the Act prohibits platforms from collecting government ID documents, the only practical verification method becomes biometric age estimation, meaning Australians will now be conditioned to scan their face to access everyday digital platforms. This is the first time Australians will be explicitly asked to scan their face to prove they have permission to enter a public online space. Once this becomes normal, expanding it to banking, housing, government sites, travel or purchases becomes trivial for regulators.
Meanwhile, the Government ignored the one solution that would actually protect children online: the Senate’s proposed digital duty of care. As Dr Hatzikalimnios explains:
“A digital duty of care requires platform design to not only inherently minimise content or function-related harm but also severely restrict personal data extraction and prohibit algorithmic manipulation.”
But instead of regulating Big Tech, the Albanese Government chose to regulate ordinary Australians.
Where This Leaves Australians
The Social Media Ban isn’t a full-blown Digital ID, not yet anyway, but it is unmistakably the scaffolding for one. It seeds biometric checkpoints into everyday life, pushes Australians toward identity-controlled access to the online world, and hands the Minister the power to decide which platforms must submit and which ones walk away untouched. It quietly normalises surveillance and compliance, conditioning Australians to believe that logging into social media, something we’ve done freely for decades, now requires State-approved identity clearance.
If this were really about protecting children, these exemptions would make the legislation collapse under its own hypocrisy. They expose the truth: this isn’t about safety. It’s about control, dressed up in the language of responsibility and care to make it easier for the public to swallow.
And Australians have every right, and every obligation, to push back before this soft-launch of digital gatekeeping becomes the new normal.
Learn more at:
Senate Recommends Delaying The Social Media Ban (The Spectator) – Dr Alexander Hatzikalimnios
The Social Media Ban for Kids (Maats Method) – Peter Fam & Dr Alexander Hatzikalimnios
eSafety.gov.au – Which Platforms Are Banned