
It may well be the best of times and the worst of times for age assurance. On the latter end of the spectrum, in July 2025, the United Kingdom rolled out new online age-gating requirements. The move triggered a 500,000-signature repeal petition within days, and downloads of some virtual private network (VPN) services spiked by over 1,000 percent. More recently, after Discord announced it would begin requiring age verification in March 2026, users threatened to leave in droves. High-profile breaches have only deepened public skepticism: In February, age verification vendor Sumsub disclosed a July 2024 breach it had only just discovered—18 months after attackers first gained access.
Yet age assurance—which broadly encompasses efforts to determine a user’s age online—is also unmistakably in vogue, particularly within the United States. The steady drumbeat of whistleblowers, exposés, research, and tragedies—now compounded by emerging artificial intelligence (AI) risks—has pushed states such as Arizona, California, Ohio, and Texas to adopt aggressive new age verification mandates. And in Free Speech Coalition v. Paxton, the Supreme Court upheld Texas’s age verification law as only an incidental burden on adult speech, emboldening lawmakers who once doubted the policy’s legal footing. On the technical side, age assurance and digital credentials have seemingly come of age, with some now concluding that privacy-preserving methods are technically mature and deployable at scale.
This turbulence reflects four conflicting convictions: first, that children face unique harms online; second, that children retain meaningful rights to speak and to access information; third, that parents struggle to steward their children’s digital environments; and fourth, that we nonetheless recoil at any intervention resembling surveillance or censorship.