Age Assurance in 2026: From Policy Debate to Enforcement Reality

Age assurance is no longer an abstract trust-and-safety discussion. It is now an enforcement issue, a product-design issue, and increasingly a board-level compliance issue. Over the past several months, regulators, courts, and major platforms have all moved in the same direction: away from simple age gates and toward age-assurance systems that can be defended as effective, proportionate, and auditable.

That is the real shift between late 2025 and March 2026. The question is no longer whether a platform has an “Are you over 18?” popup. Regulators increasingly want to know whether the method used can actually separate adults from minors in the context that matters, whether the resulting decision can be logged and explained, and whether the platform minimized the personal data collected to get there.

The FTC moved from caution to conditional permission

One of the most important U.S. developments came from the FTC in February 2026. The agency issued a COPPA policy statement saying it would not prioritize enforcement where operators collect personal information strictly for age-verification purposes, use it only for that purpose, and delete it promptly afterward.

That is not a blanket exemption, and it is not the same thing as a final new rule. But it is a major signal that federal regulators no longer want COPPA uncertainty to block stronger age-assurance adoption.

That matters because one of the biggest objections to online age verification in the U.S. has long been the fear that collecting more information to verify age would itself create more privacy risk. The FTC is now effectively saying that limited, purpose-bound, promptly deleted age-verification data can be consistent with COPPA.
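The purpose-bound, promptly deleted pattern the FTC describes can be sketched in a few lines. This is an illustrative sketch only, not a compliance implementation: the names (`VerificationEvidence`, `decide_and_purge`) and the idea of persisting only a boolean outcome are assumptions about one reasonable design, not anything specified by the FTC statement.

```python
import time
from dataclasses import dataclass, field


@dataclass
class VerificationEvidence:
    """Personal data collected strictly for the age check."""
    date_of_birth: str  # ISO date, e.g. "2001-04-09"
    collected_at: float = field(default_factory=time.time)


def decide_and_purge(decisions: dict, user_id: str,
                     evidence: VerificationEvidence) -> bool:
    """Derive an over-18 decision, persist only the decision, drop the evidence."""
    birth_year = int(evidence.date_of_birth.split("-")[0])
    # Year-based comparison is a rough check, sufficient for the sketch.
    is_adult = (time.gmtime().tm_year - birth_year) >= 18
    # Only the boolean outcome and a timestamp are stored; the evidence
    # object itself is never written anywhere.
    decisions[user_id] = {"over_18": is_adult, "decided_at": time.time()}
    return is_adult
```

The design choice that matters is the asymmetry: the decision record is durable and auditable, while the input that produced it never leaves memory.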

The UK and Australia are proving this is not theoretical

The UK has already drawn a hard line on ineffective methods. Ofcom’s guidance states that self-declaration, such as simply entering a birthdate, is not capable of being “highly effective” for services that need reliable child protection. Its enforcement program has been active since 2025, and UK regulators have pushed major platforms to move beyond self-declared age toward methods that can plausibly meet that bar, such as facial age estimation or photo-ID checks.

Australia has gone even further in visible enforcement terms. Its under-16 social media law took effect in December 2025, with fines reaching A$49.5 million for noncompliant platforms. In January, officials said more than 4.7 million under-16 accounts had been deactivated, removed, or restricted, though more recent reporting shows that significant underage use remains and enforcement is still imperfect.

The point is not that the system is solved. It is that regulators are now measuring results, not just policies on paper.

Courts are reinforcing the move toward stronger checks

The Supreme Court’s June 2025 decision in Free Speech Coalition v. Paxton remains the biggest judicial turning point in the United States. By upholding Texas’s age-verification law for sites with a substantial amount of material harmful to minors, the Court gave states a far stronger foundation to impose age-assurance obligations on adult-content services.

That decision has influenced a much wider wave of state legislation and enforcement activity.

At the same time, courts are also shaping where that responsibility sits. In December 2025, a federal judge blocked Texas’s app-store age law, finding it likely violated the First Amendment. That injunction matters: lawmakers clearly want age assurance to move upstream to app stores and device layers, but the legal path for mandating it there is still contested.

California’s AB 1043 is another sign of that upstream shift. Beginning in 2027, the law requires operating system providers to collect age or birthdate during setup and provide apps with an age-bracket signal through an API. That is not the same thing as forcing each app to verify age independently. It is a move toward persistent, reusable age signals at the platform layer.
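An app consuming that kind of platform-layer signal might look roughly like the following. The bracket names and the mapping are hypothetical: AB 1043 contemplates an age-bracket signal delivered through an API, but the concrete interface will be defined by each operating system vendor, not by this sketch.

```python
from enum import Enum


class AgeBracket(Enum):
    """Hypothetical age brackets an OS-level signal might expose."""
    UNDER_13 = "under_13"
    AGE_13_15 = "13_15"
    AGE_16_17 = "16_17"
    ADULT = "18_plus"


def experience_for(bracket: AgeBracket) -> str:
    """Map the platform-layer bracket to an app-level experience tier."""
    if bracket is AgeBracket.UNDER_13:
        return "blocked_or_kids_mode"
    if bracket in (AgeBracket.AGE_13_15, AgeBracket.AGE_16_17):
        return "teen_experience"
    return "full_experience"
```

The key property is that the app receives a coarse bracket rather than a birthdate, so the data-minimization work happens once, at the OS layer, instead of at every destination.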

The technical market is changing just as fast

The technology side is moving with the law. Vendors including Yoti, Persona, and k-ID are benefiting from growing regulatory demand as facial analysis, ID checks, and other age-assurance tools become cheaper and more accurate. Major platforms are also adopting these tools in different ways, not because one method has won, but because different products now need different assurance levels.

One example is OpenAI’s January 2026 rollout of age prediction on ChatGPT. The company said it is using age prediction to help determine whether an account likely belongs to someone under 18, and users incorrectly placed into the under-18 experience can recover access through a Persona selfie check.

That is exactly the kind of layered model regulators are pushing toward: lighter estimation first, stronger verification when needed.
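The layered model reduces to a simple escalation rule: trust a cheap estimate when it is confidently far from the boundary, and pay for a stronger check only in the uncertain middle. The thresholds below are illustrative assumptions, not values used by OpenAI or any regulator.

```python
from typing import Callable


def assure_age(estimated_age: float, confidence: float,
               strong_check: Callable[[], bool]) -> bool:
    """Return True if the account may access the 18+ experience."""
    if confidence >= 0.9 and estimated_age >= 25:
        return True           # clearly adult: no extra friction
    if confidence >= 0.9 and estimated_age < 16:
        return False          # clearly a minor: restrict without escalating
    return strong_check()     # borderline: escalate to ID or selfie verification
```

In practice `strong_check` would call out to an ID or selfie verification flow; most users never reach it, which is the point of the layered design.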

Another example is the OpenAge initiative and its AgeKey concept. This is better understood as an emerging industry effort than as a mature standard. The significance is not that reusable age tokens are already universal. It is that the industry is actively trying to reduce repeated ID uploads by creating portable, privacy-conscious age signals.

What the new reality means for platforms

Taken together, the legal and technical developments point to one conclusion: self-attestation is rapidly losing credibility for higher-risk use cases. That does not mean every platform now needs the same method. It means platforms need a risk-based age-assurance strategy.

For adult content, the pressure is toward hard verification or highly reliable age assurance. For social media and AI, regulators increasingly expect age-aware experiences, re-checks for suspicious accounts, and stronger controls for minors. For app ecosystems, the long-term trend is toward reusable signals from app stores, operating systems, or digital identity layers rather than repeated proofing at every destination.

The durable answer is not a checkbox and not full KYC for everyone. It is a policy-driven system that can match method to risk, minimize retained data, and produce a defensible record of what rule ran and why.
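A system like that can be sketched as an ordered rule table plus an audit log recording which rule fired and why. The rule names, context fields, and log format here are all hypothetical, a minimal illustration of the match-method-to-risk-and-record-it idea rather than any particular product's design.

```python
import json
import time

# Ordered rules: first matching predicate wins. Each rule maps a context
# (jurisdiction, product risk) to a required assurance method.
RULES = [
    ("adult_content_uk",
     lambda ctx: ctx["jurisdiction"] == "UK" and ctx["risk"] == "adult",
     "id_verification"),
    ("social_teen",
     lambda ctx: ctx["risk"] == "social",
     "age_estimation"),
    ("default_low_risk",
     lambda ctx: True,
     "self_attestation"),
]


def select_method(ctx: dict, audit_log: list) -> str:
    """Pick the assurance method for this context and record why."""
    for name, predicate, method in RULES:
        if predicate(ctx):
            # The log keeps the rule and outcome, not the user's evidence.
            audit_log.append(json.dumps({
                "rule": name,
                "method": method,
                "jurisdiction": ctx["jurisdiction"],
                "ts": time.time(),
            }))
            return method
    raise RuntimeError("no rule matched")  # unreachable: default always matches
```

The audit entries are what make the system defensible: for any past decision, the platform can show which rule ran, what method it required, and when.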

That is what regulators increasingly mean by “effective” age assurance.

Age Verify helps platforms move beyond self-attestation with browser-based age assurance, configurable policy rules, and stronger fallback paths where law, product risk, or jurisdiction actually requires them.

Further resources