Meta is now using AI to check ages at scale. The company has expanded its visual analysis age assurance system across Instagram and Facebook, with the deployment confirmed across multiple independent sources including TechCrunch and Meta’s own newsroom. Accounts the system assesses as underage are deactivated and required to complete formal verification before regaining access.
That much is confirmed. The interesting part is how the system works, and how Meta has chosen to describe it.
What the System Does
Meta describes the system as analyzing “general themes” in visual content, including physical cues such as height and bone structure, rather than identifying specific individuals. According to reporting by Cybernews, the system also incorporates textual analysis of profile content, including mentions of school grades and birthday references. Combining these signals gives the model evidence about whether an account belongs to a minor without, per Meta’s framing, identifying who that person is.
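To make the multi-signal idea concrete, here is a minimal sketch of how visual and textual cues might be blended into a single “likely minor” score. Everything here is hypothetical: the patterns, weights, and function names are illustrative placeholders, not Meta’s actual system.

```python
import re

# Hypothetical illustration of multi-signal age assessment.
# None of these patterns or weights come from Meta; this only
# sketches the structure of combining two signal sources.

GRADE_PATTERN = re.compile(r"\b(?:6th|7th|8th|9th)\s+grade\b", re.IGNORECASE)
BIRTHDAY_PATTERN = re.compile(r"\bturning\s+1[0-7]\b", re.IGNORECASE)

def text_signal(bio: str) -> float:
    """Score textual cues (school grades, birthday mentions) in [0, 1]."""
    score = 0.0
    if GRADE_PATTERN.search(bio):
        score += 0.5
    if BIRTHDAY_PATTERN.search(bio):
        score += 0.5
    return min(score, 1.0)

def combined_minor_score(visual_score: float, bio: str,
                         visual_weight: float = 0.7) -> float:
    """Weighted blend of a visual-model score and a textual score.

    visual_score is assumed to come from an upstream image model
    (not implemented here); the 0.7 weight is an arbitrary placeholder.
    """
    return visual_weight * visual_score + (1 - visual_weight) * text_signal(bio)

# Example: a moderate visual score plus strong textual cues.
score = combined_minor_score(0.6, "turning 14 soon, 8th grade")
print(round(score, 2))  # 0.7*0.6 + 0.3*1.0 = 0.72
```

The design point the sketch surfaces: neither signal alone identifies an individual, but their combination supports a consequential decision, which is exactly where the biometric categorization question arises.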
Meta explicitly distinguishes this from facial recognition. That’s not a throwaway line in a press release. It’s a legal position.
Why “Not Facial Recognition” Is the Compliance Story
Under the EU AI Act, systems that perform biometric categorization, inferring characteristics such as age from physical attributes, sit in a category that attracts specific obligations. The distinction between “identifying an individual” and “inferring characteristics about a category of person” is where the legal boundary lies, and it is a boundary on which EU regulators and biometric researchers have not yet reached uniform agreement.
Meta operates under DSA Very Large Online Platform obligations, which include age assurance requirements. The company’s visual analysis system is one compliance response to those obligations. But the same technical approach, used by a smaller platform operator without Meta’s legal resources and regulatory engagement history, would face the same EU AI Act analysis without the same institutional context.
One practical gap the announcement doesn’t address: what happens in boundary cases. A system analyzing height and bone structure will produce false positives and false negatives. Meta hasn’t publicly disclosed error rate data, what the formal verification process entails for falsely flagged adult users, or how appeals are handled. For a deployed system making consequential account decisions, that information matters for any DSA or EU AI Act conformity assessment.
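To make the error-rate question concrete, a brief sketch of the two rates a conformity assessment would most likely ask for. The counts below are invented for illustration; no figures of this kind have been disclosed.

```python
# Illustrative confusion-matrix arithmetic for an age-flagging system.
# The counts are invented; the point is which rates matter for a
# system that deactivates accounts on a positive prediction.

def error_rates(tp: int, fp: int, fn: int, tn: int) -> dict:
    """False positive rate: adults wrongly flagged as minors.
    False negative rate: minors the system fails to flag."""
    return {
        "false_positive_rate": fp / (fp + tn),  # wrongly deactivated adults
        "false_negative_rate": fn / (fn + tp),  # undetected minors
    }

# Hypothetical numbers: 900 minors caught, 100 missed; 50 adults
# wrongly flagged out of 10,050 adults assessed.
rates = error_rates(tp=900, fp=50, fn=100, tn=10_000)
print(rates)  # false_negative_rate = 0.1; false_positive_rate ≈ 0.005
```

Even a sub-1% false positive rate translates into a large absolute number of wrongly deactivated adults at Meta’s scale, which is why the verification and appeals path matters as much as the headline accuracy.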
Context and Pattern
This deployment isn’t happening in isolation. The UK Online Safety Act includes age assurance provisions that took effect earlier this year, as discussed in our prior coverage of fast-track children’s AI restrictions. US state laws are moving in similar directions. Meta’s visual analysis approach represents one technical model for satisfying these requirements, and it’ll be referenced by regulators and other platforms evaluating their own approaches.
What to Watch
The EU Commission’s DSA enforcement posture on age assurance is the clearest signal to track. If Meta’s approach receives regulatory acceptance, it establishes a de facto standard for visual-analysis age verification. If it attracts scrutiny on biometric categorization grounds, it reframes the legal exposure for every platform using comparable methods. A second signal: whether the EU AI Act high-risk systems list gets updated guidance specifically addressing physical characteristic analysis systems. Third: how biometric researchers respond to Meta’s “not facial recognition” framing in peer-reviewed contexts.
TJS Synthesis
The real question this deployment raises isn’t whether Meta’s system works. It’s whether “analyzing general themes including physical cues” clears the threshold for biometric data processing under GDPR and biometric AI system classification under the EU AI Act, and whether Meta’s framing of that distinction holds up under regulatory scrutiny. Platform operators evaluating comparable deployments shouldn’t borrow Meta’s legal framing without independent analysis. The “not facial recognition” argument may be technically accurate and legally insufficient at the same time.