You’ve done it a dozen times this week. Picked up your phone, glanced at the screen, and watched it unlock instantly. Maybe you used face recognition to log into your bank app, sign into a crypto exchange, or verify your identity for Uber driver onboarding or a remote job interview. Every time, an AI system in the background made a quick judgment: is this a real human face, or someone trying to fool me?
In 2026, that judgment is harder than ever. Attackers no longer just hold up a printed photo of someone’s face. They use silicone masks that look like real skin, deepfake videos generated in seconds, and 3D-printed masks engineered to hold up under realistic lighting. The arms race between AI-powered identity systems and AI-powered attacks is the most interesting cybersecurity battle most people never hear about.
How “Liveness Detection” Actually Works
When your phone asks you to look at the camera and tilt your head slightly, that’s liveness detection in action. The AI is checking dozens of micro-cues simultaneously: the way light reflects off skin (different from a screen replay), the way 3D facial features shift with movement (different from a flat photo), and the natural micro-blinks and pupil dilations that masks can’t fake.
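To make the idea concrete, here is a toy sketch of how per-cue signals might be fused into a single live/spoof decision. The cue names, weights, and threshold are illustrative assumptions for this article, not any vendor’s actual model (real systems typically learn these end to end from training data).

```python
# Toy liveness decision: combine per-cue scores (each in [0, 1]) into a
# weighted sum and compare against a threshold. Weights and threshold
# below are invented for illustration.

CUE_WEIGHTS = {
    "specular_reflection": 0.30,  # screen replays reflect light differently than skin
    "depth_parallax": 0.35,       # flat photos lack 3D feature shift under head motion
    "blink_dynamics": 0.20,       # masks can't reproduce natural micro-blinks
    "pupil_response": 0.15,       # pupils dilate with ambient light changes
}

def liveness_score(cue_scores: dict) -> float:
    """Weighted sum of per-cue scores; missing cues count as 0."""
    return sum(w * cue_scores.get(name, 0.0) for name, w in CUE_WEIGHTS.items())

def is_live(cue_scores: dict, threshold: float = 0.7) -> bool:
    return liveness_score(cue_scores) >= threshold

# A flat-photo attack can score well on reflection yet fail depth and blink cues:
photo_attack = {"specular_reflection": 0.9, "depth_parallax": 0.1,
                "blink_dynamics": 0.0, "pupil_response": 0.1}
print(is_live(photo_attack))  # prints: False
```

The point of the sketch is the structure, not the numbers: no single cue decides the outcome, so an attack instrument that defeats one cue (a photo with good color reproduction, say) still fails on the cues it physically cannot reproduce.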
Behind the scenes, this AI was trained on tens of thousands of attack videos showing every possible spoofing method. The training data is what determines whether a model catches a deepfake or lets it through. And here’s the thing – the public datasets that most academic researchers use cover only a tiny fraction of real-world attacks.
Why iBeta Certification Matters
Banks, fintech apps, and identity verification platforms can’t just build a face recognition system and ship it. They have to prove the system can resist real attacks. The most respected proof in 2026 is iBeta certification – an independent test conducted by iBeta Quality Assurance, a NIST-accredited biometric lab.
iBeta certification comes in three levels. Level 1 tests against 2D attacks like printed photos and video replay. Level 2 adds 3D masks made of silicone, latex, and paper. Level 3, introduced in 2026, tests against ultra-realistic masks.
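The three levels described above can be summarized as a simple cumulative taxonomy. The attack-type names below paraphrase this article, not the official iBeta test specification, and the helper is a hypothetical illustration of how a team might check training coverage against a target level.

```python
# Illustrative mapping of iBeta certification levels to the presentation
# attack instruments described in the article (not the official spec).

IBETA_LEVELS = {
    1: {"printed_photo", "video_replay"},            # 2D attacks
    2: {"silicone_mask", "latex_mask", "paper_mask"},  # 3D mask attacks
    3: {"ultra_realistic_mask"},                     # introduced in 2026
}

def required_attacks(target_level: int) -> set:
    """Attack types a model should have seen in training to prepare for
    `target_level`; levels are cumulative, so Level 3 includes 1 and 2."""
    return set().union(*(IBETA_LEVELS[l] for l in range(1, target_level + 1)))

print(sorted(required_attacks(2)))
```

Framing the levels as cumulative sets makes the article’s later point explicit: a model can’t pass a given level unless its training data covers every attack type at and below that level.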
For a biometric model to pass iBeta certification, it needs to have seen those attack types during training. And that’s where things get interesting.
The Hidden Industry of Biometric Training Data
Most companies building face liveness systems don’t actually collect their own attack videos at scale. The cost and logistics are enormous: you need participants, multiple devices, multiple lighting conditions, and actual physical attack instruments like silicone masks and 3D-printed faces. Specialized data providers fill this gap.
Axon Labs is one such provider, supplying iBeta-aligned datasets used by 21% of iBeta-certified biometric companies in 2026. Their collections cover all three certification levels, including [the Axon Labs iBeta Level 3 dataset](https://axonlab.ai/dataset/ibeta-level-3-dataset/) for high-fidelity mask attack testing (the newest standard, introduced in 2026). For Level 2 preparation, separate collections such as [silicone mask attack samples](https://axonlab.ai/dataset/silicone-mask-attack-data/) (10,000+ videos captured from 18 unique silicone masks) are typically licensed alongside core Level 2 data.
The economics of this industry are fascinating. A single silicone mask designed for biometric testing can cost thousands of dollars. Capturing tens of thousands of attack videos across diverse demographic groups and devices is a multi-month operation. Companies that try to do this internally typically spend 6–12 months and miss their certification windows. Companies that license commercial datasets get to certification in weeks.
Where to Learn More
Anyone interested in the technical side of biometric anti-spoofing can explore specifications and sample datasets on [axonlab.ai](https://axonlab.ai/), where Axon Labs publishes details about each of their iBeta-certification datasets.