
Biometrics are not as safe as you think

Discover why biometrics can be hacked and how deepfakes, AI spoofing, and data breaches are reshaping biometric security.
Marouane Sabri
Defendis Co-founder

Biometrics Are Convenient, But They Are Not Unhackable

Biometric authentication has become one of the most widely adopted security technologies in the world. Unlocking a phone with a fingerprint or facial scan feels seamless, personal, and secure. There are no passwords to memorise, no PINs to type, and no obvious friction in the process. Because biometric data is tied directly to the individual, many people assume it offers a level of protection that traditional credentials cannot match.

That assumption is increasingly dangerous.

Modern cybersecurity threats are no longer limited to malware or stolen passwords. Attackers are now exploiting the trust people place in biometric identity verification systems such as facial recognition, voice authentication, and fingerprint scanners. As artificial intelligence evolves, biometric spoofing and deepfake attacks are becoming more sophisticated, more scalable, and significantly harder to detect.

The idea that biometrics are unhackable is one of the most outdated myths in cybersecurity today.

Why Biometrics Feel More Secure Than Passwords

The trust surrounding biometric security did not appear without reason. Early biometric authentication systems represented a major improvement over weak passwords and poor credential hygiene. Technologies such as Apple’s Face ID and Touch ID made authentication faster and more user-friendly while reducing reliance on passwords that users frequently reused across multiple services.

Compared to simple passwords, biometric authentication often provides stronger protection. A fingerprint cannot easily be guessed, and facial recognition systems are designed to identify highly specific physical traits. For years, this created the perception that biometric security was nearly impossible to bypass.

However, "stronger than a weak password" is not the same as "impossible to compromise."

The threat landscape has changed dramatically. The central question is no longer whether fingerprints or facial features are unique. The real concern is whether the systems that store, process, and verify biometric data can withstand modern attacks involving AI-generated spoofing, synthetic identities, and deepfake technology.

Recent incidents and research suggest that many biometric systems are more vulnerable than previously assumed.

Why Biometric Data Creates Permanent Security Risks

One of the biggest differences between biometric authentication and traditional credentials is permanence. A password can be reset after a breach. A fingerprint, face, or iris scan cannot realistically be replaced.

This transforms biometric breaches into long-term security liabilities.

The 2015 breach of the US Office of Personnel Management (OPM) exposed fingerprint records belonging to 5.6 million federal employees, contractors, and government clearance holders. The attack formed part of a wider breach affecting 21.5 million individuals. Years later, those fingerprints remain permanently compromised because biometric identifiers cannot be rotated or reissued like passwords.

The same issue emerged during the BioStar 2 breach in 2019. Researchers discovered that Suprema’s biometric platform exposed 27.8 million records, including more than one million unencrypted fingerprints and facial recognition records. The platform was used by banks, police organisations, and defence institutions across 83 countries. Unlike conventional credentials, these biometric records cannot simply be invalidated after exposure.

This is one of the most overlooked biometric authentication risks. Once biometric data is stolen, the exposure may remain relevant indefinitely.

How AI and Deepfakes Are Changing Biometric Spoofing

Artificial intelligence is accelerating the scale and sophistication of biometric spoofing attacks.

In 2024, researchers demonstrated that AI-generated "master fingerprints" could unlock one in five fingerprint scanners tested. These synthetic fingerprints were not copied from real individuals. Instead, AI systems generated fingerprints that matched enough characteristics to bypass authentication. Fraud-prevention researchers at SEON have documented a similar trend, showing how deepfake attacks and biometric spoofing techniques are evolving rapidly.

At the same time, European security researchers demonstrated that deepfake faces could bypass certain facial recognition systems with success rates exceeding 80 percent. Real-time deepfake video attacks also increased dramatically during 2024, highlighting how quickly AI-generated impersonation techniques are evolving.

These attacks are no longer theoretical. They are already affecting real organisations.

One of the clearest examples was the January 2024 deepfake attack targeting engineering firm Arup. A finance employee joined a video conference with what appeared to be the company’s CFO and several senior colleagues. The voices sounded authentic, the faces appeared legitimate, and the interaction felt credible. Convinced by the realism of the call, the employee authorised a transfer of $25.6 million.

Every participant in the meeting was an AI-generated deepfake.

The attackers reportedly trained their models using publicly available conference recordings and LinkedIn videos. No internal systems were hacked. No firewall was bypassed. The attack succeeded because the employee trusted facial and voice identity verification as proof of legitimacy.

This case demonstrated how deepfake attacks can weaponise biometric trust itself.

International Security Standards Now Recognise Biometric Spoofing

The cybersecurity industry is increasingly acknowledging that biometric spoofing has become a serious and operational threat category.

The 2024 revision of ISO/IEC 30107-3, the international benchmark for biometric Presentation Attack Detection, now includes testing requirements for AI-generated spoofing techniques and synthetic media attacks.

This shift is significant because it confirms that biometric vulnerabilities are no longer viewed as hypothetical risks. International standards bodies are actively adapting security frameworks to address AI-generated impersonation and biometric spoofing.

How to Use Biometrics Securely

None of these risks mean biometric authentication should be abandoned entirely. Biometrics still provide meaningful security benefits when used correctly within a layered security strategy.

The most important principle is that biometrics should never function as the sole method of authentication. Multi-factor authentication remains critical because it prevents attackers from relying on a single compromised verification method. Combining fingerprints or facial recognition with PINs or additional verification layers significantly reduces risk.
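The principle can be made concrete with a short sketch. The Python below treats a biometric match as just one factor among several and grants access only when at least two independent factors pass, so a spoofed face or fingerprint alone is never enough. The factor names, the 0.95 match threshold, and the PIN-hashing parameters are illustrative assumptions, not any particular vendor's API.

```python
import hashlib
import hmac

def verify_pin(submitted_pin: str, stored_pin_hash: str, salt: bytes) -> bool:
    """Compare a PIN against a salted PBKDF2 hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", submitted_pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate.hex(), stored_pin_hash)

def authenticate(biometric_score: float, pin_ok: bool, token_ok: bool,
                 biometric_threshold: float = 0.95) -> bool:
    """Require at least two independent factors, so a compromised or
    spoofed biometric can never authorise access on its own."""
    factors_passed = sum([
        biometric_score >= biometric_threshold,  # inherence: face or fingerprint match
        pin_ok,                                  # knowledge: PIN or passphrase
        token_ok,                                # possession: hardware key or OTP device
    ])
    return factors_passed >= 2

salt = b"demo-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"4821", salt, 100_000).hex()

# A perfect biometric match alone is rejected...
print(authenticate(biometric_score=0.99, pin_ok=False, token_ok=False))  # False
# ...but a biometric match plus a correct PIN succeeds.
print(authenticate(0.99, verify_pin("4821", stored, salt), token_ok=False))  # True
```

The key design choice is that the biometric score carries no more weight than any other factor: defeating the system requires compromising two unrelated verification channels at once.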

Organisations should also avoid relying exclusively on video or voice confirmation for sensitive financial transactions or high-value approvals. The Arup deepfake incident demonstrated that visual authenticity alone is no longer sufficient proof of identity.
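One way to encode that lesson is an approval policy in which video or voice confirmation is never sufficient on its own above a certain amount. The sketch below is a hypothetical policy, not a description of any real payment system; the threshold and channel names are assumptions for illustration.

```python
# Assumed policy limit above which a second channel is mandatory (currency units).
HIGH_VALUE_THRESHOLD = 10_000

def approve_transfer(amount: float,
                     video_call_confirmed: bool,
                     out_of_band_confirmed: bool) -> bool:
    """Video or voice confirmation alone approves only low-value transfers.
    High-value transfers also require confirmation over an independent,
    pre-registered channel, e.g. a callback to a phone number on file."""
    if amount < HIGH_VALUE_THRESHOLD:
        return video_call_confirmed
    return video_call_confirmed and out_of_band_confirmed

# A deepfaked video call that "looks right" cannot move large sums by itself:
print(approve_transfer(25_600_000, video_call_confirmed=True,
                       out_of_band_confirmed=False))  # False
```

Under such a policy, the Arup-style attack fails at the last step: however convincing the call, the transfer stalls until someone confirms it through a channel the attackers do not control.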

Biometric data itself should also be treated as highly sensitive information. Before implementing biometric systems, organisations need to understand where biometric templates are stored, how they are protected, and what procedures exist in the event of a breach.

Modern biometric systems increasingly incorporate liveness detection and additional anti-spoofing protections designed to make biometric attacks more difficult.

The Future of Biometric Security

Biometric authentication is not disappearing. In many environments, it remains faster, more convenient, and more secure than weak password-based systems. However, the belief that biometrics are inherently unhackable is becoming increasingly disconnected from reality.

Deepfake attacks, biometric data breaches, AI-generated spoofing, and synthetic identity fraud are exposing weaknesses that are becoming significantly more scalable and difficult to detect. The challenge facing organisations is no longer simply protecting systems from intrusion. It is learning how to verify identity in a world where faces, voices, and fingerprints can be manipulated convincingly by artificial intelligence.

Biometrics still have value as part of a modern cybersecurity strategy, but they should be treated as one security layer rather than definitive proof of identity. As biometric spoofing techniques continue to evolve, organisations that rely too heavily on facial recognition or fingerprint authentication without additional safeguards may discover that convenience and security are not always the same thing.

At Defendis, we help organisations detect, monitor, and respond to modern cyber threats by providing actionable threat intelligence and external attack surface visibility before attackers can exploit critical weaknesses. Get in touch with us to schedule a demo and see how Defendis can help strengthen your cybersecurity posture.

About the author
Marouane Sabri is the Co-Founder and Chief Marketing Officer of Defendis. With a background in communications and digital strategy, he leads Defendis’ market expansion.
