Rise of deepfake threats means biometric security measures won't be enough
Cyber attacks that use AI-generated deepfakes to bypass facial biometric security will lead a third of organizations to doubt that identity verification and authentication tools are adequate as standalone protections.
Remote account recovery, for example, might rely on an image of the individual's face to unlock security.
Since such checks could be beaten with images copied from social media and other sources, security systems added "liveness detection" to verify that the request was coming from the right individual in real time.
These approaches could now be duped by AI deepfakes and need to be supplemented by additional layers of security, Gartner's VP Analyst Akif Khan told The Register.
Security system developers are also turning to AI - typically deep neural networks - to inspect the presented images for signs that they are deepfakes.
Organizations should use both approaches to defend against deepfake threats to biometric security, he said.
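The layered defence described above - not trusting a face match on its own, but requiring both a liveness check and a deepfake-detection pass - can be sketched as a simple decision function. This is purely illustrative: the score names and thresholds below are hypothetical, not taken from any product mentioned in the article.

```python
# Hypothetical sketch of layered biometric verification: a face match alone
# is not trusted; a liveness check AND a deepfake detector must both pass.
# All scores (0.0-1.0) and thresholds are illustrative assumptions.

def verify_identity(face_match: float, liveness: float, deepfake_prob: float,
                    match_thresh: float = 0.90, live_thresh: float = 0.80,
                    fake_thresh: float = 0.20) -> bool:
    """Accept only when the face matches, the presentation looks live,
    and the deepfake detector reports a low probability of synthesis."""
    if face_match < match_thresh:
        return False  # biometric mismatch
    if liveness < live_thresh:
        return False  # presentation attack (e.g. replayed photo) suspected
    if deepfake_prob > fake_thresh:
        return False  # AI-generated imagery suspected
    return True

# A convincing deepfake may defeat face matching and even liveness checks,
# but the dedicated deepfake detector still rejects the attempt:
print(verify_identity(0.97, 0.92, 0.65))  # -> False
print(verify_identity(0.97, 0.92, 0.05))  # -> True
```

The point of the sketch is that the two checks fail independently: liveness detection targets replayed or printed images, while the deepfake detector targets synthetic ones, so an attacker must beat both.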
News URL
https://go.theregister.com/feed/www.theregister.com/2024/02/01/deepfake_threat_biometrics/
Related news
- AWS security essentials for managing compliance, data protection, and threat detection
- Privileged Accounts, Hidden Threats: Why Privileged Access Security Must Be a Top Priority
- MUT-1244 targeting security researchers, red teamers, and threat actors
- Deloitte says cyberattack on Rhode Island benefits portal carries 'major security threat'
- Are threat feeds masking your biggest security blind spot?
- Week in review: MUT-1244 targets both security workers and threat actors, Kali Linux 2024.4 released