Artificial intelligence-generated deepfakes threaten to erode trust in identity verification processes and authentication technologies. Gartner predicts that by 2026, attacks using AI-generated deepfakes on face biometrics will lead 30% of organizations to doubt that such identity verification and authentication solutions are reliable on their own. “In the past decade, several inflection points in fields of AI have occurred that allow for the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” said Akif Khan, vice president and analyst at Gartner.

AI-Generated Deepfakes Challenge Current Identity Verification Processes

Identity verification processes depend heavily on presentation attack detection (PAD) to gauge user liveness. However, Gartner experts argue that existing standards and testing methodologies for PAD mechanisms fall short of addressing digital injection attacks facilitated by AI-generated deepfakes. These synthetic images represent a significant shift, enabling malicious actors to exploit vulnerabilities in biometric authentication.

Gartner’s research highlights a 200% surge in injection attacks in 2023, marking a significant increase in the prevalence of this attack vector. In response, organizations must adopt a multifaceted approach, combining PAD, injection attack detection, and image inspection to thwart these evolving threats. To address the challenges posed by AI-generated deepfakes, chief information security officers and risk management leaders are advised to select vendors whose capabilities exceed current standards: vendors that monitor, classify, and quantify new types of attacks, going beyond the conventional methods employed in identity verification and authentication processes.
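The multi-layered approach described above can be sketched in code. The following is a minimal illustration, not a real vendor API: the three detector functions and the metadata fields they read (`liveness_score`, `source`, `deepfake_score`) are hypothetical stand-ins for PAD, injection attack detection, and forensic image inspection. The point is the structure: a capture is accepted only if every independent layer passes.

```python
# Hypothetical sketch of layered deepfake defense. Each detector below is an
# illustrative placeholder; real systems would call vendor models here.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Capture:
    """A face capture plus metadata about how it was acquired."""
    image: bytes
    metadata: dict = field(default_factory=dict)


def pad_check(c: Capture) -> bool:
    # Placeholder PAD: a real model would score texture, depth, motion cues.
    return c.metadata.get("liveness_score", 0.0) > 0.8


def injection_check(c: Capture) -> bool:
    # Placeholder: real injection attack detection verifies the frame came
    # from a trusted hardware camera pipeline, not an injected virtual feed.
    return c.metadata.get("source") == "hardware_camera"


def image_inspection(c: Capture) -> bool:
    # Placeholder: forensic analysis looking for synthesis artifacts.
    return c.metadata.get("deepfake_score", 1.0) < 0.2


LAYERS: list[Callable[[Capture], bool]] = [pad_check, injection_check, image_inspection]


def verify(capture: Capture) -> bool:
    """Accept the capture only if every detection layer passes."""
    return all(layer(capture) for layer in LAYERS)
```

A capture that passes PAD and image inspection but arrives from a virtual camera would still be rejected, which is what distinguishes this design from relying on PAD alone.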

Strategies for Enhanced Security

Gartner emphasizes the importance of organizations working collaboratively with vendors dedicated to mitigating deepfake-based threats. That collaboration should involve defining a minimum baseline of controls; incorporating technologies such as injection attack detection (IAD) coupled with image inspection can provide a robust defense against evolving deepfake threats.

Beyond the baseline, security leaders are urged to include additional risk and recognition signals, such as device identification and behavioral analytics, to enhance the detection capabilities against deepfake attacks. Gartner’s recommendations underline the need for proactive measures to fortify identity verification processes and prevent account takeovers in the face of advancing AI-driven threats.
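One way to read this recommendation is as signal fusion: the biometric result is combined with device and behavioral signals rather than trusted alone. The sketch below is an assumption-laden illustration; the signal names, weights, and threshold are invented for the example and would in practice be tuned per deployment.

```python
# Illustrative fusion of a biometric match score with additional risk
# signals (device identification, behavioral analytics). Weights and the
# acceptance threshold are arbitrary assumptions for this sketch.
SIGNAL_WEIGHTS = {
    "biometric_match": 0.5,   # face match / liveness outcome, in [0, 1]
    "known_device": 0.3,      # 1.0 if the device fingerprint is recognized
    "behavior_normal": 0.2,   # behavioral-analytics score, in [0, 1]
}


def trust_score(signals: dict) -> float:
    """Weighted sum of available signals; missing signals count as 0."""
    return sum(w * float(signals.get(name, 0.0))
               for name, w in SIGNAL_WEIGHTS.items())


def allow_login(signals: dict, threshold: float = 0.7) -> bool:
    """Grant access only when combined trust clears the threshold."""
    return trust_score(signals) >= threshold
```

Under this scheme a perfect biometric match from an unknown device with anomalous behavior scores only 0.5 and is refused, which captures the intent of not relying on face biometrics in isolation.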

Read more: Deepfakes Are Biggest AI-Related Threat, Says Microsoft President
