‘Deepfakes to make biometrics unreliable’

BENGALURU: By 2026, attacks using AI-generated deepfakes on face biometrics will mean that 30% of enterprises will no longer consider such identity verification and authentication solutions to be reliable in isolation, according to research advisory firm Gartner.
“In the past decade, several inflection points in fields of AI have occurred that allow for the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” said Akif Khan, VP (analyst) at Gartner. “As a result, organisations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake.”
The proliferation of deepfakes has emerged as a serious threat. Identity verification and authentication processes using face biometrics today rely on presentation attack detection (PAD) to assess the user’s liveness. PAD leverages software and hardware to combat biometric fraud. “Current standards and testing processes to define and assess PAD mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” said Khan.
Gartner’s research found that presentation attacks are the most common attack vector, but injection attacks increased 200% in 2023. Preventing such attacks will require a combination of PAD, injection attack detection (IAD) and image inspection.
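The layered defence Gartner describes can be pictured as a series of gates that must all pass before a verification is accepted. The sketch below is a minimal, hypothetical illustration in Python: the VerificationSignals fields, the verify_face function and the thresholds are assumptions made for clarity, not a real vendor API or any published standard.

# Conceptual sketch of layering liveness defences as the article describes.
# All names, scores and thresholds here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class VerificationSignals:
    pad_score: float    # presentation attack detection: is a live face at the camera?
    iad_score: float    # injection attack detection: was the feed digitally injected?
    image_score: float  # image inspection: artefacts typical of AI-generated frames

def verify_face(signals: VerificationSignals,
                pad_threshold: float = 0.9,
                iad_threshold: float = 0.9,
                image_threshold: float = 0.8) -> bool:
    """Accept only if every layer passes: a deepfake that beats PAD can still
    be caught by injection detection or image inspection."""
    return (signals.pad_score >= pad_threshold
            and signals.iad_score >= iad_threshold
            and signals.image_score >= image_threshold)

# Example: a digitally injected deepfake may fool PAD but fail IAD.
attempt = VerificationSignals(pad_score=0.95, iad_score=0.30, image_score=0.60)
print(verify_face(attempt))  # False: rejected despite a passing PAD score

The point of the sketch is that no single score decides the outcome; rejecting on any failed layer is what makes injection attacks harder to slip past a PAD-only check.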

To help organisations protect themselves against AI-generated deepfakes beyond face biometrics, chief information security officers and risk management leaders must choose vendors that can demonstrate capabilities and a plan going beyond current standards, and that are monitoring, classifying and quantifying these new types of attacks.
