Two new research papers have shed light on sophisticated fraud attempts targeting biometric systems through the use of AI, specifically addressing morph attacks and template inversion attacks. A team from Darmstadt University of Applied Sciences released a study on Morphing Attack Detection (MAD). The research proposes two texture-transfer-based methods for generating synthetic digital print-scan face images, which are then used to train MAD algorithms. The proposed approach achieved Equal Error Rates (EER) of 3.84 percent and 1.92 percent on the FRGC/FERET databases when synthetic texture-transfer print-scan images at 600 dpi were used.
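For context, the EER is the operating point at which the rate of morphs misclassified as bona fide images equals the rate of bona fide images misclassified as morphs, so lower is better. The snippet below is a minimal sketch of how such a figure can be estimated from detector scores; the simulated score arrays and the threshold sweep are illustrative and not taken from the paper.

```python
import numpy as np

def equal_error_rate(bona_fide_scores, attack_scores):
    """Estimate the EER: the threshold at which the attack error rate
    equals the bona fide error rate. Higher scores mean 'more likely a morph'."""
    thresholds = np.sort(np.concatenate([bona_fide_scores, attack_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        apcer = np.mean(attack_scores < t)        # morphs accepted as bona fide
        bpcer = np.mean(bona_fide_scores >= t)    # bona fide images flagged as morphs
        gap = abs(apcer - bpcer)
        if gap < best_gap:
            best_gap, eer = gap, (apcer + bpcer) / 2
    return eer

# Toy example with simulated detector scores (not real MAD outputs).
rng = np.random.default_rng(0)
bona_fide = rng.normal(0.2, 0.10, 1000)   # genuine images score low
attacks = rng.normal(0.8, 0.15, 1000)     # morphs score high
print(f"EER: {equal_error_rate(bona_fide, attacks):.2%}")
```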
This paper was authored by researchers at the Biometrics and Internet Security Research Group and published on arXiv, highlighting significant advancements in combating morph attacks, where fraudulent actors blend the face images of two individuals into a single image that can match both, deceiving biometric systems.
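To illustrate the basic idea of a morph (not the researchers' tooling), the sketch below does a crude pixel-level blend of two face photos with Pillow. Real morphing pipelines first align facial landmarks and warp the geometry before blending, which is what makes the result verify against both contributing subjects; the file names here are hypothetical.

```python
from PIL import Image

def naive_morph(path_a, path_b, alpha=0.5, size=(512, 512)):
    """Crude morph: resize both faces to a common size and alpha-blend them.
    Real morphing tools align landmarks and warp before blending."""
    face_a = Image.open(path_a).convert("RGB").resize(size)
    face_b = Image.open(path_b).convert("RGB").resize(size)
    return Image.blend(face_a, face_b, alpha)

# Hypothetical file names, for illustration only.
# morph = naive_morph("subject_a.png", "subject_b.png", alpha=0.5)
# morph.save("morph.png")
```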
In a separate study, published in IEEE Transactions on Biometrics, Behavior, and Identity Science, researchers proposed a novel method for template inversion attacks against facial recognition systems using synthetic data. One of the authors, Sebastien Marcel, who heads the Biometrics Security and Privacy group at the Idiap Research Institute in Switzerland, noted that their model could reconstruct face images from templates derived from real face images. The approach outperformed previous methods on four datasets: MOBIO, LFW, AgeDB, and IJB-C. It also yielded high-resolution 2D face reconstructions, with results competitive with state-of-the-art (SOTA) face reconstruction methods.
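As a rough illustration of what a template inversion attack attempts, and not the method proposed in the paper, the sketch below searches a face generator's latent space for an image whose embedding matches a leaked template. The `generator` and `face_encoder` callables are placeholders standing in for whatever models an attacker might have; they are assumptions of this example.

```python
import torch

def invert_template(target_template, generator, face_encoder,
                    steps=500, lr=0.05, latent_dim=512):
    """Generic template-inversion sketch: optimize a latent vector so the
    generated face's embedding matches the target template.
    `generator` and `face_encoder` are placeholder models, not those from the paper."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    target = torch.nn.functional.normalize(target_template, dim=-1)

    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(z)  # latent vector -> synthetic face image
        embedding = torch.nn.functional.normalize(face_encoder(image), dim=-1)
        # Maximize cosine similarity between the reconstruction and the target template.
        loss = 1.0 - (embedding * target).sum()
        loss.backward()
        optimizer.step()
    return generator(z).detach()
```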
The study also tested the generated face images in practical presentation attacks against facial recognition systems. Results, along with materials to reproduce the findings, are available on GitLab. Idiap has been heavily involved in research on template inversion attacks, while work on face morphing attacks continues elsewhere, for example through the European Commission's iMARS project. These studies represent ongoing efforts to strengthen biometric security against increasingly sophisticated AI-driven fraud attempts.