
Facial biometric attacks: the most common types of fraud

Jan 31, 2024

The rapid development of new facial biometric technologies keeps changing our daily lives, for better and for worse. Fraud attempts are indeed multiplying, but are we really informed about how they operate? Not necessarily.

Now that you've got a basic understanding of facial biometrics, we're going to take a look at the various possible attacks in this second episode.

For decades, facial biometrics and the techniques for circumventing identity verification systems were pure science fiction. Today, these technologies are no longer confined to blockbusters. The risk of exploitable loopholes is very real, and you need to be able to identify them in order to counter them. To put this in perspective, the identity verification company ID.me recorded over 80,000 identity fraud attempts in the USA between June 2020 and January 2021.

Fraud has become increasingly common with the expansion of the various online services that are now an integral part of our daily lives. This has prompted the rise of powerful, tailored solutions to reinforce the security of users on online platforms.

In this article, discover the three most common facial biometric attacks, as well as the emerging ones that are creating new challenges for facial recognition solution providers.

Facial biometric attacks: identity theft by presentation

One of the main attacks is the "presentation attack", defined as a presentation to the biometric capture subsystem made with the aim of interfering with the system's overall verification process.

These threats to biometric systems can take many forms, such as duplicated or falsified biometric characteristics based on data:

  • directly accessible to the public, such as images from social networks;

  • illegally acquired on black markets.

This type of attack is most often used to retrieve or access sensitive personal data. These attacks can be divided into three levels:

Level 1 attack: on-screen photo presentation


The first level of attack is the presentation of a photo on a phone screen to the camera while the video selfie is being taken. The fraudster uses it to try to fool the facial recognition algorithms by impersonating someone else.
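As a side note, one classic cue for spotting this kind of screen replay is the moiré or pixel-grid pattern that an LCD introduces, which shifts spectral energy toward high frequencies. Below is a minimal, illustrative sketch in Python (OpenCV and NumPy); the masking radius and the 0.75 threshold are assumptions that a real system would calibrate on labeled data, and production detectors combine many such cues.

```python
import cv2
import numpy as np

def high_frequency_energy_ratio(gray_frame: np.ndarray) -> float:
    """Share of spectral energy outside the low-frequency center.

    Screen replays often show moire/pixel-grid patterns that add
    unusual high-frequency content compared to a live face.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(gray_frame.astype(np.float32)))
    magnitude = np.abs(spectrum)
    h, w = gray_frame.shape
    cy, cx = h // 2, w // 2
    # Mask out the low-frequency center (radius is an illustrative choice)
    radius = min(h, w) // 8
    y, x = np.ogrid[:h, :w]
    low_freq = (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    total = magnitude.sum()
    return float(magnitude[~low_freq].sum() / total) if total > 0 else 0.0

# "selfie_frame.png" is a placeholder path for a captured frame
frame = cv2.imread("selfie_frame.png", cv2.IMREAD_GRAYSCALE)
ratio = high_frequency_energy_ratio(frame)
# 0.75 is a made-up threshold; a real system would calibrate it on data
print("possible screen replay" if ratio > 0.75 else "no moire cue detected")
```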

Level 2 attack: 2D or 3D paper mask presentation


Based on the same principle as the previous one, the second level of attack is the presentation of a 2D or 3D paper mask to the camera. Considered the least sophisticated attack due to its simplicity, it nevertheless achieves a notable success rate against less robust systems.

Level 3 attack: hyperrealistic 3D masks


The third level of attack is the use of a 3D mask, made from materials such as silicone or latex, which can cost from a hundred to a few thousand euros to produce. In this case, the fraudster has managed to capture a 3D image of the victim and reproduce it on an ultra-realistic mask.

But how can we be sure that current biometric identification and authentication methods are capable of verifying an identity and thus detecting these presentation attacks?

If the biometric attributes (duplicated or falsified by the fraudster) are of high quality, facial comparison algorithms will logically determine that there is a match with the reference identity. The challenge here is to verify whether the presumed user is actually physically present in front of the camera, or whether a fraudster is simulating the presence of the person whose identity they want to usurp. This is where liveness detection algorithms come into play!

Also known as anti-spoofing techniques, these liveness detection algorithms can be:

  • active: the user is required to perform a random movement in front of the camera (such as blinking or turning the head), as in the sketch after this list;

  • passive: the user is asked to keep the face still and make no movement in front of the camera.
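To make the active case concrete, here is a minimal sketch of a blink check based on the Eye Aspect Ratio (EAR), a widely used cue: the ratio of vertical to horizontal eye-landmark distances collapses when the eye closes. The landmark coordinates would come from a face-landmark model (dlib, MediaPipe, etc.); the toy coordinates and the 0.2 threshold below are illustrative assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye Aspect Ratio (EAR) from six (x, y) eye landmarks.

    EAR drops sharply when the eye closes, which gives a simple cue
    for verifying that the user performed the requested blink.
    """
    # eye[0], eye[3]: horizontal corners; eye[1], eye[2], eye[4], eye[5]: lids
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_detected(ear_series: list[float], threshold: float = 0.2) -> bool:
    """True if the EAR dipped below the threshold in some frame (a blink)."""
    return any(ear < threshold for ear in ear_series)

# Landmarks would come from a face-landmark model; here we simulate
# an open eye and a closed eye with toy coordinates.
open_eye = np.array([[0, 2], [2, 4], [4, 4], [6, 2], [4, 0], [2, 0]], float)
closed_eye = np.array([[0, 2], [2, 2.3], [4, 2.3], [6, 2], [4, 1.7], [2, 1.7]], float)
ears = [eye_aspect_ratio(e) for e in (open_eye, open_eye, closed_eye, open_eye)]
print("active liveness check passed" if blink_detected(ears) else "no blink seen")
```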

To guarantee the highest level of security, providers of facial biometric solutions subject their algorithms to testing. These tests are carried out by specialized independent laboratories, in accordance with ISO/IEC 30107-3, the standard for biometric presentation attack detection (PAD). The aim is to evaluate these solutions on passive liveness detection and on the reliability of their algorithms against fraud.
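Concretely, ISO/IEC 30107-3 quantifies PAD performance through two complementary error rates: APCER (the share of attack presentations wrongly accepted as genuine) and BPCER (the share of genuine presentations wrongly rejected). A minimal sketch of both metrics, with made-up evaluation numbers:

```python
def apcer(attack_accepted: list[bool]) -> float:
    """Attack Presentation Classification Error Rate (ISO/IEC 30107-3):
    fraction of attack presentations (of a given attack type) that the
    system wrongly classified as bona fide."""
    return sum(attack_accepted) / len(attack_accepted)

def bpcer(bonafide_rejected: list[bool]) -> float:
    """Bona fide Presentation Classification Error Rate:
    fraction of genuine presentations wrongly classified as attacks."""
    return sum(bonafide_rejected) / len(bonafide_rejected)

# Toy evaluation run: True marks a system error on that sample
attacks_accepted = [False] * 98 + [True] * 2     # 2 of 100 masks slipped through
bonafide_rejected = [False] * 197 + [True] * 3   # 3 of 200 real users rejected
print(f"APCER = {apcer(attacks_accepted):.1%}, BPCER = {bpcer(bonafide_rejected):.1%}")
```

In a full ISO/IEC 30107-3 evaluation, APCER is reported per attack instrument species (photo, paper mask, 3D mask, etc.), and the worst species drives the headline figure.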

The effectiveness of these presentation attacks has diminished as facial biometrics solutions have improved their methods of countering them. At the same time, fraudsters are developing new, more sophisticated methods of attack, making it all the more crucial for R&D teams in the biometrics sector to keep a constant watch and implement appropriate countermeasures.

Facial biometric attacks: new techniques to counter the development of increasingly robust technologies

A- Video injection as an indirect biometric attack: the new danger of fraud

Video injection is the presentation of a video by software means: the stream from the physical camera is short-circuited and replaced with that of a virtual camera. The web page then retrieves this stream, or that of a video (possibly pre-recorded or digitally modified), instead of the one captured live.

Developed over the last few years, video injection is considered by market players to be the most advanced attack method. It bypasses the initial device entirely, since the video is injected rather than physically presented to the camera. This means it falls outside the scope of most conventional PAD (Presentation Attack Detection) tests.

Naturally, biometric solution providers are gradually implementing anti-injection measures to counter these new attacks, which are gaining ground in today's fraud landscape. The measures implemented by Unissey ensure two things in particular (a conceptual sketch of the second check follows this list):

  • that the video stream comes from the physical camera;

  • that the video is actually captured live at the moment it is requested (with a verification of the environment).
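To illustrate the second property, here is a conceptual sketch of a server-issued, time-limited challenge that binds a capture to the moment it was requested: a pre-recorded or replayed video cannot carry a fresh nonce. The shared secret, the 30-second window and the nonce-embedding step are all assumptions for illustration; Unissey's actual mechanism is not public.

```python
import hashlib
import hmac
import secrets
import time

SECRET = b"server-side key"   # illustrative shared secret, never shipped to clients
CHALLENGE_TTL = 30            # seconds the capture window stays valid (assumed)

def issue_challenge() -> tuple[str, str]:
    """Server issues a random nonce plus an HMAC binding it to a timestamp."""
    nonce = secrets.token_hex(16)
    issued_at = str(int(time.time()))
    challenge = f"{nonce}:{issued_at}"
    tag = hmac.new(SECRET, challenge.encode(), hashlib.sha256).hexdigest()
    return challenge, tag

def verify_capture(challenge: str, tag: str, embedded_nonce: str) -> bool:
    """Accept only if the tag is authentic, the window has not expired,
    and the nonce the client embedded in the capture session matches."""
    nonce, issued_at = challenge.split(":")
    expected = hmac.new(SECRET, challenge.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False                               # forged or altered challenge
    if time.time() - int(issued_at) > CHALLENGE_TTL:
        return False                               # pre-recorded / replayed capture
    return hmac.compare_digest(nonce, embedded_nonce)

challenge, tag = issue_challenge()
# The capture SDK would embed the nonce in the video session it streams back
print(verify_capture(challenge, tag, embedded_nonce=challenge.split(":")[0]))
```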

With the ability to block video injection, Unissey is also capable of blocking deepfakes, which are a sub-category of injected videos. Deepfakes nonetheless remain the bête noire of facial biometric technologies today.

B- Increasingly sophisticated deepfakes...

Deepfake software can create a synthetic video or image that realistically depicts anyone in the world doing or saying something they never actually did or said. Powerful video processing and augmented reality, leveraging machine learning, have made it possible to create convincing imitations of anyone, without requiring in-depth knowledge of video production or special effects.

The story of deepfakes began in 2014 with engineer Ian Goodfellow who, while a PhD student at the University of Montreal, introduced generative adversarial networks (GANs), showing how neural networks could generate artificial images of real people. The technique became widespread in 2017, when a Reddit user nicknamed "deepfakes" posted several erotic videos on the platform in which the performers' faces had been replaced with those of celebrities. The posts caused quite a stir, and the author's nickname christened this new phenomenon.

Deepfakes can also affect other areas where biometric technology is used. For example, a notorious fraud against a facial recognition system was revealed in China in 2021. Starting in 2018, two accomplices bought high-resolution photos on the online black market and "animated" them using deepfake applications. They then bought several reflashed smartphones, modified so that a prepared video could be fed to the identification step instead of the front camera's stream. Using this scheme, the fraudsters managed to cheat the Chinese tax authority's identity verification system for two years and issue false tax invoices. The damage caused to the Chinese treasury exceeded 76 million dollars.


This type of attack is particularly used against identification systems based on video evidence. If the attacker knows every step of the process and can inject the video or present it on a screen, they can fool both fully automated systems and those that combine automation with human review.

However, it should be noted that deepfake attacks have only two ways of penetrating the system: either by being presented to the camera, or by being directly injected into the camera stream.

Nevertheless, this technology still has limitations that can only be overcome with careful manual processing. It is also important to understand that all fake-face synthesis algorithms are based on the transformation of two-dimensional images. Additional control techniques that detect the virtually unavoidable digital artifacts can therefore be used to recognize deepfakes. For example, a fake face often shows mismatched eye colors, or inconsistent distances between the center of the pupil and the edge of the iris.
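As a toy illustration of this kind of artifact check, the sketch below compares the mean color of the two eye regions and flags a large mismatch. The region coordinates would come from a landmark detector, the synthetic frame stands in for a real capture, and the threshold is an arbitrary assumption; real detectors combine many such signals.

```python
import numpy as np

def eye_color_mismatch(frame: np.ndarray, left_box, right_box) -> float:
    """Mean per-channel color distance between the two eye regions.

    Face-swap artifacts sometimes leave the two eyes with slightly
    different hues; a large distance is a (weak) deepfake cue.
    """
    def mean_color(box):
        x0, y0, x1, y1 = box  # pixel coordinates of the eye region
        return frame[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(mean_color(left_box) - mean_color(right_box)))

# Synthetic 100x100 RGB frame with mismatched eye patches for demonstration
frame = np.full((100, 100, 3), 128, dtype=np.float32)
frame[30:40, 20:30] = (90, 60, 50)    # left eye region
frame[30:40, 70:80] = (70, 75, 50)    # right eye region, different hue
score = eye_color_mismatch(frame, (20, 30, 30, 40), (70, 30, 80, 40))
# 15 is an arbitrary threshold chosen for this toy example
print("possible deepfake artifact" if score > 15 else "eyes look consistent")
```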

C- ... which are struggling to keep pace with the development of biometric technologies

The good news is that modern biometric solutions can not only analyze camera video streams in real time, but also process archived video files, identifying potential deepfakes in recordings.

The fight against deepfakes is a serious challenge, and the answer will be an even more dynamic development of the technology. The more sensitive and discerning facial recognition algorithms become, the more sophisticated fraudsters' attempts to circumvent them. Yet each time, it becomes harder for deepfake creators to find "gaps" in biometric algorithms.

According to forecasts, the accuracy of deepfake identification by content recognition solutions will reach 70% by 2024, and 90% by 2030. This means that, in the near future, creating deepfakes capable of fooling biometric algorithms will become far less attractive, as the process becomes increasingly time-consuming and complex.