Dataspike - Catch AI if you can: compliance and deepfakes


Generative AI can be “good” or “evil”

The recent advances in generative artificial intelligence have great potential to change everyday life. Within months of its release, ChatGPT fascinated the tech-savvy community with its ability to assist humans in content creation and consumption. Deepfakes hit the mainstream several years ago and are now slowly progressing toward professional use in CGI. Text-to-image generation with DALL-E and other creative AI systems is simply fun.

Current mainstream generative AI shouldn't be overestimated. It's suitable for writing plausible-looking texts, generating fancy profile pictures, or adding celebrities' faces to M-rated videos. Yet fraudsters can use it effectively to cause chaos. For instance, ChatGPT produces convincing phishing emails or malicious code if asked the right way. Vice journalist Joseph Cox bypassed the voice ID on his bank account using an AI-powered replica of his voice. Finally, deepfakes are putting standard eKYC technologies to a stress test.

Electronic KYC is vulnerable to deepfakes

In the pre-digital era, KYC was a time-consuming and less-than-convenient process that involved live appointments and lots of paperwork. Some consulting firms even specialized in checking a customer's name, address, and other details against public databases, but the process was slow and not particularly reliable. This approach became obsolete in the mid-2000s with the rapid growth of fintech, retail investing, online gambling, and other regulated markets.

The modern electronic KYC process doesn't rely on detailed background checks but on computer vision technologies that can confirm that the person in front of a camera is real and matches the documents provided. The steps of the digital-first KYC journey include ID verification, face matching between the ID and a selfie photo, and a liveness test that requires a user to perform simple actions, such as turning their head or smiling. KYC service providers also check the customer's name and surname against various PEP and sanctions lists and legal databases, but that's all.
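The flow above can be sketched as a simple decision pipeline. This is a minimal illustration only: every function name, field, and threshold below is hypothetical and stands in for a real vendor's computer vision and screening APIs, not any specific provider's interface.

```python
from dataclasses import dataclass

def verify_document(doc: dict) -> bool:
    # Real systems inspect security features, MRZ checksums, expiry dates, etc.
    return doc.get("mrz_valid", False) and not doc.get("expired", False)

def match_faces(doc: dict, selfie: dict) -> bool:
    # Real systems compare face embeddings; here we use a precomputed score.
    return selfie.get("similarity_to_doc_photo", 0.0) >= 0.8

def check_liveness(frames: list) -> bool:
    # Challenge-response liveness: did the user perform each requested action
    # (turn their head, smile) on camera?
    return bool(frames) and all(f.get("action_detected", False) for f in frames)

def screen_watchlists(full_name: str, watchlist: set) -> bool:
    # Simplified exact-match screening against PEP/sanctions lists.
    return full_name.strip().lower() in watchlist

@dataclass
class KycResult:
    id_verified: bool
    face_matched: bool
    liveness_passed: bool
    watchlist_hit: bool

    @property
    def approved(self) -> bool:
        return (self.id_verified and self.face_matched
                and self.liveness_passed and not self.watchlist_hit)

def run_ekyc(doc, selfie, frames, full_name, watchlist) -> KycResult:
    return KycResult(
        id_verified=verify_document(doc),
        face_matched=match_faces(doc, selfie),
        liveness_passed=check_liveness(frames),
        watchlist_hit=screen_watchlists(full_name, watchlist),
    )
```

Note that the face-matching and liveness steps are exactly the ones a deepfake attacks: if a synthetic face can both match the forged document and perform the challenge actions, this pipeline approves the account.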

This is the weak spot of the modern KYC process. A CoinDesk investigation from a couple of years ago identified numerous Telegram channels selling verified accounts at major crypto exchanges and payment service providers (something you would expect to find on the dark web, not on a popular messaging app). Such vendors typically offer hijacked accounts. The most troubling finding, however, was that some accounts, available for a mere $200, were created from fully synthetic data: fake documents plus deepfakes to pass the selfie and liveness tests.

More recent research by the developers of the Deepfake Offensive Toolkit (a specialized penetration testing tool) confirmed that many existing eKYC providers fail to identify deepfakes, raising questions about the reliability of automated KYC as a whole.

But it improves every day in many ways

There are several reasons why electronic KYC is, in fact, reliable enough.

First, deepfake detection progresses along with deepfake generation.

Both the creation and the detection of deepfakes (and of any other computer-generated objects trying to mimic the real world) rest on the same concept: generative adversarial networks (GANs). A GAN is built from two neural networks: a generator that constantly tries to create something (for instance, a realistic human face) and a discriminator that tries to distinguish it from a real object. This way, the system learns and achieves slightly better results with each training cycle.
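The adversarial loop can be shown with a toy example. Here, instead of faces, the "real data" is just numbers centered around 4; the generator is a single learned offset, and the discriminator is a one-variable logistic classifier. This is a deliberately minimal sketch of the GAN training dynamic, not a production architecture.

```python
import math
import random

random.seed(0)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

REAL_MEAN = 4.0  # the "real" distribution the generator must imitate

def sample_real(n: int) -> list:
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

mu = 0.0        # generator: G(z) = mu + z, a single learned offset
w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w*x + b)

LR, BATCH, STEPS = 0.05, 16, 3000

for _ in range(STEPS):
    real = sample_real(BATCH)
    fake = [mu + random.gauss(0.0, 1.0) for _ in range(BATCH)]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    gw = gb = 0.0
    for x in real:
        s = sigmoid(w * x + b)
        gw += (1 - s) * x   # gradient of log D(x) w.r.t. w
        gb += (1 - s)
    for x in fake:
        s = sigmoid(w * x + b)
        gw += -s * x        # gradient of log(1 - D(x)) w.r.t. w
        gb += -s
    w += LR * gw / (2 * BATCH)
    b += LR * gb / (2 * BATCH)

    # Generator step: push D(fake) toward 1 (non-saturating loss),
    # which drags mu toward the real mean.
    gmu = sum(sigmoid(w * x + b) * 0 + (1 - sigmoid(w * x + b)) * w
              for x in fake)
    mu += LR * gmu / BATCH

print(f"learned generator mean: {mu:.2f} (real mean: {REAL_MEAN})")
```

After training, `mu` ends up near the real mean: each player's improvement forces the other to improve, which is exactly why detection tooling can ride the same curve as generation tooling.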

When Google had to deal with the increasing volume of machine-generated content on the web, it used neural text generators to build effective human-vs-machine discriminators. Deepfake detection follows the same path, learning from deepfake generation, though it slightly lags behind because it must also learn to avoid false positives.

Second, some governments understand their role in customer due diligence.

As more governments go through digital transformation to reduce bureaucracy, improve transparency, and automate public services, they create opportunities for improvement in critical areas, such as preventing money laundering, organized crime, and terrorism financing. For instance, eIDAS dramatically reduced the compliance burden for companies and nonprofits in the EU with a framework for remote ID verification and KYC data portability.