A New Level of Risk in Crypto Fraud: The ProKYC Deepfake Tool
Cato Networks has reported a concerning advancement in crypto fraud techniques with the emergence of a new AI-driven deepfake tool known as ProKYC.
This tool enables malicious actors to circumvent stringent Know Your Customer (KYC) measures employed by cryptocurrency exchanges, showcasing a remarkable evolution from traditional fraud methods.
Etay Maor, the chief security strategist at Cato Networks, stated that this innovation marks a significant enhancement over previous approaches, where cybercriminals typically resorted to purchasing forged identification documents from the dark web.
How Does ProKYC Work?
The ProKYC platform allows fraudsters to generate entirely fabricated identities without the need for any existing credentials.
A fake passport of choice, created from a generated identity. (Source: Cato Networks)
This capability specifically targets financial institutions with KYC protocols that require matching webcam images of users to their official government-issued identification, such as passports or driver’s licences.
A video demonstration from ProKYC illustrates how easily a user can create a convincing AI-generated face, integrate it into a template of an Australian passport, and subsequently produce a deepfake video that successfully passes facial recognition tests on major crypto exchanges like Bybit.
Generating an address (Source: Cato Networks)
In this demo, the user first crafts a lifelike AI-generated face and embeds it into the passport template.
Following this, the ProKYC tool creates a deepfake video that features the synthetic individual, allowing the user to bypass KYC checks designed to prevent such fraudulent activity.
The software even asks which platform the KYC video is being created for, listing Binance, Bybit, Coinlist, Okex (now known as OKX), Gate.io and Huobi as options. (Source: Cato Networks)
The AI video generated by the software appears astonishingly realistic, blurring the line between reality and fiction.
Maor cautioned: “Creating biometric authentication systems that are super restrictive can result in many false-positive alerts. On the other hand, lax controls can result in fraud.”
The software connects to the camera feed, allowing the AI-generated deepfake video to be used in the KYC verification process. This is alarming, as such footage should never pass through a KYC system undetected. (Source: Cato Networks)
The Rise of New Account Fraud (NAF)
Cato Networks highlights that tools like ProKYC significantly enhance the capabilities of cybercriminals, enabling them to create fraudulent accounts on crypto exchanges—a practice referred to as New Account Fraud (NAF).
The synthetic accounts created this way can then be used to launder money and to operate as mule accounts.
The financial impact of such schemes is staggering; according to the AARP, NAF accounted for over $5.3 billion in losses in 2023, a notable increase from $3.9 billion the previous year.
The ProKYC website offers a subscription package priced at $629, which includes a camera, a virtual emulator, facial animation tools, and the ability to generate verification photos.
Beyond crypto exchanges, the tool claims compatibility with various payment platforms, such as Stripe and Revolut, expanding its potential misuse.
Addressing the Challenge of Detection
Detecting and preventing these advanced AI-driven fraud tactics presents considerable challenges.
As Maor pointed out, overly stringent systems could trigger false positives, complicating the balance between security and usability.
“Creating biometric authentication systems that are super restrictive can result in many false-positive alerts,” he reiterated.
However, there are still potential methods for identifying these AI-generated threats.
Some detection strategies rely on human analysts to spot high-quality images and videos, along with any discrepancies in facial movements or image consistency.
The identification of deepfake content is critical, as sophisticated forgeries can easily deceive automated systems.
Indicators such as unusually high-resolution images, glitches in facial movements, or inconsistencies in eye and lip synchronisation can signal potential fraud.
Human intervention remains vital for confirming the authenticity of such submissions.
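The indicators above could, in principle, feed a simple rule-based pre-screen that routes suspicious submissions to a human analyst. The sketch below is purely illustrative and is not anything ProKYC targets or Cato Networks describes; the thresholds, field names, and the blink-rate heuristic (deepfake videos have historically tended to under-blink) are all assumptions:

```python
# Hypothetical pre-screen for KYC video submissions.
# All thresholds and parameters are illustrative assumptions.

# Common consumer webcam resolutions; submissions outside these may warrant review.
TYPICAL_WEBCAM_RESOLUTIONS = {(640, 480), (1280, 720), (1920, 1080)}


def prescreen_submission(width, height, blink_count, duration_s):
    """Return a list of risk flags for human review.

    Heuristics mirror two of the indicators discussed above:
    - unusually high resolution for a consumer webcam feed
    - too few eye blinks for the video's length
    """
    flags = []

    # Flag footage sharper than any typical webcam output.
    if (width, height) not in TYPICAL_WEBCAM_RESOLUTIONS and width * height > 1920 * 1080:
        flags.append("atypical-high-resolution")

    # Humans blink roughly 15-20 times per minute; far fewer is suspicious.
    expected_blinks = duration_s / 60 * 15
    if blink_count < expected_blinks * 0.3:
        flags.append("low-blink-rate")

    return flags
```

In practice, the blink count and resolution would come from an upstream video-analysis stage; flagged submissions would then go to the human review step the article calls vital, rather than being auto-rejected, to keep false positives manageable.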
The Future of AI and Cybersecurity
The rapid evolution of AI technologies has created a landscape where threat actors are continually refining their methods.
Cato CTRL emphasises the importance of organisations staying informed about emerging threats and adapting their security measures accordingly.
The recommendations include gathering intelligence from various sources, including human intelligence (HUMINT) and open-source intelligence (OSINT), to remain vigilant against the latest cybercrime trends.
As the landscape of cyber threats continues to evolve, the sophistication displayed by tools like ProKYC highlights the urgent need for enhanced security measures within financial institutions.
The struggle against such advanced forms of fraud is ongoing, and proactive efforts are necessary to mitigate these risks effectively.