Compliance with anti-money laundering (AML) and know-your-customer (KYC) laws and regulations is often a top priority for digital asset companies. Yet despite efforts to monitor and detect fraudulent activity, these companies face significant challenges from emerging technological threats such as generative AI (gen AI).
Gen AI can create highly realistic deepfakes and fake documents, and can even flood a person’s social media accounts with fabricated posts, almost instantly constructing a convincing life story to support a false identity. In one recent case, fraudsters used deepfake technology to simulate a video conference involving the CFO and other executives of a multinational financial company, tricking employees into transferring approximately $26 million to the fraudsters.
This means that gen AI’s ability to generate compelling deepfake audio and video almost instantly could wreak havoc on existing governance systems designed to protect consumers.
Current KYC mechanisms cannot keep pace with advances in gen AI
AML and KYC programs are essential for financial institutions to verify the identities of their customers and comply with laws enacted to combat money laundering, fraud, and terrorist financing. However, many cryptocurrency companies have weak or inadequate KYC controls, increasing the risk of fraud. According to CoinDesk, cryptocurrency users lost approximately $4 billion to “fraud, unauthorized access, and hacks” in 2022 and approximately $2 billion in 2023.
Digital asset companies generally lack the physical presence of traditional financial institutions and therefore must rely on KYC methods suited to a remote environment. Commonly used verification methods include taking a selfie while holding a handwritten sign showing the current date, photographing the user’s driver’s license or other government-issued identification, and recording a live video of the user answering security questions to verify both identity and “liveness.”
But gen AI can circumvent these verification methods. For example, services like OnlyFake use AI to create fake identity documents that allegedly pass the strict KYC checks of major cryptocurrency exchanges like Binance and Coinbase. These fake identities are generated using neural networks and can be purchased for as little as $15. The Deepfake Offensive Toolkit, or dot, creates deepfakes for virtual camera injection, allowing users to swap their face for an AI-generated one during identity verification. According to The Verge, financial institutions’ KYC identity tests, which typically require users to look into their phone or laptop camera, can easily be fooled by the deepfakes dot generates.
Combining AI with blockchain can help counter gen AI-enabled fraud
Blockchain and AI are complementary technologies that can be effective, alone or in combination, in detecting and investigating fraud.
Blockchain for verification
Decentralization, immutability, and rules-based consensus are core features of blockchain technology that aid identity verification and fraud detection. Transactions written to the blockchain are immutable (i.e., the data cannot be deleted or modified), preventing fraudsters from altering transaction records. Transactions written to a public blockchain, such as the Bitcoin blockchain, are also fully searchable and transparent, making it difficult for fraudulent activity to go undetected. Blockchains are inherently decentralized as well, making it harder for a single entity or a small group of entities to make unauthorized changes to on-chain data. Finally, data on a blockchain can be cryptographically hashed, generating a unique digital fingerprint that is nearly impossible to reproduce. This helps expose tampering: if someone alters data on the blockchain, the hash value changes as well.
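To make the hashing point concrete, here is a minimal Python sketch (using only the standard library) of how hash-chained records expose tampering. The block structure and transaction strings are simplified illustrations, not the format of any real blockchain:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Return a SHA-256 fingerprint of a block's canonical JSON encoding."""
    encoded = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(encoded).hexdigest()

# Two toy blocks; each block commits to the hash of its predecessor.
genesis = {"index": 0, "prev_hash": "0" * 64, "tx": "alice->bob:10"}
block_1 = {"index": 1, "prev_hash": block_hash(genesis), "tx": "bob->carol:4"}

# Tampering with the earlier transaction changes its hash, so block_1's
# stored prev_hash no longer matches and the alteration is detectable.
genesis["tx"] = "alice->bob:1000"
print(block_1["prev_hash"] == block_hash(genesis))  # False -> chain is broken
```

Because each block commits to its predecessor’s hash, altering any historical record invalidates every subsequent link, which is why unauthorized changes are so conspicuous on a blockchain.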
AI for detection
AI can enhance fraud detection by analyzing user behavior patterns and identifying anomalies in real time. Whereas blockchain technology helps audit past transactions, AI can learn and adapt to potentially fraudulent behavior as it happens. Advanced detection algorithms can flag suspicious activity that deviates from normal usage, quickly sifting through large amounts of data to identify subtle inconsistencies that humans cannot detect. Machine learning models and AI-driven behavioral analytics can analyze user interactions such as mouse movement patterns and typing styles, adding a layer of identity verification on top of the blockchain. AI’s ability to proactively monitor for and detect fraud and blockchain’s ability to authenticate users’ identities and the validity of transactions make a powerful combination.
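As a rough illustration of behavioral anomaly detection, the sketch below trains an unsupervised model on per-session behavioral features. It assumes scikit-learn and NumPy are available, and the features (mouse speed, keystroke timing, session length) and their distributions are hypothetical stand-ins for real telemetry:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per session: mean mouse speed (px/s),
# mean inter-keystroke interval (ms), and session duration (s).
rng = np.random.default_rng(42)
normal_sessions = rng.normal(loc=[300.0, 180.0, 120.0],
                             scale=[40.0, 25.0, 30.0],
                             size=(500, 3))

# Train on historical sessions assumed to be legitimate.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# Score a new session exhibiting bot-like behavior (very fast, uniform, brief).
new_session = np.array([[950.0, 12.0, 4.0]])
print(model.predict(new_session))  # [-1] -> flagged as anomalous
```

In practice, a session scored as anomalous would be routed to step-up verification or manual review rather than blocked outright, since behavioral signals are probabilistic rather than conclusive.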
Advances in gen AI create an urgent need for solutions
As AI deepfakes become more convincing, crypto-related cybercrime will only increase. But in the face of this growing threat, several startups have developed AI-centric blockchain tools to combat fraud and other illicit activity in the digital asset industry.
For example, BlockTrace and AnChain.AI are two companies leveraging the synergy of blockchain and AI to fight crypto-related crimes. BlockTrace, whose mission is to help governments and private enterprises combat crypto-related financial crimes, recently partnered with AnChain.AI, a company that uses AI to fight digital asset fraud, scams, and financial crimes. Together they provide solutions that enable national security agencies to use AI to investigate smart contracts, gather intelligence on blockchain transactions, and deliver cybersecurity insights to national security officials.
The industry is on the verge of harnessing the full potential of AI and blockchain to combat AI-enabled fraud, and given that AI is advancing at breakneck speed, there will be many more developments to come.