In 2025, Shengliang Lu et al. published "AI Applications in Web3 SupTech and RegTech: A Regulatory Perspective," which noted that the digital landscape is undergoing a profound transformation driven by the rise of Web3 technologies and virtual assets. This new era of internet technology leverages distributed ledger technology and smart contracts, simultaneously promoting decentralization, increasing transparency, and reducing reliance on intermediaries. These innovations have been crucial in shaping decentralized finance (DeFi). However, the rapid adoption of Web3 technologies also carries significant risks, highlighted by a series of high-profile failures and systemic vulnerabilities. The Abu Dhabi Global Market (ADGM), through its Financial Services Regulatory Authority (FSRA), has established a transparent and advanced regulatory framework aligned with international standards, fostering a supportive regulatory environment and safeguarding the interests of stakeholders. This white paper explores the integration of artificial intelligence (AI) into regulatory technology to enhance compliance monitoring and risk management. It details the research and development work of the Asian Institute of Digital Finance at the National University of Singapore, the FSRA, and the ADGM Academy Research Centre, and concludes with a summary of key findings and suggested future collaborations to further improve the regulatory landscape. The core research sections were compiled by the Institute of Fintech at Renmin University of China.
1. Introduction
With Web3 technologies leading the advancement of internet technology, the digital landscape is undergoing rapid transformation. Built on distributed ledger technology (DLT) and smart contracts, Web3 emphasizes decentralization, increased transparency, and reduced reliance on intermediaries.
Distributed ledger technologies, including blockchain, provide secure, immutable records of transactions and data, while smart contracts facilitate automated agreements without intermediaries. This combination supports the growth of decentralized applications (dApps), particularly in the decentralized finance (DeFi) sector, which are reshaping financial transactions through peer-to-peer interactions. The global cryptocurrency market capitalization has surpassed the $3 trillion mark, rivaling some of the world's largest companies, including Apple and Microsoft. The cryptocurrency user base has also expanded significantly, growing by 34% in 2023 alone, from 432 million in January to 580 million in December. This trajectory underscores the accelerating adoption and integration of cryptocurrencies into the global financial landscape. Furthermore, data shows that the United Arab Emirates (UAE) leads the world in cryptocurrency adoption, with over 30% of its population (approximately 3 million people) owning digital assets, reflecting the country's forward-thinking embrace of fintech and its ambition to become a leading fintech hub. ADGM plays a key role in this rapidly evolving financial landscape. As the authority overseeing financial services in the international financial center and free zone, the ADGM Financial Services Regulatory Authority (FSRA) has been at the forefront of fostering a regulatory environment that supports not only the growth of DeFi and virtual assets (VAs), but also the broader digital transformation of the financial services sector. Since launching its comprehensive regulatory framework for virtual assets in 2018, the FSRA has continuously enhanced it, supporting innovation while ensuring strong oversight and alignment with international standards.
Embracing digital transformation, ADGM has worked closely with technology ecosystem partners such as Hub71 and research institutions like the National University of Singapore to promote the adoption of cutting-edge technology solutions within ADGM. This proactive approach has helped position Abu Dhabi as a preferred destination for financial firms seeking to leverage advanced technologies and digital financial models. To further enhance its regulatory capabilities, the FSRA is leveraging advances in Regulatory Technology (RegTech) and Supervisory Technology (SupTech) to streamline regulatory and supervisory processes. Through AI-powered RegTech solutions, the FSRA can offer more interactive and customized supervisory engagement, making compliance more efficient and convenient for entities operating within ADGM. Implementing AI-powered RegTech tools supports the FSRA's supervisory and risk management objectives while reducing costs for financial institutions. Together, these initiatives underscore the FSRA's mission to provide a transparent, efficient, and advanced financial environment that not only protects the interests of clients, investors, and industry participants, but also fosters sustainable growth and innovation in ADGM. Supervisory Technology (SupTech) refers to the application of technology to enhance the supervisory and inspection functions of regulatory authorities. It involves the use of advanced tools such as data analytics, artificial intelligence, and automation to improve the monitoring and oversight of regulated activities and the enforcement of the regulatory framework. SupTech aims to provide regulators with more effective, data-driven insights, enabling them to better identify issues, assess risks, and enforce regulations in real time. RegTech refers to the use of technology to streamline, automate, and improve regulatory compliance processes for businesses.
It leverages innovative tools such as artificial intelligence, machine learning, automation, and data analytics to help firms meet regulatory requirements more efficiently, reduce compliance costs, and enhance transparency and reporting quality. RegTech aims to simplify complex compliance tasks such as monitoring transactions, identifying risks, and ensuring adherence to legal standards. Emerging risks arising from the characteristics of Web3 technologies, such as the failure of blockchain protocols like Terra (LUNA) and newly surfacing vulnerabilities in smart contracts, highlight the need for effective regulatory frameworks and risk management strategies. The innovative and decentralized nature of blockchain technology creates a breeding ground for new types of fraud and systemic failures, which must be addressed for wider adoption. As part of this response, ADGM is exploring the application of artificial intelligence (AI) in regulatory and supervisory technology solutions to improve compliance monitoring and risk management. The National University of Singapore's Asian Institute of Digital Finance (NUS AIDF) conducts fintech research in the area of AI technologies, developing tools for predictive analytics, anomaly detection, and automated compliance. The FSRA is testing and validating these AI technologies to address the emerging needs for effective regulation and supervision of the Web3 and virtual asset ecosystems. This white paper summarizes the research and development work of NUS AIDF and ADGM (including the FSRA and the ADGM Academy Research Centre) on the application of AI technologies to support regulatory and supervisory activities in the Web3 and virtual asset sectors. Because this article is intended for a broader audience and does not aim to provide precise definitions, readers should note that the terms "virtual assets," "Web3," "blockchain," "DLT," and "network" are used interchangeably throughout.
Nevertheless, some of the terms are explained in Section 2. The remainder of the article is structured as follows. Section 2 provides the background and scope of this article, while Section 3 discusses potential opportunities for regulators to leverage AI technologies. Section 4 explores AI innovations that are shaping regulatory actions and activities. Section 5 examines pilot projects conducted by NUS AIDF and ADGM, showcasing practical applications of these innovations, such as smart contract assessments, security audits, and AI-driven due diligence. Section 6 concludes the article, summarizing the findings and exploring future directions and potential areas for strengthening the regulatory landscape.
2. Background
This section aims to explain the key terms used in this article and lay the foundation for readers to better understand the discussion in subsequent sections.
Virtual Assets. The FSRA's regulatory framework divides digital assets into different categories, which also include fiat-referenced tokens and digital securities. A virtual asset is a digital representation of value that can be traded digitally and used as (1) a medium of exchange; and/or (2) a unit of account; and/or (3) a store of value, but does not have legal tender status in any jurisdiction. Virtual assets (a) are neither issued nor backed by any jurisdiction, and the above functions are achieved only through agreement within the virtual asset user community; and (b) are distinct from legal tender and electronic money. Web3. Web3 represents the next evolution of the internet, transitioning from "read" (Web1) and "read-write" (Web2) to "read-write-own" capabilities. Unlike the centralized platforms of Web2, Web3 leverages blockchain technology to give users true ownership of their data, digital assets, and online interactions. This decentralized paradigm reduces reliance on intermediaries, fosters greater user autonomy and privacy, and redefines how individuals interact with digital platforms. Distributed Ledger Technology (DLT) and Blockchain Networks. DLT is a digital system for recording asset transactions, with data stored simultaneously across multiple sites or nodes. Unlike traditional centralized databases, DLT is decentralized, eliminating the need for a central authority and thereby enhancing transparency and security. Each participant in the network maintains a synchronized copy of the ledger, reducing the risk of single points of failure. Blockchain, a specific type of DLT, organizes data into cryptographically secured blocks, which are linked chronologically to form a chain. This structure ensures that recorded data is immutable. Virtual assets are typically built on blockchain networks. In Web3, DLT and blockchain networks power DeFi platforms and decentralized applications (dApps) by enabling secure and transparent transactions.
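The chained-block structure described above can be illustrated with a minimal sketch. This is a toy model, not a real blockchain (it omits consensus, signatures, and Merkle trees); it only shows how each block commits to the hash of its predecessor, so that any retroactive edit invalidates every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically (sorted keys)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

# Build a three-block chain; each block records its predecessor's hash.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("tx: A pays B 10", block_hash(genesis))
b2 = make_block("tx: B pays C 4", block_hash(b1))
chain = [genesis, b1, b2]

def verify(chain: list) -> bool:
    """Recompute each link; an edit to any earlier block breaks all later links."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))           # -> True: the chain is intact
chain[0]["data"] = "tampered"  # retroactive edit to the first block
print(verify(chain))           # -> False: every later link is now invalid
```

This is why immutability holds in practice: rewriting history requires recomputing every subsequent block, which decentralized consensus makes prohibitively difficult.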
Decentralized finance (DeFi) refers to a financial ecosystem built on blockchain and DLT that enables peer-to-peer transactions and services without the need for traditional intermediaries such as banks or financial institutions. DeFi applications utilize smart contracts—self-executing programs on blockchain networks—to automate and execute financial operations such as lending, trading, and investing. Artificial Intelligence (AI). Broadly speaking, AI refers to a collection of technologies that enable machines or systems to understand, learn, act, reason, and perceive in a human-like manner. AI systems leverage algorithms, data, and computing power to continuously adapt and improve. The surge in AI tools in recent years has opened up opportunities for the financial industry to integrate these capabilities into a variety of use cases. Artificial intelligence offers significant benefits, including improved operational efficiency, enhanced regulatory compliance, personalized financial products, and advanced data analytics. Back in 2022, the FSRA launched an initiative called "OpenReg" to make regulatory content machine-readable. This project enables compliance technology firms and the data science community to leverage this AI training ground to build the next generation of AI-powered compliance technology solutions. In this article, as part of the FSRA's ongoing process of integrating AI technologies into its supervisory approach, we detail our practical application of AI for compliance technology and supervisory technology in Web3 supervisory actions and activities. In doing so, we consider valuable insights from a recent report by the Financial Stability Board (FSB), the supervisory principles outlined in the EU's AI Act, and the risk framework developed by Project MindForge.
3. Opportunities for Leveraging Artificial Intelligence to Regulate Web3 Activities
Due to the unique characteristics of blockchain technology, smart contracts, and the speed of Web3 innovation, Web3 regulatory frameworks present some nuances compared to traditional regulation. Globally, recent Web3 regulatory efforts have focused primarily on virtual assets and the platforms on which they are traded. This includes enforcing anti-money laundering (AML) measures, such as integrating Know Your Transaction (KYT) solutions and implementing the Travel Rule; establishing prudential guidance for stablecoin issuers; and, more recently, regulating decentralized, ownerless entities such as DLT foundations and decentralized autonomous organizations (DAOs). These efforts to establish regulatory frameworks and impose safeguards to protect customers and investors demonstrate the growing acceptance of virtual assets and Web3. When examining the inherent characteristics of Web3 and virtual assets from a financial regulator's perspective, the following points, among others, must be considered: they operate continuously, 24/7, through self-executing smart contracts on DLT with minimal human oversight; they present heightened security risks due to vulnerabilities in smart contract coding, potential exploits, and reliance on decentralized networks; and they introduce "new" concepts that either leverage blockchain innovations to transform existing traditional financial frameworks or propose novel ideas with no historical precedent. The decentralized nature of Web3 ensures the immutability of transactions and smart contracts, enhancing trust and transparency, but also makes it challenging to address errors such as "fat finger" mistakes, hacking attacks, or unintended consequences. Regulating Web3 activities presents several challenges, necessitating innovative regulatory approaches and the development of new tools to enhance oversight, monitoring, and enforcement.
However, these challenges also present significant opportunities to shape a stronger future for the Web3 ecosystem. Fast-paced Innovation and Risk Identification. The innovative nature and rapid pace of Web3 technologies make timely identification and mitigation of emerging risks challenging. This dynamic environment requires a higher degree of responsiveness in regulatory processes and frameworks to ensure regulators remain agile and can effectively identify, assess, and respond to potential risks. Gaps in responsiveness increase the potential for fraud and market failure. However, these regulatory challenges also create opportunities to build frameworks from the ground up, allowing for the integration of forward-looking principles that can be adjusted over time. This can encourage the development of efficient business models adapted to the unique characteristics of Web3, ultimately fostering a stable and vibrant market that both meets regulatory objectives and promotes industry growth. Artificial intelligence can play a role in facilitating the investigation of related issues and the development of regulatory frameworks by quickly identifying areas for improvement in regulatory rulebooks to rapidly respond to Web3 developments. Advanced Real-Time Risk Monitoring. Effective risk monitoring in the Web3 ecosystem requires advanced tools that can analyze massive amounts of blockchain data in real time. Given the 24/7 operation of DLT and smart contracts, traditional point-in-time regulatory approaches often struggle to handle the volume and complexity of transaction data generated. Therefore, regulators urgently need to develop more sophisticated analytical tools. Implementing continuous monitoring systems and automated risk management tools can help monitor regulatory compliance and enable proactive responses to potential threats. Jurisdictional Complexity. 
The decentralized nature of Web3 activity often presents cross-jurisdictional challenges for regulatory approaches. Because each regulator's approach to virtual asset governance may differ, firms may find it difficult and costly to maintain compliance with multiple, and sometimes conflicting, regulatory requirements, increasing the tendency to engage in regulatory arbitrage. AI-powered compliance technology tools have the potential to help firms streamline and manage these complexities. By automating routine compliance tasks, identifying overlapping regulatory requirements, adapting more efficiently to new rules, and assisting with regulatory reporting, AI can reduce costs and operational burdens, ultimately making it easier for firms to meet diverse regulatory expectations. In the following sections, we explore the benefits of using AI in regulatory processes across various scenarios.
4. AI Innovation
AI technology has advanced significantly in recent years, transforming the operational and innovation landscape across industries. In the Web3 and virtual asset (VA) sectors, AI has the potential to significantly enhance regulatory oversight and compliance. This section provides an overview of emerging AI technologies and explores how AI innovations may reshape the regulatory landscape for Web3. It first briefly introduces widely used AI models (addressing only those with broad regulatory application potential), followed by a discussion of use cases for employing these AI technologies in supervisory activities. We also discuss the key challenges facing the use of AI before considering possible future developments.
4.1 Emerging AI Technologies
Machine Learning (ML). Machine learning is a subset of AI that focuses on making predictions or decisions based on data.
Machine learning algorithms excel at analyzing large amounts of transaction data to detect patterns and anomalies that indicate fraudulent activity or compliance issues. By applying supervised, unsupervised, and reinforcement learning techniques, ML models can adapt and improve over time, providing regulators with a powerful tool for improving the efficiency and accuracy of their monitoring without the need for constant human oversight. Natural Language Processing (NLP). Natural language processing focuses on enabling computers to understand and process human language (i.e., text). By automatically extracting and analyzing key information from vast amounts of documents and communications, NLP can bring efficiency to regulatory reviews and assessments. Advanced NLP models have made significant progress in understanding and generating human-like text, which can be used to automate responses to inquiries from regulators and the public. However, NLP technology carries the potential risk of misinterpretation and bias, as models may not fully account for context or tone that varies across cultures and social norms. If these technologies are used without human intervention, such challenges can lead to inaccurate regulatory responses or actions. Generative AI. Generative AI refers to AI technologies that can generate new content (such as text, images, and other media) based on existing data. AI Agents.
AI Agents are specialized implementations of generative AI models capable of performing complex tasks through pre-set workflows, such as automating customer service interactions, generating legal and regulatory documents, and even conducting virtual negotiations on behalf of human operators. Generative AI and AI Agents have many potential applications in the regulatory realm. For example, regulated entities can use them to automatically generate detailed periodic or on-demand compliance reports. Regulators can also leverage such AI technologies to analyze large volumes of regulatory filing data and generate a shortlist of potential violations and risk indicators. However, similar to the inherent limitations of natural language processing technology, current generative AI models, primarily based on large language models (LLMs), are limited in the accuracy and reliability of their output due to the potential for "hallucinations" and contextual misunderstandings. Artificial General Intelligence (General AI). General AI refers to highly autonomous systems capable of performing any cognitive task a human can undertake. Unlike generative AI, which is designed for specific content creation tasks, General AI is characterized by its versatility and ability to adapt to a wide range of scenarios without specific prior programming. While still in its conceptual stage, General AI could facilitate highly adaptive regulatory oversight and compliance management systems that autonomously adapt to new regulations and complex legal compliance requirements with minimal to no human intervention.
4.2 AI Solutions for Web3 Regulation
In this section, we explore how different types of AI technologies can be applied in the Web3 regulatory domain to address challenges in monitoring, enforcement, and compliance management. We group these technologies into two main categories: applications using Narrow AI and those using Generative AI.
Note that Narrow AI refers to AI systems designed to perform specific tasks and operate within limited constraints; they are also referred to as "specialized AI" or "weak AI." Regulatory Reporting Tools. AI-powered regulatory reporting tools can automate the collection, submission, and analysis of regulatory filings and certification reports. These systems leverage advanced data mining and processing algorithms to extract and organize information from vast data sets to facilitate seamless regulatory reporting. In addition to reporting automation, AI tools that perform predictive analytics can help regulated entities identify risk factors, thereby reducing potential compliance failures. For example, AI can be used to monitor and predict financial risks that could hinder compliance with liquidity and capital obligations. Risk Profiling. Specialized AI systems for risk profiling analyze and categorize virtual assets or financial entities based on their risk characteristics and applicable regulatory requirements. These systems evaluate historical performance, market behavior, and external factors to maintain a dynamic risk profile. By continuously learning from new data and regulatory updates, these AI profiling tools can keep pace with the evolving financial landscape. Know Your Transaction (KYT). Leveraging graph analysis and graph neural networks (GNNs), AI-powered KYT and anomaly detection systems can be specifically designed to monitor and analyze accounts and transactions on blockchain networks. By leveraging AI's ability to examine complex blockchain transaction flows, regulated entities will be better able to identify high-risk transactions and accounts and improve their enforcement of anti-money laundering (AML) requirements. While existing KYT solutions are primarily rule-based, industry participants are integrating AI technologies, such as using pattern recognition for wallet clustering and cross-chain asset flow analysis. Financial Risk Assessment.
In traditional finance, AI models are already being used for cash flow forecasting and liquidity management. In DeFi, platform operators and users can employ AI models to more effectively manage liquidity by analyzing and predicting liquidity risks within and across decentralized exchanges and lending platforms. These models can be used to monitor trading volume, token reserves, and user behavior to identify potential liquidity shortages before they become severe. The early warnings and actionable insights provided by such models are useful not only to financial institutions providing services to consumers but also to regulators overseeing these services, helping to maintain stability and confidence in the DeFi ecosystem. Automated Compliance Checks. Automated compliance checks, performed by generative AI, can revolutionize how businesses comply with regulations by interpreting diverse legal frameworks across jurisdictions. These AI tools will involve sophisticated semantic analysis to understand the nuances of regulatory texts, court decisions, interpretive letters, and other relevant regulatory publications. This technology can dynamically update regulatory databases and algorithms in real time as new regulations are passed, enabling businesses to quickly adapt to regulatory changes. Implementing such AI-powered regulatory tools will enable firms to comply with local and international regulations more efficiently and cost-effectively than ever before, significantly reducing their risk of penalties and legal challenges. Generative AI models are also valuable tools for Web3 and virtual asset service providers (VASPs), accelerating manual tasks such as developing white papers and charters, and creating chatbots for customer service. Other emerging AI tools can help expedite the process of keeping disclosures up-to-date and compliant, as well as ensuring that communications and marketing materials remain within permitted regulatory boundaries. 
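As a simplified illustration of the liquidity early-warning idea described above under Financial Risk Assessment, the sketch below flags time steps where a pool's token reserve falls sharply below its recent trailing average. The window, threshold, and reserve figures are invented for illustration; a production system would use richer features and learned models rather than a fixed rule:

```python
from collections import deque
from statistics import mean

def liquidity_alerts(reserves, window=5, drop_threshold=0.30):
    """Flag time steps where the reserve falls more than `drop_threshold`
    below the trailing average of the previous `window` observations."""
    history = deque(maxlen=window)
    alerts = []
    for t, level in enumerate(reserves):
        if len(history) == window:
            baseline = mean(history)
            if level < baseline * (1 - drop_threshold):
                alerts.append((t, level, round(baseline, 1)))
        history.append(level)
    return alerts

# Synthetic hourly token reserves for a lending pool (illustrative numbers):
# a sudden drawdown appears at hours 6-7 and then recovers.
reserves = [1000, 990, 1005, 998, 1002, 995, 610, 600, 980, 990]
for t, level, baseline in liquidity_alerts(reserves):
    print(f"hour {t}: reserve {level} vs trailing avg {baseline} -> alert")
```

Run on the synthetic series above, the monitor raises alerts for the two drawdown hours, the kind of early signal that could prompt closer scrutiny before a liquidity shortage becomes severe.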
Such generative AI tools point to a potential industry shift toward greater efficiency and stronger regulatory compliance. Smart Contract Auditing. Smart contract auditing leverages generative AI to dissect and analyze the logic and functionality of smart contracts across multiple platforms and programming languages. Advanced large language models (LLMs) can facilitate detailed scrutiny of complex code logic to identify inconsistencies, vulnerabilities, and compliance issues with existing legal frameworks. These AI systems can learn from past audits to improve their diagnostic accuracy, providing strong support for developers and regulators in verifying the security and legal compliance of smart contracts. The next section further expands on pilot projects exploring such applications. Market Sentiment Analysis. Generative AI can be used to analyze large amounts of unstructured data from social media, forums, and news outlets to assess public sentiment about market conditions or specific assets. By interpreting language and detecting changes in sentiment, such tools can predict potential market movements, providing alerts to traders and investors seeking to respond to market trends, as well as to regulators monitoring for market manipulation.
4.3 Challenges in AI Implementation
Deploying AI systems for regulatory oversight requires addressing a number of challenges to achieve effective and reliable outcomes. We examine key issues such as ethics and privacy, mitigating AI bias, and the need for greater transparency about model behavior. Addressing these challenges is crucial for building trust in the use of AI in regulatory processes, particularly in scenarios requiring supervisory action and judgment. Deploying AI in regulatory settings raises significant ethical and bias concerns that require careful attention. Ethical guidelines are crucial to ensuring that AI decisions, which can profoundly impact individuals' lives, remain fair and effective.
Inherent bias in training data or algorithms can lead to skewed outcomes that unfairly disadvantage certain groups, undermining the fairness and effectiveness of regulation. Clear disclosure of how data is used, processed, and shared is necessary to promote accountability and build trust among stakeholders. Furthermore, regulators who rely on AI to interpret the vast amounts of data submitted by their regulated entities should ensure that measures are in place to enable the AI to explain what data was used, and how, to draw its conclusions. A lack of transparency in data use and inadequate traceability of the decision-making process could raise questions about the reliability of decisions affecting regulated entities and strain their relationships with regulators. The vast amounts of data that AI systems require access to raise significant privacy concerns. These systems could inadvertently expose sensitive information or misuse data, leading to potential leaks or unauthorized access. The collection, storage, and processing of such data must be subject to strict data protection measures to safeguard individual privacy rights. In the regulatory realm, the integrity of AI responses is vulnerable to challenges posed by "prompt hacking." Users may intentionally or unintentionally provide misleading inputs, thereby influencing the model's decision-making and, in turn, the quality and reliability of its output. Addressing these vulnerabilities requires advanced real-time monitoring tools to analyze and mitigate potentially malicious prompts. Finally, the precision and fluency of AI-generated responses could foster overreliance among users; human oversight remains necessary to ensure the prudent use of AI capabilities.
4.4 Future Directions
The integration of advanced AI technologies is expected to influence the development, monitoring, and enforcement of future regulations.
We foresee potential advances in predictive analysis and decision-making, as well as emerging technologies that could transform regulatory activities. Advances in predictive analytics have the potential to reshape AI-driven approaches to regulation and supervision, enabling regulatory approaches that are not only proactive but also preventative—anticipating potential compliance issues and regulatory violations before they occur. Machine learning algorithms can be trained to foresee anomalies that precede fraudulent activity or regulatory violations, allowing decision-makers to address potential issues before they escalate and thereby improving the accuracy and timeliness of regulatory interventions. Technological innovations such as quantum computing and advanced neural networks have the potential to expand the analytical capabilities of AI systems, enabling them to process and interpret complex regulatory data at a higher level of sophistication. For example, quantum computing may process large-scale calculations at unprecedented speeds, facilitating more detailed and comprehensive assessments, while advanced neural networks can learn from more diverse and complex data sets, providing nuanced insights that were previously unattainable. Meanwhile, theoretical advances in AI ethics and governance are informing the development of frameworks to guide these technologies within recognized societal values and legal standards. As these technologies and frameworks evolve, they will help foster more effective, efficient, and equitable AI-driven regulatory tools.
5. ADGM's AI Innovation Pilots (Joint Effort with the National University of Singapore's AIDF)
Abu Dhabi Global Market (ADGM) and the Asian Institute of Digital Finance (NUS AIDF) at the National University of Singapore share a common goal of addressing the risks and regulatory challenges presented by the rapidly evolving Web3 landscape.
To this end, they have been conducting joint pilot projects since 2022 to investigate AI technologies that can be used to improve the security audit process for blockchain applications and virtual assets (VAs). These pilots utilize innovative AI techniques to analyze audit logs and review historical security events to identify patterns and provide insights into potential vulnerabilities. This section describes three pilots that demonstrate the potential of AI to advance the regulatory assessment of VAs and the institutions that provide them.
5.1 Pilot 1: AI-Based Smart Contract Suitability Assessment
5.1.1 Introduction
Smart contracts are a fundamental component of blockchain technology, enabling the secure and automated execution of agreements and transactions on decentralized platforms. Given their importance in blockchain applications, comprehensive assessment and validation of their codebases are necessary to ensure they operate as expected and meet regulatory standards. This section describes our first pilot project: an AI-powered smart contract suitability assessment platform.
5.1.2 Existing Solutions and Service Providers
Current smart contract verification practices combine manual assessments with advanced technical tools to identify potential vulnerabilities and improve efficiency. Leading service providers, including CertiK, Trail of Bits, Halborn, and Hacken, employ a combination of static and dynamic analysis, as well as human-led formal verification, to assess and secure smart contracts against cyberattacks and performance issues. As Web3 technologies enter regulated industries, the paradigm for smart contract verification urgently needs to expand. Beyond identifying technical vulnerabilities, when smart contracts are used to automate regulated activities, their audits should also include compliance checks with relevant regulatory requirements.
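To make the idea of checking a contract against its stated claims concrete, the toy sketch below compares a token cap stated in white-paper prose with a constant in hypothetical contract source. The regexes, names, and figures are illustrative stand-ins; the pilot itself relies on LLM-based evidence extraction and static analysis rather than keyword matching:

```python
import re

def stated_max_supply(whitepaper: str):
    """Pull a claimed maximum token supply out of white-paper prose."""
    m = re.search(r"maximum supply of ([\d,]+)", whitepaper)
    return int(m.group(1).replace(",", "")) if m else None

def coded_max_supply(solidity_src: str):
    """Pull a MAX_SUPPLY constant out of (hypothetical) contract source."""
    m = re.search(r"MAX_SUPPLY\s*=\s*([\d_]+)", solidity_src)
    return int(m.group(1).replace("_", "")) if m else None

# Illustrative inputs: a one-line white-paper claim and a contract constant.
whitepaper = "The token has a maximum supply of 100,000,000 units."
contract = "uint256 public constant MAX_SUPPLY = 100_000_000;"

claimed, implemented = stated_max_supply(whitepaper), coded_max_supply(contract)
status = "consistent" if claimed == implemented else "FLAG for review"
print(f"white paper: {claimed}, contract: {implemented} -> {status}")
```

The value of such a check is that an inconsistency surfaces as objective evidence a supervisor can act on, rather than a judgment buried in a manual review.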
5.1.3 AI-Driven Evaluation

This pilot uses two methods to analyze the consistency between smart contract code and VA white papers.

The LLM-Based Validator Method uses a proprietary AI model to analyze the alignment between smart contract code and its corresponding VA white paper. Training data preparation begins by extracting clauses and specifications from widely used smart contract code repositories and categorizing them by project type to form the knowledge base required for targeted analysis. A Large Language Model (LLM) is then used to extract evidence from the smart contract code and its white paper to verify whether the objectives stated in the white paper are achieved in the code. The model verifies each item using a question-and-answer (Q&A) approach (Figure 1) and reviews the white paper content against the codebase. It also performs commonly accepted industry technical checks, such as static code analysis, to identify potential vulnerabilities, and compares implementation details against industry practices and relevant standards for consistency. These verifications help ensure that the smart contract executes as expected and meets the operational and compliance standards set out in the white paper.

The Code Generation Method uses AI to generate code snippets based on the goals and functionality described in the VA white paper (for example, issuing a token with a maximum supply of 100 million). These generated snippets are then compared with the original smart contract code: the original code and the AI-generated code are run separately under the same input conditions and their outputs compared. The goal is to verify functional consistency, even though the code structure or style may differ. If the outputs match, the original code is confirmed to have implemented the white paper specifications.
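The run-and-compare step can be sketched as a toy differential test. The two mint functions below are hypothetical stand-ins for the deployed contract and the AI-generated reference (here checking the example white-paper rule that supply never exceeds 100 million); in the pilot, the implementations would execute on a test chain rather than as Python functions.

```python
# Toy differential test: run two implementations of the same white-paper rule
# on identical inputs and compare their outcomes (value or error).
MAX_SUPPLY = 100_000_000

def original_mint(total_supply: int, amount: int) -> int:
    """Stand-in for the deployed contract's mint logic."""
    if total_supply + amount > MAX_SUPPLY:
        raise ValueError("max supply exceeded")
    return total_supply + amount

def generated_mint(total_supply: int, amount: int) -> int:
    """Stand-in for the AI-generated reference implementation."""
    new_supply = total_supply + amount
    if new_supply > MAX_SUPPLY:
        raise ValueError("max supply exceeded")
    return new_supply

def outputs_match(inputs) -> bool:
    """True if both implementations agree (same value, or both error) everywhere."""
    for supply, amount in inputs:
        try:
            a = original_mint(supply, amount)
        except ValueError:
            a = "error"
        try:
            b = generated_mint(supply, amount)
        except ValueError:
            b = "error"
        if a != b:
            return False
    return True

cases = [(0, 1), (99_999_999, 1), (99_999_999, 2)]
print(outputs_match(cases))  # True: the implementations agree on all cases
```

The value of the technique is that agreement is checked on observable behavior, so the two codebases may differ freely in structure and style.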
If the outputs differ, the code is reviewed further to identify the source of the inconsistency and, if necessary, adjusted or reassessed. Optionally, a direct comparison test can be conducted between the AI-generated code and the original contract code (Figure 2). Together, these two approaches form a verification framework for evaluating smart contract implementations, identifying errors and omissions, and ensuring that the contract operates as intended and as publicly stated. Such insights can give regulators valuable objective evidence with which to verify project claims.

5.2 Pilot 2: Audit Report Evaluation

5.2.1 Introduction

To ensure the security and reliability of the business logic carried by smart contracts, project owners typically hire security audit firms to evaluate the code and publish audit reports. However, reviewing such reports often requires specialized knowledge in computer science and security that regulators may not possess. To address this knowledge gap, this pilot tested an LLM-based framework for assessing the adequacy of such security audit reports.

5.2.2 Existing Solutions and Service Providers

Traditionally, security audits rely on automated tools, manual assessment, and expert analysis: a time-consuming process whose conclusions can be subjective. Audits typically require auditors to examine codebases, configurations, and operational procedures to identify vulnerabilities and weaknesses. Because assessments are primarily manual, the workload is intensive, and the reliance on human judgment introduces the risk of error and subjectivity, leading to inconsistent interpretations of findings and risks among different auditors. The growing complexity and scale of Web3 projects place ever higher demands on existing audit methods.
Rapid technological development, the pronounced open-source nature of the field, and the surge in the number of decentralized applications (dApps) place constant time pressure on auditors, potentially limiting the depth of their analysis. Security audits often provide only a "snapshot" at a specific point in time, potentially overlooking threats and vulnerabilities that emerge after the audit. Another significant challenge is technical complexity: reports are often highly technical and dense in detail, making it difficult for the public and regulators to fully understand and interpret their conclusions.

5.2.3 AI-Based Assisted Assessment of Security Audit Reports

This assessment tool uses AI to measure the quality of audit reports. The pilot first uses optical character recognition (OCR) and customized information-retrieval technology to collect and organize the data required for the assessment, including elements such as the audit scope, assessment methodology, audit tools, and problem descriptions in the report. The reports are then processed with an off-the-shelf LLM to generate embeddings, represented as vectors as shown in Figure 3. This step applies advanced natural language processing (NLP) techniques, such as entity recognition and dependency parsing based on a customized library, to understand and categorize the report content. After data processing, the tool compares the stored vectors against a predefined knowledge set (the database depicted in Figure 3). The knowledge set covers five categories: (1) content quality and coverage, (2) vulnerability identification and prioritization, (3) mitigation strategies and report impact, (4) presentation quality and audit methodology, and (5) report relevance and accessibility. The evaluation is both fast and comprehensive, typically taking about five minutes per report. Finally, the LLM is called again to generate the evaluation report.
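The vector comparison and weighted scoring can be sketched as follows. The reference vectors, weights, and three-dimensional embeddings are made up for illustration; the pilot's actual knowledge set, embedding model, and weighting scheme are not public.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical reference vector and weight per scoring category.
references = {
    "content quality and coverage": ([0.9, 0.1, 0.2], 0.30),
    "vulnerability identification and prioritization": ([0.2, 0.8, 0.1], 0.25),
    "mitigation strategies and report impact": ([0.1, 0.6, 0.6], 0.20),
    "presentation quality and audit methodology": ([0.4, 0.4, 0.5], 0.15),
    "report relevance and accessibility": ([0.5, 0.2, 0.7], 0.10),
}

def score_report(report_vec):
    """Per-category similarities and their weighted sum, scaled to 0-100."""
    breakdown = {name: cosine(report_vec, ref) for name, (ref, _) in references.items()}
    total = sum(breakdown[name] * w for name, (_, w) in references.items())
    return round(total * 100, 1), breakdown

total, breakdown = score_report([0.7, 0.5, 0.3])
print(total, breakdown)
```

The per-category similarities in `breakdown` would then be handed back to the LLM as intermediate results for narrative explanation.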
The generated report contains a total score, obtained as a weighted sum of the per-category sub-evaluations above, that reflects the overall quality of the security audit report and points out areas of strength and areas for improvement. The report also provides a detailed description, generated by the LLM from the intermediate evaluation results of each category, explaining that category's strengths and concerns. A schematic of the process is shown in Figure 3.

5.3 Pilot 3: AI-Based Smart Due Diligence

5.3.1 Introduction

Conducting initial and ongoing due diligence on Web3 projects is crucial for regulators during licensing and ongoing oversight. Virtual Asset Service Providers (VASPs), acting as virtual asset intermediaries, are also required to conduct their own due diligence on relevant blockchain projects and their tokens before offering virtual assets (VAs) to clients. Web3 due diligence presents unique challenges arising from the decentralized nature of blockchains, pseudonymous identities, and novel organizational structures. Identifying and verifying real identities, understanding complex technical infrastructure, and navigating diverse organizational structures and evolving legal frameworks all complicate the process. At the same time, publicly available data in the Web3 space can enhance visibility into activity: on-chain data provides verifiable, real-time insight into transactions and smart contract operations, while off-chain qualitative information (such as team qualifications, market sentiment, forum and DAO discussions, and official social media channels) complements the assessment. Despite the availability of this data, however, ingesting such a vast amount of highly technical information remains challenging and requires sophisticated processing and analysis tools.
The introduction of artificial intelligence (AI) can streamline the due diligence process, enabling regulators and VASPs to review and evaluate Web3 projects more efficiently.

5.3.2 Existing Solutions and Service Providers

To address complex data analysis and due diligence needs, numerous service providers have emerged in the Web3 and VASP sectors. These companies offer tools and services that streamline compliance processes, verify identities, and address some regulatory obligations across various jurisdictions. For example, Chainalysis and Elliptic provide blockchain analysis tools that help trace the origin of cryptoasset transactions and support Anti-Money Laundering (AML) and Countering the Financing of Terrorism (CFT) compliance. Other companies offer digital identity verification solutions that aim to identify users in decentralized environments. While these tools are effective in specific areas, they do not yet cover the full spectrum of oversight required by regulators and VASPs. This pilot aims to improve the overall due diligence process for both.

5.3.3 AI-Assisted Due Diligence

This pilot incorporates AI technologies in several areas to improve due diligence practices by regulators and VASPs.

Generative AI supports onboarding. When projects apply for licenses from regulators, generative AI customizes the onboarding process to the specific focus of the Web3 project. The model developed in this pilot automatically generates personalized forms and lists the required submission documents. This customization avoids a one-size-fits-all process and reduces submission requirements unrelated to the applicant's specific business.

Generative AI reviews social media. The pilot uses AI tools to monitor and analyze the social media presence of companies and their key personnel, identifying signs of inconsistent public disclosures, reputational risks, and misleading or deceptive statements.
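This kind of red-flag screening can be illustrated with a deliberately simple keyword-based stand-in; the phrase list and sample feed are invented, and the pilot's actual model reasons over context and sentiment rather than fixed keywords.

```python
# Toy stand-in for the pilot's context-and-sentiment model: flag posts that
# contain language regulators commonly treat as red flags.
RED_FLAGS = ["guaranteed returns", "risk-free", "cannot lose", "insider"]

def flag_posts(posts: list[str]) -> list[dict]:
    """Return each post that matches a red-flag phrase, with the matches."""
    concerns = []
    for post in posts:
        lowered = post.lower()
        hits = [phrase for phrase in RED_FLAGS if phrase in lowered]
        if hits:
            concerns.append({"post": post, "matched": hits})
    return concerns

feed = [
    "We shipped v2 of the staking module.",
    "Join now for guaranteed returns on every deposit!",
]
print(flag_posts(feed))  # only the second post is flagged
```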
The model understands the context and sentiment of the content and surfaces potential areas of concern for regulators to consider.

The Regulatory Q&A Agent allows regulators to run search-based queries on Web3 project data, including company self-reported documents, smart contract details, official announcements, and disclosures. The agent provides on-demand, accessible insights to non-technical personnel based on the most current data at the time of the query. All responses are categorized and sourced, with links to the original data. The system is continuously updated with new data and supports regulators in integrating additional data sources.

By applying AI to onboarding, risk identification, and real-time supervisory insights, this pilot effectively replaces repetitive and redundant manual tasks. Given that many regulators are actively exploring this type of innovation, the project has the potential for wider deployment and further evolution.

6. Conclusion and Future Work

6.1 Conclusion

The rapid evolution of Web3 and VA activities is paving the way for innovation while also creating new and complex regulatory challenges. Integrating AI into supervisory processes can enhance regulators' toolkits to better monitor, predict, and mitigate risks arising from the Web3 and VA sectors. The pilot projects described in this paper provide practical examples of AI in this area, demonstrating its real-world impact on industry compliance practices.

6.2 Key Takeaways

The Transformative Potential of Artificial Intelligence in Web3 SupTech and RegTech

· AI-driven solutions can significantly improve the effectiveness of Web3 regulation, including real-time risk analysis, proactive vulnerability detection, and more efficient compliance monitoring.
· By applying a variety of AI techniques (such as machine learning, natural language processing (NLP), generative AI, and autonomous agents), regulators can better maintain oversight, optimize reporting processes, detect anomalies, and understand sentiment and public opinion in the decentralized ecosystem.
· Integrating AI into Web3 regulation can simplify cross-jurisdictional complexity, adapt to 24/7 operations, and make compliance frameworks more accessible, flexible, and innovative.
Challenges in AI Implementation
· Ethics and privacy, model bias, and the need for transparency and traceability are key issues.
· Human oversight is essential to reduce over-reliance on AI and ensure the reliability of applications.
Practical Applications Demonstrated in the Pilots
· AI-enhanced smart contract evaluation helps ensure compliance with white papers and regulatory standards.
· Automated evaluation of audit reports and due diligence processes can significantly improve efficiency.
· Generative AI tools can support corporate onboarding processes and social media analysis, efficiently providing useful insights to regulators.
Future Directions
· Advances in predictive analytics, adaptive AI systems, and global collaboration will drive more effective regulatory practices.
· Establishing an AI governance framework and ethical standards will be key to maintaining trust and accountability.
6.3 Future Work
Looking forward, several key directions will drive the continued evolution and integration of artificial intelligence (AI) in regulatory processes:
· Advanced AI Models
With advances in AI technology, model capabilities and output quality are expected to improve further, at lower cost and with more efficient use of computing resources.
· Enhanced Predictive Analytics
Further developments in predictive analytics will support more accurate early warnings of risks and compliance violations. Leveraging larger and more specialized datasets, as well as more sophisticated algorithms, AI systems can identify issues before they occur, enabling early intervention.
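As a minimal illustration of such early-warning analytics, the sketch below flags outliers in a series of daily transaction volumes using z-scores; the data and threshold are invented, and production systems would use richer features and learned models rather than a univariate rule.

```python
import statistics

def anomaly_flags(values, threshold=2.5):
    """Flag points whose z-score exceeds the threshold, a toy stand-in
    for the ML early-warning models discussed above."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return [False] * len(values)
    return [abs(v - mean) / stdev > threshold for v in values]

# Daily transaction volumes with one suspicious spike (illustrative data).
volumes = [100, 102, 98, 101, 99, 103, 100, 5000]
print(anomaly_flags(volumes))  # only the final spike is flagged
```

A supervisory system would raise the flagged days for human review rather than act on them automatically, consistent with the human-oversight point above.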
· Advanced AI Governance and Ethics
To ensure that AI applications in regulatory contexts are ethical, transparent, and bias-mitigating, a systematic AI governance framework is imperative. Developing AI ethics standards and guidelines will help build trust and accountability in AI-based regulatory systems.
· Adaptive and Explainable AI
Future AI systems should be adaptive, able to continuously learn and evolve as the regulatory environment and Web3 activities change. Improving the explainability of algorithms and decisions will make regulatory decisions more transparent and understandable to the stakeholders affected by them.
· Global Collaboration
Establishing and sharing best practices across jurisdictions will promote more consistent and effective regulation of the global Web3 ecosystem.