
Author: Hilary J. Allen
Source: American University
Ten years after the UK Financial Conduct Authority launched its fintech regulatory sandbox, and despite the global adoption of the model, robust empirical evidence on the effectiveness of its core premise—a combination of regulatory relief and guidance—remains scarce. Existing evidence shows only that sandboxes benefit participating firms; it does not demonstrate their impact on the regulatory system as a whole, or any widespread benefits from the innovation they foster. Two concerns raised at the inception of sandboxes—the weakening of regulatory effectiveness and their questionable value in promoting regulatory learning—have not been allayed in the decade since, and in some cases have intensified. While design optimizations can mitigate some of these problems, the fundamental challenge is to re-examine the sandbox model itself, particularly as it is now being promoted as a way to foster innovation in generative AI. Given that generative AI struggles to scale beyond its inherent limitations and has already had significant negative impacts on privacy, intellectual property, and the environment, hastily adopting sandbox mechanisms that weaken legal protections in order to promote AI is too risky. The Fintech Research Institute of Renmin University of China has compiled the core research findings.
Regulators around the world and across various sectors are actively exploring regulatory pathways suited to technological innovation. In 2015, the UK's Financial Conduct Authority (FCA) announced the establishment of a fintech regulatory sandbox, and over the following decade the model spread rapidly around the globe. The core design of the regulatory sandbox is to allow selected companies to conduct limited product pilots in an environment with reduced regulatory constraints and enforcement risk. Its objectives are twofold: first, to lower barriers to entry that could hinder fintech innovation; second, to give regulators an opportunity to learn about emerging technologies and adjust their regulatory strategies in the course of overseeing the sandbox. In recent years, policymakers around the world have also expressed strong interest in using sandbox mechanisms to promote artificial intelligence innovation and establish new AI regulatory frameworks. However, a decade of fintech sandbox practice offers insufficient evidence to support the sandbox's application as a policy tool in the AI sector. Despite the widespread adoption of regulatory sandboxes, empirical evidence assessing their effectiveness in achieving their objectives remains scarce. Existing empirical research focuses on innovation indicators: participating firms' ability to raise capital, the number of patents acquired, and so on. Such data cannot reveal the impact of sandboxes on the overall regulatory landscape for fintech, nor can they show whether the innovations fostered by sandboxes benefit anyone beyond the innovators themselves. This lack of evidence matters, because the prospects for fintech sandboxes achieving their objectives are bleak. First, it is unclear whether fintech innovations generate sufficient societal benefits to justify relaxing key regulatory provisions designed to protect consumers and the financial system.
Second, the knowledge regulators gain from these experiments is significantly limited by the unrepresentative sample of sandbox participants and by a setting that is prone to regulatory capture. The channels through which regulators can share this knowledge are also constrained. In 2016, the FCA described its first regulatory sandbox as "a 'safe space' where firms can test innovative products, services, business models, and delivery mechanisms, while ensuring adequate consumer protection." Over the following decade, FCA sandbox participants primarily focused on leveraging technology to develop new credit, investment, banking, and payment products. Numerous jurisdictions around the world have followed suit and established fintech regulatory sandboxes. Although sandboxes designed by different regulators vary significantly in structure and objectives, their core objectives generally include the following elements:
1. Support fintech companies that seek to provide innovative products, services or business models;
2. Build a more efficient financial services system with better risk management;
3. Clarify the interaction between emerging technologies and business models and the regulatory framework, and identify potential market entry barriers;
4. Promote effective competition that benefits consumers;
5. Enhance the accessibility of financial services.
Regulatory sandboxes are widely viewed as a win-win-win: helping innovators access funding and accelerate product launches; giving consumers greater access to fintech products; and educating regulators about fintech products and their compatibility with regulatory frameworks (not to mention cultivating a jurisdiction's reputation as "innovation-friendly"). Since its inception by the FCA, the regulatory sandbox concept has expanded beyond fintech to encompass diverse scenarios such as autonomous driving and legal practice. A 2023 report from the Organisation for Economic Co-operation and Development (OECD) indicated that approximately 100 sandbox initiatives were in place worldwide. In the field of artificial intelligence in particular, calls are growing for sandboxes that facilitate AI experimentation by suspending regulation. Proponents cite multiple advantages: 1. Promoting Innovation: AI technology is evolving rapidly, making it difficult for the regulatory environment to keep pace. Sandboxes mitigate compliance risks in technology development within a controlled environment. Practice has shown that they can significantly shorten the time-to-market for innovative products and enhance legal certainty for businesses, thereby stimulating innovation. 2. Improving Response Speed: Current legislative processes, such as the EU's AI Act, are slow. The act, proposed in April 2021, remained under review for years and was not expected to apply in full before 2025/26. Moreover, once enacted, such legislation is extremely difficult to amend to accommodate technological developments. To some extent, this legislation, drafted before the emergence of generative AI tools like ChatGPT, is already outdated. In contrast, sandboxes are flexible and responsive tools that can be quickly adjusted to address new challenges. 3. Strengthening Consumer Protection: AI systems can potentially harm consumers.
Sandboxes ensure technological safety by testing these systems in a controlled environment, identifying and mitigating potential risks, and thus maintaining consumer confidence in emerging technologies. 4. Promoting Collaborative Governance: Sandboxes bring together regulators, businesses, and other stakeholders to jointly advance the development of AI technology, balancing the need for innovation with public safety and fostering more effective regulatory rules. This two-way learning between regulators and regulated entities is presented as a win-win, enhancing trust in the technology and accelerating its adoption. In practice, some jurisdictions have already launched AI sandbox testing. Fintech sandbox operators in countries like the UK and Singapore have begun exploring the financial applications of AI. (At least one bill has been proposed in the US to establish a sandbox for financial institutions to conduct AI experiments.) Dedicated AI sandboxes, independent of financial regulation, have also emerged: the UK, Norway, and other jurisdictions have established AI sandboxes focused on privacy regulation. With the EU's AI Act requiring member states to operate at least one AI regulatory sandbox, or participate in a multinational joint sandbox, by August 2, 2026, such mechanisms are expected to proliferate within the EU in the coming years. The act foresees the possibility of cross-border AI sandboxes. Given the multi-jurisdictional needs of AI companies and the cross-sector nature of AI technology, sandboxes within a single jurisdiction will also require coordination among multiple regulatory agencies. To address the cross-border nature of financial services, the Global Financial Innovation Network (GFIN) was established in 2019.
Its "Cross-Border Testing (CBT)" mechanism, also known as the "Global Sandbox," aims to "create an environment that allows companies to continuously or simultaneously test new technologies, products, or business models across multiple jurisdictions." In October 2020, GFIN launched its first round of cross-border testing applications, requiring applicants to meet the entry criteria of all target jurisdictions. Implementation fell short of expectations: only 9 of 38 applications passed the assessment, and ultimately only 2 companies entered the real-world testing phase. The mechanism has yet to launch a second round, casting doubt on the viability of cross-border sandboxes. But is the existing empirical evidence sufficient? The FCA published its first regulatory sandbox "report card" in 2017, a self-assessment of its initial experiments. The report credits the sandbox with:
1. Shortening the time to market for innovative products and potentially reducing costs;
2. Broadening innovators' access to financing by reducing regulatory uncertainty;
3. Enabling more products to enter testing and potentially reach the market;
4. Promoting collaboration between regulators and innovators to embed consumer protection mechanisms into new products and services.
The first three objectives directly benefit innovators, while the last focuses on the public interest; the FCA's satisfaction with the fourth rests partly on "working with companies to develop customized testing safeguards." To date, independent empirical research on regulatory sandboxes remains insufficient.
A major study published in 2024 by economists at the Bank for International Settlements (BIS) noted that "despite widespread adoption and policy attention, there is a lack of systematic empirical evidence on whether regulatory sandboxes actually help fintech firms raise capital, innovate, or establish viable business models." Analyzing capital-raising, survival rates, and patent data for UK sandbox firms, the BIS confirmed that "sandboxes achieve one of their core objectives: helping emerging fintech firms raise capital and spur innovation." Such research, like the FCA's self-assessment, focuses on the impact of sandboxes on innovators, demonstrating that participation in sandboxes is beneficial to businesses. This conclusion, however, raises concerns about government "winner-picking": businesses not selected may face a more challenging innovation environment. While the BIS researchers acknowledge that the financing advantages enjoyed by sandbox participants "fit with the logic of sandboxes lowering information barriers to investment and financing and the uncertainty costs of compliance," they do not rule out another explanation: that sandbox admission itself may serve as a form of official endorsement that facilitates corporate financing. More importantly, the limited research available only scratches the surface of the question of whether regulatory sandboxes are beneficial as policy. The BIS authors emphasize that "the research results do not necessarily prove that sandboxes explicitly enhance social welfare. Sandbox operations often require public funding, and facilitating corporate financing is only one objective—improving consumer welfare is as important as maintaining financial stability." Furthermore, the BIS study rests on the assumption that sandboxes enable regulators to assess the social welfare impact of products before they are launched.
Recent research by law professor Doug Sarro, based on the cryptocurrency sandbox practices of Canadian securities regulators, suggests that sandboxes continue to affect consumer welfare and financial stability even after a product is released to the public. Sarro found that, despite the general expectation that firms would become fully compliant upon "graduation," Canada's provincial securities regulators "not only oversee trading platforms within the sandbox, but also regulate them long after they (nominally) exit the sandbox." He further questioned the effectiveness of consumer protection measures tailored for the sandbox: regulators often fail to anticipate emerging risks on trading platforms, taking action only when risks resemble those in the traditional securities sector or when they have already caused significant consumer harm and raised public concern. A 2019 report by the UN Secretary-General's Special Advocate for Inclusive Finance for Development (UNSGSA) and the Cambridge Centre for Alternative Finance (CCAF) raised further grounds for skepticism, with the following core conclusions: early experience with regulatory sandboxes suggests that they are neither necessary nor sufficient for promoting financial inclusion. While sandboxes have advantages, they are complex to establish and expensive to operate. Practice has shown that most regulatory issues raised in sandbox testing can be effectively addressed without a live testing environment, and similar results can be achieved more cost-effectively through tools such as innovation offices. In other words, the resources devoted to fintech sandboxes might be deployed more effectively elsewhere (the report notes that regulators in many countries were surprised by how resource-intensive sandboxes are).
The primary reason for this resource-intensive nature is that regulators must provide customized guidance to participants—this "regulatory support" is costly, but its absence can lead to concerns about sandbox effectiveness (as assessed from the perspective of participating companies). These findings inevitably raise deeper questions: Is the regulatory exemption provided by sandboxes truly necessary to promote fintech innovation? Simply providing guidance may be sufficient to spur innovation (and most financial regulators have established "innovation centers" to provide such services). But the more fundamental question is: Is leveraging public resources to foster private sector innovation in the public interest? Previous research has revealed multiple pitfalls of this model: regulators' selection of sandbox companies effectively "picks winners," undermining regulatory fairness; sandbox operation and maintenance costs often exceed expectations; benefits accrue disproportionately to innovators rather than the public; and as sandboxes spread globally, the marginal benefits of "innovation-friendly" policy signals continue to diminish. Recent research has focused on a core contradiction: fintech sandboxes require the suspension of key regulations intended to protect consumers and the financial system. Sandbox proponents implicitly accept the potential for increased public harms, basing their theory on two assumptions: first, innovation will benefit the public through increased efficiency and competition; and second, sandboxes will help regulators understand the market performance of new technologies, thereby optimizing long-term regulation. However, this section will demonstrate that these assumptions do not hold up under scrutiny in fintech, and are equally difficult to apply to artificial intelligence. It's important to note that innovation does not necessarily benefit society as a whole. 
Innovation is widely treated as a necessary condition for improving efficiency and competition, but the specific meanings of "efficiency" and "competition" remain contested and context-dependent, and many interpretations are detrimental to overall social welfare. Furthermore, when financial regulators transform themselves into cheerleaders and sponsors of their chosen innovations, their objectivity and willingness to share knowledge are undermined, even as their understanding is already biased by the selectivity of sandbox participation. A. Sandboxes as a Field of Regulatory Learning Participation in sandboxes is purely voluntary, so only innovative entities that actively apply can be observed. This creates a double blind spot: regulators learn nothing about fully compliant companies that don't need the sandbox, nor can they identify entities that believe they are not subject to existing regulations at all. Even among applicants, the selection criteria are often unclear, and a large number of applications are rejected without clear rationale. The knowledge regulators gain from sandboxes is therefore inherently biased. While learning from biased samples can still be valuable, sandboxes should not be considered the only or best way to acquire such knowledge. As UN agencies have observed, regulators can learn about new technologies from startups through informal channels; regulatory relief is by no means a prerequisite for understanding fintech or artificial intelligence. Another flaw in sandbox-generated regulatory knowledge is that the entry mechanism fosters an unusually close relationship between government and business, exacerbating the risk of "regulatory capture." Simply put, regulatory capture refers to regulators prioritizing industry interests over the public interest, whether through explicit incentives (such as corruption) or implicit ones.
A typical example of implicit capture occurs when regulators source information primarily from the industry itself (without consulting independent researchers or consumer groups), so that the industry's perspective inevitably permeates their understanding. This process is known as "cognitive capture," and the apparent technological complexity of fintech business models is particularly conducive to it. If regulators fail to establish a baseline of technical knowledge through hiring or internal training, their ability to critically evaluate industry claims will be hampered. This issue is equally prominent in AI regulation, where global AI companies actively lobby regulators with narratives such as "regulation slows innovation" and "overregulation drives entrepreneurs abroad." In summary, whether sandboxes truly enhance regulators' ability to perform their duties is questionable. I have previously argued that "regulatory sandboxes may occasionally assist financial regulators in fulfilling their risk management responsibilities, but their popularity stems from the superficial assumption that catering to private sector fintech innovations is necessarily in society's best interest." The following examines the rationale for this assumption. B. Innovation as a Regulatory Objective As law professor Deirdre Ahern has argued, the concept of regulatory sandboxes is based on a "public interest role of regulators in improving consumer choice, prices, and efficiency"—a fundamental departure from "risk-focused" regulatory logic. However, there is ample reason to question whether the "competition" and "efficiency" fostered by fintech sandboxes truly benefit the public; abandoning risk control may well prove to be a miscalculation. There are growing signs that the same doubts apply to the public benefits of AI innovation.
Against this backdrop, the rationale for policies that weaken public protection mechanisms in order to accommodate innovation is questionable—and that is precisely the inherent logic of sandbox design. 1. Limitations of Fintech and Generative AI Innovation Policies that promote innovation primarily benefit the innovators themselves; the theory assumes that innovation will also generate spillover benefits for others. In reality, not all innovation is win-win, and this assumption may not hold. For example, Doug Sarro's research on Canada's cryptocurrency sandbox found that "regulatory practice at least partially supports concerns that sandboxes may prioritize innovators over consumers." Previous research by me and other scholars has also revealed that many fintech products offer little substantive technological innovation beyond sleek user interfaces. Some even engage in harmful "predatory inclusion"—ostensibly serving previously excluded marginalized groups while in reality exploiting them systematically. Fintech profits often stem not from technological superiority but from circumventing required consumer protection regulations in the name of "innovation." Growing evidence supports the same skepticism about generative AI. (AI, broadly defined, encompasses a wide range of technologies; generative AI specifically refers to tools that generate new content by identifying patterns in massive amounts of training data.) Since 2024, observers have begun sharply questioning the actual value of generative AI. For example, Jim Covello, Goldman Sachs' head of equity research and a veteran of the tech industry since the dot-com bubble, has noted that the generative AI developed in Silicon Valley lacks clear application scenarios. He further warned: "Never before in history has a technology been predicted to be worth a trillion dollars at its introduction...
In the past, technological iterations replaced expensive solutions with cheaper ones; now, expensive technology is attempting to replace low-cost labor. This logic is fundamentally untenable." A core flaw of this form of AI is its tendency to hallucinate: models frequently generate seemingly authoritative but factually false responses. Well-known errors include a Google model suggesting that adding Elmer's glue to pizza sauce would help the cheese stick, and an OpenAI model failing to correctly count the number of "r"s in the word "strawberry." AI also often fabricates sources to support its conclusions: a 2025 BBC study found that 13% of quotes that AI assistants attributed to BBC articles were either fabricated or altered from the original text. Companies deploying such models without human oversight can incur heavy costs, as Air Canada's experience shows: after its chatbot incorrectly answered a bereavement-fare inquiry, the airline argued that the chatbot was itself responsible for its answers, but a civil tribunal ordered the company to compensate the customer. While introducing "human in the loop" oversight can reduce the risk of error, it also erodes the cost advantages AI is meant to deliver, and detecting and correcting hallucinated output requires significant expertise. A 2024 study by the freelancing platform Upwork found that 96% of executives expect AI tools to improve corporate productivity (39% mandate their use and 46% encourage it), yet nearly 47% of employees using AI admitted they "didn't understand how to achieve the efficiency targets their employers demand." Given these limitations, the limited commercial application of generative AI is unsurprising. The widespread resistance of businesses to such tools may even be a blessing in disguise: recent research reveals a significant negative correlation between reliance on AI tools and critical thinking skills.
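The letter-counting blunder underlines a broader point: many tasks that generative models get wrong admit an exact, deterministic check, which is what makes deploying them without verification so risky. A minimal illustrative sketch in Python (not from the original article; the "claimed" value is a hypothetical wrong model answer):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

# Hypothetical wrong model answer for the number of "r"s in "strawberry".
claimed = 2
actual = count_letter("strawberry", "r")

print(actual)             # 3
print(claimed == actual)  # False: the claim fails a trivial deterministic check
```

The point is not that letter counting matters commercially, but that a deployment pipeline which skips such cheap verification steps inherits the model's error rate wholesale.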
While AI is touted as a tool that "liberates humans from basic tasks to focus on high-level creativity," in reality high-level capabilities often grow out of the refinement of those fundamental practices. 2. The Deeper Crisis of Innovation-Driven Regulation Even setting aside specific sectors, legitimate questions remain about the sandbox as a regulatory tool. Policymakers must be particularly wary of the distorted incentives sandboxes foster. Ideally, legal and regulatory institutions should send a clear signal to industry that compliant innovation safeguards the public interest; sandboxes, however, can be read as sacrificing legal authority to facilitate innovation. "Competition" and "efficiency" are essentially Rorschach tests reflecting regulators' values: "efficiency," for example, carries different value judgments across sectors and cannot serve as a neutral, unified regulatory objective. Efficiency and competition objectives offer regulators little clarity: when evaluating sandboxes, regulators must ask, "From whose perspective are we judging competition and efficiency—participating companies, the industry as a whole, or the public?" Rather than painstakingly constructing sandboxes to accommodate innovation, regulators should adopt proactive preventative strategies to curb the public harms of new technologies. Former Acting Comptroller of the Currency Michael Hsu proposed an "accommodate and tame" regulatory framework for fintech, a model that also applies to the regulation of technological innovation more broadly. Accommodative policies risk endorsing flawed technologies and artificially sustaining unviable business models; given that innovators generally lack a holistic understanding of their operating environment (as noted above), taming is often the preferable path.
Technocultural scholar Arati Wad notes regarding AI tools: experts in AI technology are far less able to assess its sociopolitical impact than professionals in the fields it purports to disrupt. Professional groups like doctors, teachers, social workers, and policymakers are not outsiders when discussing AI—they are precisely those best positioned to understand the potential misuse of automation in their fields. To be clear: while written regulations may sometimes need to evolve for the public good, caution is warranted when regulatory changes are made piecemeal and primarily benefit the small number of companies inside the sandbox. If regulators truly need to experiment with new strategies, numerous industry-wide tools were available long before sandboxes were created. In its assessment of fintech sandboxes, a UN agency emphasized that "proportionate or risk-based licensing can reduce compliance costs for startups and, unlike sandbox testing, encompass all market participants." While informal regulatory approaches may be effective when dealing with rapidly evolving technologies, they always come with costs—particularly a lack of public participation and transparency in regulatory decision-making. These costs are especially acute in the sandbox context: private companies have significant influence over regulatory terms, while affected groups remain unaware of those terms and are less able to object. When sandbox companies' products are technologically complex, regulators often defer to the companies' "technical authority," making it easier for the companies to dictate terms. Regulators' tendency to act as "cheerleaders" for sandbox companies leads to a continued weakening of regulatory standards. The Canadian case demonstrates that even after "graduation," cryptocurrency companies still fail to operate in compliance, because their profitability relies on regulatory arbitrage rather than technological innovation.
When temporary exemptions expire, regulators face a dilemma: force compliance, which could drive businesses to close, or make the exemptions permanent. Political and economic realities often force the latter: the employee and customer ecosystems that firms build up create networks of vested interests that make it difficult for regulators to tighten the rules. The result is fragmented regulation, with different standards applied to different businesses—an uneven playing field that contradicts the sandbox's original purpose of fostering comprehensive compliance. Policymakers must understand that once a company enters the sandbox, regulators are trapped in passive accommodation, forced to perpetuate public risks. The fundamental solution is a shift to the taming model: constraining the boundaries of innovation through a unified regulatory framework rather than sacrificing the public interest for technological development. C. The Governance Dilemma of Cross-Border Sandboxes The EU's AI Act contemplates cross-border sandbox mechanisms, which highlight the distinctive challenges of cross-border regulation, including the tension between companies' multi-jurisdictional operations and their reliance on smaller jurisdictions. Cross-border implementation faces profound obstacles—fragmented regulatory standards, high coordination costs, and conflicting policy signals—that further support legitimate skepticism about sandbox tools. The Global Financial Innovation Network (GFIN), established in 2019 with the goal of operating a cross-border fintech sandbox, has so far completed only one cross-border trial round, in which only two companies entered the real-world testing phase. A key factor in the low completion rate is the need for participants to meet the diverging regulatory requirements of different jurisdictions.
To reduce the cost of coordinating consensus across jurisdictions, GFIN employs a "lead regulator" mechanism, but admits that the lead regulator bears significant resource pressures—responsible for coordinating and managing 38 applications across 23 regulatory agencies, and devoting significant manpower and resources to ensuring that questions from businesses and regulators are promptly addressed and that the application process proceeds on time and in compliance. Improving the effectiveness of cross-border sandboxes would inevitably require the harmonization of legal standards, but cross-border coordination is a highly politicized process, often subject to manipulation by domestic interest groups. Any benefit of the sandbox's "policy signal" would also be diminished by harmonization—if all jurisdictions adopt uniform standards, no jurisdiction stands out as "innovation-friendly." Difficulties in allocating resources and responsibilities will persist, both in cross-border operations and in domestic cross-agency collaboration. Despite the sandbox's reputation for promoting new technologies, these resource coordination challenges are long-standing, and regulatory sandboxes offer no innovative solutions to them. V. Conclusion This article builds on my previous research in arguing that, in the fintech sector, regulators should prioritize preventing public risks over promoting efficiency and competition through private innovation. Emerging evidence suggests that this principle also applies to generative artificial intelligence—raising multiple concerns about the implementation of AI sandboxes. While sophisticated sandbox design can mitigate some of these concerns, we should not leap to technical solutions while bypassing the fundamental question: whether the regulatory sandbox is appropriate at all in a given context must be re-examined.
Society urgently needs a collective reckoning with the "Silicon Valley cult of innovation," and heightened vigilance against sandbox models (and the regulatory mindset they foster) should be a core component of this rethinking. After all, over a decade after the UK Financial Conduct Authority first introduced the regulatory sandbox, there is still little conclusive evidence that these resource-intensive regulatory tools actually enhance public welfare.