Author: Lucas Tcheyan, Galaxy associate researcher; Translation: 0xjs@金财经
The emergence of public blockchains is one of the most profound advances in the history of computer science, and the development of AI is already having a profound impact on our world. If blockchain technology provides a new template for transaction settlement, data storage, and system design, then artificial intelligence is a revolution in computing, analysis, and content delivery. Innovations in these two industries are unlocking new use cases that are likely to accelerate adoption of both in the coming years. This report explores the ongoing integration of cryptocurrency and AI, focusing on novel use cases that seek to bridge the gap between the two and harness the power of both. Specifically, it examines projects developing decentralized computing protocols, zero-knowledge machine learning (zkML) infrastructure, and AI agents.
Cryptocurrency provides a permissionless, trustless, and composable settlement layer for AI. This unlocks use cases such as making hardware more accessible through decentralized computing systems, building AI agents that can perform complex tasks requiring the exchange of value, and developing identity and provenance solutions to combat Sybil attacks and deepfakes. AI brings to cryptocurrency many of the same benefits we saw in Web 2: an enhanced user experience (UX) for users and developers thanks to large language models (e.g., specially trained versions of ChatGPT and Copilot), and the potential to significantly improve smart contract functionality and automation. Blockchains provide the transparent, data-rich environments that AI needs, but their limited computing power is a major obstacle to integrating AI models directly on-chain.
The driving forces behind ongoing experimentation and eventual adoption at the intersection of cryptocurrency and AI are the same ones driving cryptocurrency's most promising use cases: access to a permissionless and trustless coordination layer that better facilitates the transfer of value. Given the huge potential, participants in the space need to understand the fundamental ways in which the two technologies intersect.
In the near term (six months to a year), the integration of cryptocurrency and AI will be dominated by AI applications that improve developer efficiency, smart contract auditability and security, and user accessibility. These integrations are not specific to cryptocurrency but enhance the on-chain developer and user experience.
With high-performance GPUs in serious shortage, decentralized computing products are rolling out GPU offerings tailored to AI, giving adoption a tailwind.
User experience and regulation remain barriers to attracting decentralized computing customers. However, recent developments at OpenAI and ongoing regulatory scrutiny in the United States highlight the value proposition of permissionless, censorship-resistant, decentralized AI networks.
On-chain AI integrations, especially smart contracts that can use AI models, require improvements in zkML and other methods for verifying off-chain computation. A lack of comprehensive tooling and developer talent, as well as high costs, are barriers to adoption.
AI agents are well suited to cryptocurrency: the user (or the agent itself) can create a wallet to transact with other services, agents, or people, something that is not possible with traditional financial rails. Broader adoption will require additional integrations with non-crypto products.
AI is the use of computing and machines to imitate human reasoning and problem-solving abilities.
Neural networks are one training method for AI models. They run inputs through discrete layers of algorithms, refining them until the desired output is produced. Neural networks consist of equations with weights that can be modified to change the output. They can require enormous amounts of data and computation to train before their outputs are accurate. This is one of the most common ways to develop AI models (ChatGPT uses a neural network process that relies on transformers).
Training is the process of developing neural networks and other AI models. It requires large amounts of data to teach a model to correctly interpret inputs and produce accurate outputs. During training, the weights of the model's equations are continuously modified until the output is satisfactory. Training can be very expensive; ChatGPT, for example, was trained using tens of thousands of its own GPUs to process data. Teams with fewer resources often rely on specialized compute providers such as Amazon Web Services, Azure, and Google Cloud.
Inference is the actual use of an AI model to obtain an output or result (e.g., using ChatGPT to create an outline for a paper on the intersection of cryptocurrency and AI). Inference is used both throughout the training process and in the final product. Because of its computational cost, it can be expensive to run even after training is complete, though it is less computationally intensive than training.
Zero-knowledge proofs (ZKPs) allow a claim to be verified without revealing the underlying information. This is useful in cryptocurrency for two main reasons: 1) privacy and 2) scaling. For privacy, it lets users transact without revealing sensitive information, such as how much ETH is in their wallet. For scaling, it lets off-chain computation be proven on-chain faster than re-executing the computation, so blockchains and applications can run computation cheaply off-chain and then verify it on-chain.
Projects at the intersection of AI and cryptocurrency are still building the underlying infrastructure needed to support large-scale on-chain AI interactions.
Decentralized computing marketplaces are emerging to supply the large amounts of physical hardware, primarily GPUs, required to train and run inference on AI models. These two-sided marketplaces connect those leasing out compute with those seeking to lease it, facilitating the transfer of value and the verification of the computation. Within decentralized computing, several subcategories offering additional functionality are emerging. In addition to two-sided marketplaces, this report examines machine learning training providers that specialize in verifiable training and fine-tuned outputs, as well as projects dedicated to connecting compute and model generation to produce AI, often referred to as intelligence incentive networks.
zkML is an emerging focus area for projects that aim to provide verifiable model outputs on-chain in a cost-effective and timely manner. These projects primarily enable applications to handle heavy compute requests off-chain and then post verifiable outputs on-chain, proving that the off-chain workload was completed correctly. zkML is currently expensive and time-consuming, but it is increasingly being used as a solution, as seen in the growing number of integrations between zkML providers and DeFi and gaming applications that want to leverage AI models.
Sufficient compute supply and the ability to verify computation on-chain open the door to on-chain AI agents. Agents are models trained to execute requests on behalf of users. Agents offer the opportunity to significantly enhance the on-chain experience, letting users execute complex transactions simply by talking to a chatbot. For now, however, agent projects remain focused on developing the infrastructure and tooling needed for easy and fast deployment.
AI requires a lot of computing to train models and run inference. Over the past decade, computational demands have grown exponentially as models have become more complex. For example, OpenAI found that from 2012 to 2018, the computational requirements of its models went from doubling every two years to doubling every three and a half months. This has led to a surge in demand for GPUs, with some cryptocurrency miners even repurposing their GPUs to provide cloud computing services. As competition for access to computing intensifies and costs rise, several projects are leveraging cryptography to provide decentralized computing solutions. They offer on-demand computing at competitive prices so teams can affordably train and run models. In some cases, the trade-off is performance and security.
State-of-the-art GPUs, such as those produced by Nvidia, are in high demand. In September 2023, Tether acquired a stake in German Bitcoin miner Northern Data, which reportedly spent $420 million to purchase 10,000 H100 GPUs (one of the most advanced GPUs used for AI training). The wait time to get top-notch hardware can be at least six months, and in many cases even longer. To make matters worse, companies are often required to sign long-term contracts to gain access to computing volumes they may not even use. This may lead to a situation where available computing exists but is not available on the market. Decentralized computing systems help address these market inefficiencies, creating a secondary market where computing owners can sublease their excess capacity at a moment’s notice, thereby freeing up new supply.
Aside from competitive pricing and accessibility, a key value proposition of decentralized computing is censorship resistance. Cutting-edge AI development is increasingly dominated by large technology companies with unrivaled access to compute and data. The first key theme highlighted in the 2023 AI Index Report is that industry is increasingly outpacing academia in developing AI models, concentrating control in the hands of a few technology leaders. This has raised concerns about their outsized influence in shaping the norms and values that underpin AI models, especially after these same companies pushed for regulation to limit AI development outside their control.
Several decentralized computing models have emerged in recent years, each with its own priorities and trade-offs.
Projects such as Akash, io.net, iExec, and Cudos are decentralized computing applications that, in addition to data and general compute solutions, offer or will soon offer access to specialized compute for AI training and inference.
Akash is currently the only fully open source "super cloud" platform. It is a proof-of-stake network using the Cosmos SDK. AKT is Akash’s native token and serves as a form of payment to secure the network and incentivize participation. Akash launched its first mainnet in 2020, focusing on providing a permissionless cloud computing marketplace, initially featuring storage and CPU rental services. In June 2023, Akash launched a new testnet focused on GPUs, and launched the GPU mainnet in September, enabling users to rent GPUs for artificial intelligence training and inference.
There are two main players in the Akash ecosystem: tenants and providers. Tenants are users who want to buy compute resources on the Akash network; providers are those supplying them. To match tenants and providers, Akash relies on a reverse auction. Tenants submit their compute requirements, specifying conditions such as server location or hardware type and the amount they are willing to pay. Providers then submit their asking prices, and the lowest bid wins the lease.
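To make the reverse-auction flow concrete, here is a minimal Python sketch of how an order could be matched with the cheapest eligible bid. The field names (min_gpus, max_price, etc.) are illustrative assumptions, not Akash's actual order schema.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Order:
    """A tenant's compute request (illustrative fields, not Akash's real schema)."""
    tenant: str
    min_gpus: int
    gpu_model: str          # e.g. "a100"
    region: str             # e.g. "us-west"
    max_price: float        # maximum the tenant is willing to pay per lease period

@dataclass
class Bid:
    """A provider's asking price for a given order."""
    provider: str
    gpu_model: str
    region: str
    available_gpus: int
    price: float

def match_order(order: Order, bids: List[Bid]) -> Optional[Bid]:
    """Reverse auction: filter bids that satisfy the order, pick the cheapest."""
    eligible = [
        b for b in bids
        if b.gpu_model == order.gpu_model
        and b.region == order.region
        and b.available_gpus >= order.min_gpus
        and b.price <= order.max_price
    ]
    # Lowest asking price wins the lease; ties are broken arbitrarily here.
    return min(eligible, key=lambda b: b.price) if eligible else None

if __name__ == "__main__":
    order = Order("tenant1", min_gpus=4, gpu_model="a100", region="us-west", max_price=2.0)
    bids = [
        Bid("providerA", "a100", "us-west", 8, 1.8),
        Bid("providerB", "a100", "us-west", 4, 1.5),
        Bid("providerC", "a100", "eu-central", 16, 1.2),  # wrong region, filtered out
    ]
    print(match_order(order, bids))  # providerB wins the lease at 1.5
```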
Akash validators maintain the integrity of the network. The validator set is currently limited to 100, with plans to increase it over time. Anyone can become a validator by staking more AKT than the validator currently staking the smallest amount of AKT. AKT holders can also delegate their AKT to validators. The network’s transaction fees and block rewards are distributed in AKT. Additionally, for each lease, the Akash network earns a “collection fee” at a community-determined rate and distributes it to AKT holders.
The decentralized computing market aims to fill the inefficiencies of the existing computing market. Supply constraints lead companies to hoard computing resources beyond what they may need, and supply is further constrained as contract structures with cloud providers lock customers into long-term contracts even when ongoing access may not be needed. Decentralized computing platforms unlock new supply, allowing anyone in the world with a computing need to become a supplier.
Whether the surge in demand for GPUs for AI training will translate into long-term usage of the Akash network remains to be seen. Akash has long provided a marketplace for CPUs, offering services similar to centralized alternatives at a 70-80% discount, yet the lower price has not led to significant adoption. Active leases on the network have flattened, averaging just 33% utilization for compute, 16% for memory, and 13% for storage as of Q2 2023. While these are impressive metrics for on-chain adoption (for reference, leading storage provider Filecoin had 12.6% storage utilization in Q3 2023), they suggest that supply for these products continues to outstrip demand.
It’s been more than half a year since Akash launched its GPU network, and it’s too early to accurately assess long-term adoption rates. To date, GPU utilization has averaged 44%, higher than CPU, memory, and storage, a sign of demand. This is primarily driven by demand for the highest quality GPUs such as the A100, with over 90% leased.
Daily spending on Akash has also increased, nearly doubling compared with the period before GPUs were introduced. This is partly due to increased usage of other services, especially CPUs, but is mostly a result of new GPU usage.
Pricing is comparable to centralized competitors such as Lambda Cloud and Vast.ai (and in some cases even higher). The huge demand for the highest-end GPUs, such as the H100 and A100, means most owners of that hardware have little interest in listing it on a marketplace with competitive pricing.
While initial interest is promising, barriers to adoption remain (discussed further below). Decentralized computing networks need to do more to generate both demand and supply, and teams are experimenting with how best to attract new users. For example, in early 2024 Akash passed Proposition 240 to increase AKT emissions for GPU providers and incentivize more supply, specifically targeting high-end GPUs. Teams are also working on proof-of-concept models to demonstrate the real-time capabilities of their networks to potential users. Akash is training its own foundation model and has launched chatbot and image-generation products that create outputs using Akash GPUs. Similarly, io.net has developed a Stable Diffusion model and is rolling out new networking capabilities to better emulate the performance and scale of traditional GPU data centers.
In addition to general-purpose computing platforms that can meet AI needs, a group of specialized AI GPU providers focused on machine learning model training is also emerging. Gensyn, for example, is "coordinating power and hardware to build collective intelligence," arguing that "if someone wants to train something, and someone is willing to train it, then that training should be allowed to happen."
The protocol has four main participants: submitters, solvers, verifiers, and whistleblowers. Submitters submit training tasks to the network, including the training objective, the model to be trained, and the training data. As part of submission, they pay upfront for the estimated compute the solver will require.
Once submitted, the task is assigned to a solver, who actually trains the model. The solver then submits the completed task to a verifier, who checks the training to ensure it was done correctly. Whistleblowers are responsible for ensuring that verifiers act honestly; to incentivize their participation, Gensyn plans to periodically plant deliberately incorrect proofs and reward whistleblowers who catch them.
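A simplified sketch of that four-party flow is below, assuming hypothetical task, result, and check objects; Gensyn's real protocol messages and cryptographic verification are far more involved than these placeholders.

```python
import random
from dataclasses import dataclass

@dataclass
class TrainingTask:
    """Submitted by a submitter: objective, model, data, and prepaid compute."""
    objective: str
    model_spec: str
    data_uri: str
    escrowed_payment: float

@dataclass
class TrainingResult:
    solver: str
    checkpoint_uri: str
    claimed_correct: bool   # in reality, a cryptographic proof-of-learning, not a flag

def solver_train(task: TrainingTask, honest: bool = True) -> TrainingResult:
    """Solver runs the training job; a dishonest solver might skip the work."""
    return TrainingResult(solver="solver-1", checkpoint_uri="ipfs://...", claimed_correct=honest)

def verifier_check(result: TrainingResult, spot_check_rate: float = 0.3) -> bool:
    """Verifier re-executes a random subset of training steps to validate the claim."""
    if random.random() < spot_check_rate:
        return result.claimed_correct   # a spot check exposes a dishonest solver
    return True                         # otherwise accept optimistically

def whistleblower_audit(verifier_verdict: bool, ground_truth: bool) -> bool:
    """Whistleblower challenges verifiers who sign off on bad work.
    Returns True when the challenge succeeds (verifier slashed, whistleblower rewarded)."""
    return verifier_verdict != ground_truth

task = TrainingTask("fine-tune LLM", "llama-7b", "s3://dataset", escrowed_payment=100.0)
result = solver_train(task, honest=False)
verdict = verifier_check(result)
print("whistleblower challenge succeeds:", whistleblower_audit(verdict, result.claimed_correct))
```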
Beyond providing compute for AI-related workloads, Gensyn's key value proposition is its verification system, which is still under development. Verification is necessary to ensure that the off-chain computation performed by GPU providers is correct (i.e., that a user's model was trained the way they intended). Gensyn takes a unique approach to this problem, leveraging novel verification methods it calls "probabilistic proof-of-learning, a graph-based pinpoint protocol, and a Truebit-style incentive game." This is an optimistic model that lets the verifier confirm a solver ran the model correctly without having to fully rerun it, which would be costly and inefficient.
Beyond its novel verification approach, Gensyn also claims to be cost-effective relative to both centralized alternatives and crypto competitors, offering ML training up to 80% cheaper than AWS while outperforming similar projects such as Truebit in testing.
Bittensor is a decentralized computing protocol built on Substrate that attempts to answer the question: how do we shift AI development to a collaborative approach? Bittensor aims to decentralize and commoditize the generation of AI. Launched in 2021, the protocol hopes to harness the power of collaborative machine learning models to continuously iterate and produce better AI.
Bittensor draws inspiration from Bitcoin: its native currency TAO has a 21 million supply and a four-year halving cycle (the first halving will occur in 2025). But rather than using proof-of-work to find a valid nonce and earn block rewards, Bittensor relies on "Proof of Intelligence," requiring miners to run models that produce outputs in response to inference requests.
Incentivized Intelligence
Bittensor initially relied on a Mixture of Experts (MoE) approach to generate outputs. Rather than relying on a single generalized model, an MoE setup forwards each inference request to the models most accurate for the given input type. Imagine building a house: you hire various experts for different aspects of the construction process (architects, engineers, painters, construction workers, and so on). MoE applies this to machine learning, trying to leverage the outputs of different models depending on the input. As Bittensor founder Ala Shaabana explains, it's like "talking to a room of smart people and getting the best answer, rather than talking to one person." Due to challenges with routing requests to the correct models, synchronization, and incentivization, this approach has been shelved until the project is further developed.
There are two main participants in the Bittensor network: validators and miners. Validators are tasked with sending inference requests to miners, reviewing their outputs, and ranking them based on the quality of their responses. To ensure that their rankings are reliable, validators are awarded "vtrust" points based on how consistent their rankings are with other validators' rankings. The higher a validator's vtrust score, the more TAO coins they receive. This is to incentivize validators to agree on a model ranking over time, as the more validators that agree on a ranking, the higher their individual vtrust scores will be.
Miners, also known as servers, are the network participants who run the actual machine learning models. Miners compete to provide validators with the most accurate output for a given query; the more accurate the output, the more TAO they earn. Miners can generate these outputs however they want; in the future, it is entirely possible that Bittensor miners will run models they previously trained on Gensyn and use them to earn TAO.
Today, most interaction occurs directly between validators and miners. Validators submit inputs to miners and request outputs (i.e., query their models). Once a validator has queried the miners on the network and received their responses, it ranks them and submits its rankings to the network.
This interaction between validators (relying on PoS) and miners (relying on model proof, a form of PoW) is called Yuma consensus. It is designed to incentivize miners to produce the best output to earn TAO, and to incentivize validators to accurately rank miner output to obtain higher vtrust scores and increase their TAO rewards, thereby forming the consensus mechanism of the network.
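The toy illustration below captures the incentive structure just described: validators score miners, a stake-weighted consensus ranking emerges, and each validator's agreement with that consensus stands in for vtrust. The cosine-similarity measure and the reward formulas are illustrative assumptions; the actual Yuma consensus math is considerably more sophisticated.

```python
import numpy as np

# Rows: validators, columns: miners. Each entry is the score a validator gave a miner.
scores = np.array([
    [0.9, 0.2, 0.7],   # validator 0
    [0.8, 0.3, 0.6],   # validator 1 (broadly agrees with validator 0)
    [0.1, 0.9, 0.2],   # validator 2 (outlier ranking)
])
stake = np.array([100.0, 80.0, 50.0])   # TAO staked to each validator

# Stake-weighted consensus score for each miner.
weights = stake / stake.sum()
consensus = weights @ scores

# Miner rewards proportional to consensus score (toy emission of 1 TAO per step).
miner_rewards = consensus / consensus.sum()

# A "vtrust"-like metric: how closely each validator's scores track the consensus,
# measured here with cosine similarity (an illustrative choice, not Bittensor's formula).
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vtrust = np.array([cosine(row, consensus) for row in scores])
validator_rewards = (vtrust * stake) / (vtrust * stake).sum()

print("miner rewards:", miner_rewards.round(3))
print("validator vtrust:", vtrust.round(3))      # the outlier validator scores lower
print("validator rewards:", validator_rewards.round(3))
```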
Subnets and Applications
Interaction on Bittensor mainly consists of validators submitting requests to miners and evaluating their outputs. However, as the quality of contributing miners improves and the overall intelligence of the network grows, Bittensor will create an application layer on top of its existing stack so that developers can build applications that query the Bittensor network.
In October 2023, Bittensor took a major step towards achieving this goal by introducing subnets with the Revolution upgrade. Subnets are separate networks on Bittensor that incentivize specific behaviors. Revolution opens the network to anyone interested in creating a subnet. In the months since launch, more than 32 subnets have been launched, including subnets for text prompts, data scraping, image generation and storage. As subnets mature and become product-ready, subnet creators will also create application integrations that enable teams to build applications that query specific subnets. Some applications (chatbots, image generators, Twitter reply bots, prediction markets) currently exist, but aside from funding from the Bittensor Foundation, there are no formal incentives for validators to accept and forward these queries.
To provide a clearer explanation, here is an example of how Bittensor might work once the application is integrated into the network.
Subnets earn TAO based on their performance as evaluated by the root network. The root network sits on top of all subnets, essentially acting as a special subnet, and is governed by the 64 largest subnet validators by stake. Root network validators rank subnets based on performance and periodically distribute TAO emissions to them. In this way, individual subnets act as miners for the root network.
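A minimal sketch of how root-network rankings could translate into emission splits follows; the stake-weighting and proportional allocation shown here are illustrative, not Bittensor's exact allocation mechanism, and the subnet names are made up.

```python
import numpy as np

# Root-network validators score each subnet; rows are validators, columns are subnets.
subnet_scores = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
])
root_stake = np.array([500.0, 300.0, 200.0])   # stake of the largest validators (3 here, 64 in practice)

# Stake-weighted consensus over subnet performance.
w = root_stake / root_stake.sum()
subnet_weight = w @ subnet_scores

# Split one block's TAO emission across subnets in proportion to consensus weight.
block_emission = 1.0  # TAO per block (illustrative)
subnet_emission = block_emission * subnet_weight / subnet_weight.sum()
print(dict(zip(["text-prompting", "image-gen", "storage"], subnet_emission.round(4))))
```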
Bittensor Outlook
Bittensor is still experiencing growing pains as it expands the functionality of the protocol to incentivize the generation of intelligence across multiple subnets. Miners are constantly devising new ways to attack the network to obtain more TAO rewards, such as by slightly modifying the output of a high-rated inference run by their model and then submitting multiple variations. Governance proposals that affect the entire network can only be submitted and implemented by Triumvirate, which is composed entirely of Opentensor Foundation stakeholders (it is important to note that proposals need to be approved by the Bittensor Senate, composed of Bittensor validators, before implementation). The project’s token economics are being modified to increase incentives for TAO usage across subnets. The project has also quickly gained fame for its unique approach, and the CEO of HuggingFace, one of the most popular artificial intelligence websites, said Bittensor should add its resources to the site.
In a recent post by core developers titled "Bittensor Paradigm," the team laid out its vision for Bittensor to eventually become "agnostic to what is being measured." In theory, this could allow Bittensor to develop subnets that incentivize any type of behavior, all powered by TAO. Considerable practical limitations remain, most notably proving that these networks can scale to handle such a diverse set of processes and that the underlying incentives drive progress that outpaces centralized products.
The above section provides a rough overview of the various types of decentralized AI computing protocols being developed. In the early stages of their development and adoption, they provide the foundation of an ecosystem that can ultimately facilitate the creation of “AI building blocks,” such as DeFi’s “money Lego” concept. The composability of permissionless blockchains opens up the possibility for each protocol to be built on top of another to provide a more comprehensive decentralized AI ecosystem.
For example, this is one way Akash, Gensyn, and Bittensor might all interact in response to inference requests.
To be clear, this is only an example of what might be possible in the future, not a representation of the current ecosystem, existing partnerships, or likely outcomes. Interoperability limitations and other considerations described below greatly constrain today's integration possibilities. Beyond that, liquidity fragmentation and the need to hold multiple tokens can hurt the user experience, something the founders of both Akash and Bittensor have pointed out.
In addition to computing, several other decentralized infrastructure services have been launched to support the emerging AI ecosystem of cryptocurrencies.
It is beyond the scope of this report to list them all, but some interesting and illustrative examples include:
Ocean: A decentralized data marketplace. Users can create data NFTs representing their data, which can be accessed by purchasing datatokens. Users can both monetize their data and exercise greater sovereignty over it, while giving AI teams access to the data they need to develop and train models.
Grass: A decentralized bandwidth marketplace. Users can sell excess bandwidth to AI companies, which use it to scrape data from the internet. Grass is built on the Wynd Network, which not only lets individuals monetize their bandwidth but also gives bandwidth buyers a more diverse view of what individual users see online (since an individual's internet experience is typically customized to their IP address).
HiveMapper: Building a decentralized mapping product from information collected from everyday drivers. HiveMapper relies on AI to interpret images collected from users' dashboard cameras and rewards users with tokens for helping fine-tune its AI models through reinforcement learning from human feedback (RLHF).
Taken together, these point to almost unlimited opportunities to explore decentralized market models that support AI models or develop the surrounding infrastructure required for them. Currently, most of these projects are in the proof-of-concept stage and require more research and development to prove that they can operate at the scale required to deliver full AI services.
Decentralized computing products are still in the early stages of development. They are just beginning to roll out state-of-the-art computing power capable of training the most powerful AI models in production. To gain meaningful market share, they need to demonstrate real advantages over centralized alternatives. Potential triggers for wider adoption include:
GPU supply/demand. The scarcity of GPUs coupled with rapidly growing computing demands is leading to a GPU arms race. OpenAI has already restricted access to its platform due to GPU limitations. Platforms like Akash and Gensyn can provide cost-competitive alternatives for teams requiring high-performance computing. The next 6-12 months represent a particularly unique opportunity for decentralized computing providers to attract new users who are forced to consider decentralized products due to the lack of broader market access. Coupled with the increasing performance of open source models such as Meta's LLaMA 2, users no longer face the same obstacles in deploying effective fine-tuned models, making computing resources the major bottleneck. However, the existence of the platforms themselves does not ensure adequate computing supply and corresponding demand from consumers. Procuring high-end GPUs remains difficult, and cost isn't always the primary motivator on the demand side. These platforms will be challenged to accumulate sticky users by demonstrating the real benefits of decentralized computing options (whether cost, censorship resistance, uptime and resiliency, or accessibility). They must move quickly: GPU infrastructure investment and buildout is occurring at an astonishing pace.
Regulation. Regulation remains an obstacle to the decentralized computing movement. In the short term, the lack of clear rules means both providers and users face potential risks in using these services: what if a provider supplies compute to, or a buyer unknowingly purchases compute from, a sanctioned entity? Users may be hesitant to use decentralized platforms that lack the controls and oversight of a centralized entity. Protocols have tried to mitigate these concerns by building controls into their platforms or adding filters that restrict access to known compute providers (i.e., those who have provided know-your-customer (KYC) information), but more robust methods that protect privacy while ensuring compliance are needed. In the short term, we are likely to see the emergence of KYC and compliance platforms that restrict access to their protocols to address these issues. Additionally, discussions around possible new regulatory frameworks in the United States (best exemplified by the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) highlight the potential for regulatory action to further restrict access to GPUs.
Censorship. Regulation cuts both ways, and decentralized computing products can benefit from actions that limit access to AI. In addition to the executive order, OpenAI founder Sam Altman has testified before Congress on the need for regulators to license AI development. Discussions about AI regulation are just beginning, but any attempts to restrict access to or censor AI capabilities could accelerate adoption of decentralized platforms where no such barriers exist. The November 2023 OpenAI leadership shakeup (and its swift reversal) further illustrates the risks of handing decision-making power over the most powerful existing AI models to a small group of people. Moreover, all AI models necessarily reflect the biases of the people who created them, whether intentionally or not. One way to eliminate these biases is to make models as open as possible to fine-tuning and training, ensuring that anyone, anywhere can access models of all types and biases.
Data privacy. Decentralized computing may become more attractive than centralized alternatives when integrated with external data and privacy solutions that give users autonomy over their data. Samsung learned this the hard way when it discovered engineers had used ChatGPT to help with chip design, leaking sensitive information to it in the process. Phala Network and iExec claim to offer users SGX secure enclaves to protect user data, and ongoing research into fully homomorphic encryption could further unlock privacy-preserving decentralized computing. As AI becomes further integrated into our lives, users will place greater value on being able to run models through applications with privacy protections. Users also need services that support data composability so they can seamlessly port their data from one model to another.
User experience (UX). User experience remains a significant barrier to wider adoption of all types of crypto applications and infrastructure. This is no different for decentralized computing products, and in some cases the problem is exacerbated by the need for developers to understand both cryptocurrency and AI. Improvements are needed across the board, from basics such as sign-in and abstracting away interaction with the blockchain, to delivering the same high-quality output as the current market leaders. This is evident in the fact that many operational decentralized computing protocols offering cheaper products still struggle to gain regular usage.
Smart contracts are the core building blocks of any blockchain ecosystem. Given a specific set of conditions, they execute automatically, reducing or eliminating the need for a trusted third party and enabling the creation of complex decentralized applications such as those in DeFi. As they exist today, however, smart contracts remain limited in functionality because they execute based on preset parameters that must be updated manually.
For example, a deployed lending protocol smart contract contains specifications for when to liquidate a position based on a specific loan-to-value ratio. While useful in static environments, in dynamic situations where risks are constantly changing, these smart contracts must be constantly updated to adapt to changes in risk tolerance, which creates challenges for contracts that are not managed through a centralized process. For example, a DAO that relies on decentralized governance processes may not be able to react quickly to systemic risks.
Smart contracts that integrate AI (i.e., machine learning models) are one possible way to enhance functionality, security, and efficiency while improving the overall user experience. However, these integrations also introduce additional risk, since it is impossible to guarantee that the models underpinning these smart contracts cannot be attacked or will account for long-tail cases (which are hard to train for given the scarcity of such data).
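To illustrate the limitation described above, the sketch below contrasts a fixed loan-to-value liquidation rule with one whose threshold comes from an off-chain risk model. It is a conceptual Python example under assumed parameters, not production smart-contract code, and the "model" is a stand-in function.

```python
from dataclasses import dataclass

@dataclass
class Position:
    collateral_value: float   # in USD
    debt_value: float         # in USD

def ltv(p: Position) -> float:
    return p.debt_value / p.collateral_value

# Today: a hard-coded parameter set at deployment, changeable only via governance.
STATIC_LIQUIDATION_LTV = 0.80

def should_liquidate_static(p: Position) -> bool:
    return ltv(p) > STATIC_LIQUIDATION_LTV

# With AI integration: the threshold is produced by a risk model that reacts to
# market conditions. In practice the model would run off-chain and its output
# would be delivered on-chain together with a zkML proof (see the zkML discussion below).
def model_liquidation_ltv(volatility: float) -> float:
    # Hypothetical stand-in for a trained risk model: tighten the threshold as
    # volatility rises, but never below 0.5.
    return max(0.5, 0.85 - volatility)

def should_liquidate_dynamic(p: Position, volatility: float) -> bool:
    return ltv(p) > model_liquidation_ltv(volatility)

pos = Position(collateral_value=10_000, debt_value=7_800)
print(should_liquidate_static(pos))           # False: LTV 0.78 <= 0.80
print(should_liquidate_dynamic(pos, 0.15))    # True: threshold tightened to 0.70
```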
Machine learning requires large amounts of computation to run complex models, which makes it too costly to run AI models directly inside smart contracts. A DeFi protocol offering users a yield-optimizing model, for example, would have a hard time running that model on-chain without paying exorbitant gas fees. One solution is to increase the computing power of the underlying blockchain, but that raises the requirements on the chain's validator set and potentially undermines decentralization. Instead, some projects are exploring zkML to verify outputs in a trustless manner without requiring intensive on-chain computation.
A common example illustrating the usefulness of zkML is when a user needs someone else to run the data through the model and verify that their counterparty is actually running the correct model. Perhaps developers are using a decentralized computing provider to train their models and are concerned that the provider is trying to cut costs by using a cheaper model with an almost imperceptible difference in output. zkML enables compute providers to run data through their models and then generate proofs that can be verified on-chain to prove that the model output given the input is correct. In this case, model providers would have the added advantage of being able to provide their models without having to reveal the underlying weights that produced the outputs.
The reverse is also possible. If a user wants to run a model on their data but, for privacy reasons (such as medical records or proprietary business information), does not want the project providing the model to access that data, the user can run the model on their data without sharing it and then prove they ran the correct model. These possibilities greatly expand the design space for integrating AI with smart contract functionality by addressing prohibitive compute constraints.
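Independent of any particular prover, the general pattern looks roughly like the sketch below. The prove and verify functions are placeholders for whatever proof system a team adopts (EZKL, Giza, etc., discussed next), not real library calls.

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class Proof:
    """Opaque stand-in for a zk-SNARK/STARK attesting that output = model(input)."""
    model_commitment: str   # hash/commitment to the model's weights or circuit
    public_inputs: Any
    public_output: Any
    blob: bytes

def prove(model_commitment: str, private_weights: Any, inputs: Any) -> Tuple[Any, Proof]:
    """Run off-chain by the model provider: compute the output plus a proof of
    correct execution WITHOUT revealing the weights. Placeholder implementation."""
    output = "model_output"                       # would be model(inputs) in reality
    return output, Proof(model_commitment, inputs, output, b"proof-bytes")

def verify(proof: Proof, expected_model_commitment: str) -> bool:
    """Run on-chain (or by anyone): cheap check that the proof is valid and was
    produced by the committed model. Placeholder for the real verifier circuit."""
    return proof.model_commitment == expected_model_commitment and len(proof.blob) > 0

# The counterparty checks the proof against the model commitment they agreed on,
# without ever seeing the weights or re-running the computation themselves.
output, proof = prove("0xabc...model_hash", private_weights=None, inputs=[1, 2, 3])
assert verify(proof, "0xabc...model_hash")
```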
Given the early state of the zkML field, development is primarily focused on building the infrastructure and tooling teams need to convert their models and outputs into proofs that can be verified on-chain. These products abstract away the zero-knowledge aspects of development as much as possible.
EZKL and Giza are two projects building this tooling by providing verifiable proofs of machine learning model execution. Both help teams build machine learning models so that those models can be executed in a way whose results can be trusted and verified on-chain. Both projects use the Open Neural Network Exchange (ONNX) to convert machine learning models written in common frameworks like TensorFlow and PyTorch into a standard format. They then output versions of these models that also generate zk-proofs when executed. EZKL is open source and produces zk-SNARKs, while Giza is closed source and produces zk-STARKs. Both projects are currently EVM-compatible only.
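The ONNX step is the most standardized part of that pipeline. Below is a small PyTorch-to-ONNX export using the documented torch.onnx.export API; the downstream step of compiling the ONNX file into a zk circuit is tool-specific (EZKL, Giza, etc.) and is only indicated by a comment, since those commands vary by tool.

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """A deliberately small model; circuit size (and proving cost) grows with model size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, x):
        return self.net(x)

model = TinyModel().eval()
dummy_input = torch.randn(1, 4)

# Export to the ONNX interchange format that zkML toolchains consume.
torch.onnx.export(
    model,
    dummy_input,
    "tiny_model.onnx",
    input_names=["input"],
    output_names=["output"],
)

# From here, a zkML toolchain compiles tiny_model.onnx into a proving circuit and
# emits a proof alongside each inference; the exact commands depend on the tool
# and are omitted here.
```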
EZKL has made significant progress over the past few months in enhancing its zkML solution, focusing on reducing cost, improving security, and speeding up proof generation. In November 2023, for example, EZKL integrated a new open-source GPU library that reduces aggregate proof time by 35%, and in January it released Lilith, a software solution for integrating high-performance compute clusters and orchestrating concurrent jobs when using EZKL proofs. Giza is unique in that, in addition to providing tooling for creating verifiable machine learning models, it plans to implement a web3 equivalent of Hugging Face, opening a user marketplace for zkML collaboration and model sharing, and eventually integrating decentralized compute products. In January, EZKL published a benchmark comparing the performance of EZKL, Giza, and RiscZero (described below), in which EZKL demonstrated faster proof times and lower memory usage.
Modulus Labs is also developing a new zk-proof technique customized for AI models. Modulus published a paper called "The Cost of Intelligence" (alluding to the extremely high cost of running AI models on-chain), which benchmarked the zk-proof systems that existed at the time to identify the capabilities of, and bottlenecks to improving, zk-proofs of AI models. The paper, published in January 2023, concluded that existing offerings were too expensive and inefficient to enable AI applications at scale. Building on that initial research, Modulus launched Remainder in November, a specialized zero-knowledge prover built to reduce the cost and proving time of AI models, with the goal of making it economically viable for projects to integrate models into their smart contracts at scale. Their work is closed source and therefore cannot be benchmarked against the solutions above, but it was recently cited in Vitalik's blog post on cryptography and artificial intelligence.
Tooling and infrastructure development are critical to the future growth of the zkML space because they significantly reduce the friction for teams that need to deploy the zk circuits required to run verifiable off-chain computation. Creating secure interfaces that let non-crypto-native builders working in machine learning bring their models on-chain will allow applications to experiment more with truly novel use cases. The tooling also addresses a major barrier to wider zkML adoption: the shortage of developers who are knowledgeable and interested in working at the intersection of zero-knowledge cryptography and machine learning.
Other solutions under development, often called "coprocessors," include RiscZero, Axiom, and Ritual. The term coprocessor is largely semantic; these networks fill many different roles, including verifying off-chain computation on-chain. Like EZKL, Giza, and Modulus, their goal is to fully abstract away the zero-knowledge proof generation process, creating what is essentially a zero-knowledge virtual machine capable of executing programs off-chain and generating proofs that can be verified on-chain. RiscZero and Axiom can serve simple AI models as more general-purpose coprocessors, while Ritual is purpose-built for use with AI models.
Infernet is Ritual's first product and includes the Infernet SDK, which allows developers to submit inference requests to the network and receive outputs and (optionally) proofs in return. Infernet nodes receive these requests, process the computation off-chain, and return the outputs. For example, a DAO could create a process that ensures all new governance proposals meet certain prerequisites before being submitted. Each time a new proposal is submitted, the governance contract triggers an inference request via Infernet, invoking a governance-specific AI model trained for that DAO. The model reviews the proposal to ensure all necessary criteria were met and returns an output and proof, approving or rejecting the submission.
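A rough sketch of the off-chain side of that DAO workflow is shown below. The request/response objects, the proposal-checking model, and the handler are all hypothetical placeholders for illustration; they are not the actual Infernet SDK interface.

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    request_id: int
    proposal_text: str        # emitted on-chain by the governance contract

@dataclass
class InferenceResponse:
    request_id: int
    approved: bool
    proof: bytes              # optional proof that the committed model was actually run

def run_governance_model(proposal_text: str) -> bool:
    """Hypothetical stand-in for a DAO-specific model that checks whether a
    proposal satisfies required criteria (summary, budget, timeline, etc.)."""
    required_sections = ("summary", "budget", "timeline")
    return all(section in proposal_text.lower() for section in required_sections)

def handle_request(req: InferenceRequest) -> InferenceResponse:
    """What an off-chain node does: run the model, attach a proof, return the result.
    The on-chain governance contract would then accept or reject the proposal
    based on this response."""
    approved = run_governance_model(req.proposal_text)
    return InferenceResponse(req.request_id, approved, proof=b"zk-proof-placeholder")

req = InferenceRequest(1, "Summary: fund grants. Budget: 10k. Timeline: Q3.")
print(handle_request(req))
```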
Over the next year, the Ritual team plans to roll out additional features to form an infrastructure layer called the Ritual Superchain. Many of the projects discussed earlier could plug into Ritual as service providers. The Ritual team has already integrated with EZKL for proof generation and may soon add functionality from other leading providers. Infernet nodes on Ritual could also use Akash or io.net GPUs and query models trained on Bittensor subnets. Their end goal is to become the go-to provider of open AI infrastructure, able to serve machine learning and other AI-related tasks for any workload on any network.
zkML helps reconcile the tension between blockchains and AI: the former are inherently resource-constrained, while the latter requires large amounts of compute and data. As one of Giza's founders put it, "The use cases are so rich... it's a bit like asking what the use cases for smart contracts were in the early days of Ethereum... All we're doing is expanding the use cases for smart contracts." As noted above, however, development today happens mainly at the tooling and infrastructure level. Applications are still exploratory, and the challenge for teams is to demonstrate that the value of implementing a model with zkML outweighs its complexity and cost.
Some current applications include:
Decentralized Finance. zkML upgrades the design space of DeFi by enhancing smart contract capabilities. DeFi protocols provide machine learning models with large amounts of verifiable and tamper-proof data that can be used to generate revenue capture or trading strategies, risk analysis, user experience, and more. For example, Giza partnered with Yearn Finance to build a proof-of-concept automated risk assessment engine for Yearn’s new v3 vault. Modulus Labs partnered with Lyra Finance to incorporate machine learning into its AMM, worked with Ion Protocol to implement models that analyze validator risk, and helped Upshot validate its AI-powered NFT price feeds. Protocols such as NOYA (leveraging EZKL) and Mozaic provide access to proprietary off-chain models that give users access to automated liquidity mining while enabling them to verify on-chain data inputs and proofs. Spectral Finance is building an on-chain credit scoring engine to predict the likelihood that Compound or Aave borrowers will default on their loans. These so-called “De-Ai-Fi” products are likely to become even more popular in the coming years thanks to zkML.
Games. Gaming has long been viewed as ripe for disruption and enhancement by public blockchains, and zkML makes on-chain gaming with artificial intelligence possible. Modulus Labs has implemented proofs of concept for simple on-chain games. Leela vs the World is a game-theoretic chess match in which users play against an AI chess model, with zkML verifying that every move Leela makes is based on the model the game says it is running. Likewise, the team has used the EZKL framework to build a simple singing competition and an on-chain tic-tac-toe game. Cartridge is using Giza to enable teams to deploy fully on-chain games, most recently highlighting a simple AI driving game in which users compete to create better models for a car trying to avoid obstacles. While simple, these proofs of concept point to future implementations capable of more complex on-chain verification, such as sophisticated NPC actors that interact with in-game economies, as seen in AI Arena, a Super Smash Bros.-style game in which players train their own fighters and then deploy them as AI models to battle.
Identity, provenance, and privacy. Cryptocurrency is already being used to verify authenticity and combat the growing volume of AI-generated and manipulated content and deepfakes, and zkML can advance these efforts. WorldCoin is a proof-of-personhood solution that requires users to scan their iris to generate a unique ID. In the future, biometric IDs could be self-hosted on personal devices using encrypted storage, with the models required to verify the biometrics running locally. Users could then provide proof of their biometrics without revealing their identity, preserving privacy while resisting Sybil attacks. The same approach applies to other inferences that require privacy, such as using models to analyze medical data and images to detect disease, verifying personhood and developing matching algorithms in dating apps, or insurance and lending institutions that need to verify financial information.
zkML is still in the experimental stage, with most projects focused on building infrastructure primitives and proofs of concept. Today's challenges include computational cost, memory constraints, model complexity, limited tools and infrastructure, and developer talent. In short, there is considerable work to be done before zkML can be implemented at the scale required for consumer products.
However, as the field matures and these limitations are resolved, zkML will become a key component of AI and cryptography integration. In essence, zkML promises to be able to bring off-chain computation on-chain at any scale while maintaining the same or close to the same security guarantees as running on-chain. However, until this vision is realized, early adopters of the technology will continue to have to weigh the privacy and security of zkML against the efficiency of alternatives.
One of the most exciting integrations of AI and cryptocurrency is the ongoing AI agent experiment. Agents are autonomous robots capable of receiving, interpreting and performing tasks using AI models. This can be anything from having a personal assistant at your fingertips that is fine-tuned to your preferences, to hiring a financial bot that manages and adjusts your portfolio based on your risk appetite.
As cryptocurrencies provide permissionless and trustless payment infrastructure, agents and cryptocurrencies can work well together. After training, agents are given a wallet so that they can conduct transactions using smart contracts on their own. For example, today's simple agents can scrape information on the Internet and then trade on prediction markets based on models.
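The stripped-down sketch below shows the kind of loop such an agent might run: pull information, ask a model for a probability estimate, and, if the market's implied odds diverge enough, sign and submit a trade from the agent's own wallet. Every function and parameter here is a hypothetical placeholder, not an existing library.

```python
from dataclasses import dataclass

@dataclass
class Market:
    question: str
    yes_price: float    # market-implied probability of "yes", between 0 and 1

def fetch_news(question: str) -> str:
    """Placeholder: scrape or query an API for information about the question."""
    return "headline text relevant to the question"

def model_probability(question: str, context: str) -> float:
    """Placeholder for an LLM or fine-tuned model returning an estimated P(yes)."""
    return 0.72

def sign_and_submit_trade(wallet_key: str, market: Market, side: str, size: float) -> str:
    """Placeholder: because the agent controls its own wallet, it can sign a
    transaction to the prediction-market contract directly, with no human in the loop."""
    return f"txhash-for-{side}-{size}"

def run_agent(wallet_key: str, market: Market, edge_threshold: float = 0.05):
    context = fetch_news(market.question)
    p_yes = model_probability(market.question, context)
    edge = p_yes - market.yes_price
    if abs(edge) < edge_threshold:
        return None                      # no mispricing worth trading
    side = "yes" if edge > 0 else "no"
    size = min(abs(edge) * 100, 25.0)    # crude, capped position sizing
    return sign_and_submit_trade(wallet_key, market, side, size)

print(run_agent("0xPRIVATE_KEY", Market("Will X happen by June?", yes_price=0.60)))
```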
Morpheus is one of the latest open-source agent projects, launching in 2024 on Ethereum and Arbitrum. Its white paper was published anonymously in September 2023, providing the basis for a community to form and build around it (including prominent figures such as Erik Voorhees). The white paper includes a downloadable smart agent protocol: an open-source LLM that can be run locally, is managed by the user's wallet, and interacts with smart contracts. It uses a ranking of smart contracts to help the agent determine which contracts are safe to interact with, based on criteria such as the number of transactions processed.
The white paper also provides a framework for building out the Morpheus network, including the incentive structures and infrastructure required to make the smart agent protocol work. This includes incentives for contributors to build front ends for interacting with agents, APIs for developers to build applications that plug into agents so they can interact with each other, and cloud solutions that let users access the compute and storage needed to run agents on edge devices. Initial funding for the project began in early February, with the full protocol expected to launch in the second quarter of 2024.
Decentralized Autonomous Infrastructure Network (DAIN) is a new agent infrastructure protocol building an agent-to-agent economy on Solana. DAIN's goal is to let agents from different businesses interact seamlessly with one another through a common API, greatly opening up the design space for AI agents, with a focus on agents that can interact with both web2 and web3 products. In January, DAIN announced its first partnership with Asset Shield, enabling users to add "agent signers" to their multisigs that can interpret transactions and approve or reject them according to rules set by the user.
Fetch.AI is one of the earliest deployed AI agent protocols and has developed an ecosystem for building, deploying, and using agents on-chain with its FET token and the Fetch.AI wallet. The protocol provides a comprehensive set of tools and applications for working with agents, including in-wallet functionality for interacting with and ordering agents.
Autonolas, whose founders include a former member of the Fetch team, is an open marketplace for creating and using decentralized AI agents. Autonolas also provides developers with tooling to build AI agents that are hosted off-chain and can plug into multiple blockchains, including Polygon, Ethereum, Gnosis Chain, and Solana. They currently have several active agent proof-of-concept products, including for prediction markets and DAO governance.
SingularityNET is building a decentralized marketplace for AI agents where people can deploy narrowly focused AI agents that can be hired by other people or agents to perform complex tasks. Others, such as AlteredStateMachine, are building integrations of AI agents with NFTs. Users mint NFTs with randomized attributes that give them strengths and weaknesses across different tasks. These agents can then be trained to enhance certain attributes for use in gaming or DeFi, or as virtual assistants, and to transact with other users.
Collectively, these projects envision a future ecosystem of agents that work together not only to perform tasks but also to help build general AI. Truly sophisticated agents will be able to complete any user task autonomously. For example, instead of the user having to check that an agent has already integrated with an external API (such as a travel booking site) before using it, a fully autonomous agent would be able to figure out how to hire another agent to integrate the API and then execute the task. From the user's perspective, there would be no need to check whether an agent can complete a task, because the agent can figure that out itself.
In July 2023, Lightning Labs introduced a proof-of-concept implementation for using agents on the Lightning Network, called the LangChain Bitcoin Kit. This product is particularly interesting because it aims to solve a growing problem in the Web 2 world: gated and expensive API keys for web applications.
LangChain solves this problem by providing developers with a set of tools that let agents buy, sell, and hold Bitcoin, as well as query API keys and send micropayments. On traditional payment rails, micropayments are prohibitively costly due to fees, but on the Lightning Network agents can send unlimited micropayments daily at minimal cost. Paired with the L402 payment-metered API framework, this lets companies adjust access fees to their APIs based on increases and decreases in usage, rather than setting a single, cost-prohibitive standard.
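Conceptually, L402-style metering means an API responds with a Lightning invoice instead of charging for a flat key, and the agent pays per call. The sketch below captures that request-pay-retry loop with placeholder functions under assumed behavior; it is not Lightning Labs' actual SDK or the real L402 wire format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApiResponse:
    status: int                       # 402 means "payment required"
    invoice: Optional[str] = None     # Lightning invoice when status == 402
    body: Optional[dict] = None

def call_api(endpoint: str, payment_preimage: Optional[str] = None) -> ApiResponse:
    """Placeholder API: demands a small per-request payment unless proof of payment
    (the invoice preimage) is attached."""
    if payment_preimage is None:
        return ApiResponse(status=402, invoice="lnbc10n1...")   # sat-scale invoice
    return ApiResponse(status=200, body={"data": "result"})

def pay_invoice(invoice: str) -> str:
    """Placeholder for the agent's Lightning wallet paying the invoice and
    returning the preimage as a receipt."""
    return "preimage-" + invoice[-6:]

def agent_fetch(endpoint: str) -> dict:
    """The request / pay / retry loop an agent runs against metered endpoints."""
    resp = call_api(endpoint)
    if resp.status == 402:
        preimage = pay_invoice(resp.invoice)      # pay a few sats, not a monthly key
        resp = call_api(endpoint, payment_preimage=preimage)
    return resp.body

print(agent_fetch("https://api.example.com/quote"))
```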
In the future, where on-chain activity is dominated by agent-to-agent interactions, something like this will be necessary to ensure that agents can interact with each other in a way that is not too costly. This is an early example of how agents can be used on a permissionless and cost-effective payment circuit, opening up possibilities for new markets and economic interactions.
The field of AI agents is still nascent. Projects are just beginning to roll out functional agents that can handle simple tasks using their infrastructure, and access is typically limited to experienced developers and users. Over time, however, one of the biggest impacts AI agents will have on crypto is user-experience improvements across all verticals. Transactions will begin to shift from click-based to text-based, letting users interact with on-chain agents through large language models. Teams such as Dawn Wallet are already launching chatbot wallets for users to interact on-chain.
Furthermore, it is unclear how agents would operate in Web 2, where the financial rails rely on regulated banking institutions that cannot operate 24/7 or conduct seamless cross-border transactions. As Lyn Alden has highlighted, the lack of chargebacks and the ability to process microtransactions make crypto rails particularly attractive compared with credit cards. However, if agents become a more common way to transact, existing payment providers and applications are likely to move quickly to implement the infrastructure required for them to operate on existing financial rails, undermining some of the benefits of using cryptocurrency.
For now, agents may be limited to deterministic cryptocurrency transactions, where a given input guarantees a given output. Both the models, which determine these agents' ability to figure out how to execute complex tasks, and the tooling, which expands the scope of what they can accomplish, require further development. For crypto agents to become useful beyond novel on-chain use cases, broader integration and acceptance of crypto as a form of payment, along with regulatory clarity, will be needed. As these pieces develop, however, agents are poised to become one of the largest consumers of the decentralized computing and zkML solutions discussed above, receiving and solving any task in an autonomous, non-deterministic manner.
AI brings to crypto the same innovations we have already seen in web2, enhancing everything from infrastructure development to user experience and accessibility. However, projects are still early in their development, and the near-term integration of crypto and AI will be dominated by off-chain integrations.
Products like Copilot claim to increase developer efficiency tenfold, and Layer 1s and DeFi applications are already partnering with major companies such as Microsoft to launch AI-assisted development platforms. Companies such as Cub3.ai and Test Machine are developing AI integrations for smart contract auditing and real-time threat monitoring to enhance on-chain security. And LLM chatbots are being trained on on-chain data, protocol documentation, and applications to give users improved accessibility and UX.
For more advanced integrations that truly leverage crypto's underlying technology, the challenge remains demonstrating that implementing AI solutions on-chain is both technically possible and economically viable. The development of decentralized compute, zkML, and AI agents points to promising verticals that are laying the groundwork for a deeply interconnected future of crypto and AI.