The Intersection of Cryptocurrency and Artificial Intelligence: In-depth Research Report
Article author: Lucas Tcheyan | Compiled by: Block unicorn
Introduction
The emergence of public blockchains is one of the most profound advances in the history of computer science. Artificial intelligence is already having, and will continue to have, a profound impact on our world. If blockchain technology provides a new template for transaction settlement, data storage, and system design, then artificial intelligence is a revolution in computing, analysis, and content delivery. Innovations in the two industries are unlocking new use cases that are likely to accelerate adoption of both in the coming years. This report explores the ongoing integration of cryptocurrency and artificial intelligence, focusing on novel use cases that seek to bridge the gap between the two and harness the power of both. Specifically, it examines projects developing decentralized computing protocols, zero-knowledge machine learning (zkML) infrastructure, and artificial intelligence agents.
Cryptocurrency provides a permissionless, trustless, and composable settlement layer for artificial intelligence. This unlocks use cases such as making hardware more accessible through decentralized computing systems, building artificial intelligence agents that can perform complex tasks requiring value exchange, and developing identity and provenance solutions to combat Sybil attacks and deepfakes. Artificial intelligence brings to cryptocurrency many of the same benefits we have seen in Web 2: an enhanced user experience (UX) for users and developers through large language models (e.g., specially trained versions of ChatGPT and Copilot), and the potential to significantly improve smart contract functionality and automation. Blockchains provide the transparent, data-rich environments that artificial intelligence needs, but their limited computing power is a major obstacle to integrating AI models directly on-chain.
The driving forces behind the ongoing experimentation and eventual adoption at the intersection of cryptocurrency and artificial intelligence are the same ones driving cryptocurrency's most promising use cases: access to a permissionless and trustless orchestration layer that better facilitates value transfer. Given the enormous potential, participants in the field need to understand the fundamental ways in which these two technologies intersect.
Key points:
In the near term (6 months to 1 year), the integration of cryptocurrency and AI will be dominated by AI applications that increase developer efficiency, smart contract auditability and security, and user accessibility. These integrations are not cryptocurrency-specific, but they enhance the on-chain developer and user experience.
Amid a serious shortage of high-performance GPUs, decentralized computing products are rolling out GPU offerings customized for artificial intelligence, fueling adoption.
User experience and regulation remain barriers to attracting decentralized computing customers. However, recent developments at OpenAI and ongoing regulatory scrutiny in the United States highlight the value proposition of permissionless, censorship-resistant, and decentralized AI networks.
On-chain AI integration, especially smart contracts that can use AI models, requires improvements in zkML and other methods of verifying off-chain computation. A lack of comprehensive tooling and developer talent, as well as high costs, are barriers to adoption.
AI agents are well suited to cryptocurrency: users (or the agents themselves) can create wallets to transact with other services, agents, or people, something that is not currently possible using traditional financial rails. Wider adoption will require additional integrations with non-crypto products.
Terminology
Artificial intelligence is the use of computing and machines to imitate human reasoning and problem-solving abilities.
Neural networks are one method of training artificial intelligence models. They run inputs through discrete layers of algorithms, refining them until the desired output is produced. Neural networks consist of equations with weights that can be modified to change the output. They can require enormous amounts of data and computation to train before their outputs become accurate. This is one of the most common ways AI models are developed (ChatGPT uses a neural network approach based on Transformers).
Training is the process of developing neural networks and other artificial intelligence models. It requires large amounts of data to teach a model to correctly interpret inputs and produce accurate outputs. During training, the weights of the model's equations are continuously modified until satisfactory outputs are produced. Training can be very expensive; ChatGPT, for example, was trained using tens of thousands of GPUs. Teams with fewer resources often rely on specialized compute providers such as Amazon Web Services, Azure, and Google Cloud.
Inference is the actual use of an AI model to obtain an output or result (for example, using ChatGPT to create an outline for a paper on the intersection of cryptocurrency and AI). Inference runs occur both throughout the training process and in the final product. Because of their computational cost, they can still be expensive to run even after training is complete, though they are far less computationally intensive than training.
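To make the distinction concrete, here is a minimal training-versus-inference sketch in Python (illustrative only: the tiny one-layer "network" and toy data are invented for this example). Training repeatedly adjusts the weights against labeled data; inference is a single forward pass with the learned weights.

```python
import numpy as np

# Toy data: learn y = 2x + 1 (illustrative only)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([[1.0], [3.0], [5.0], [7.0]])

# A single "layer": one weight and one bias
w, b = np.random.randn(1, 1), np.zeros((1, 1))

# Training: repeatedly adjust the weights to reduce prediction error
for step in range(2000):
    pred = X @ w + b                          # forward pass
    grad = pred - y                           # gradient of the squared error
    w -= 0.05 * X.T @ grad / len(X)           # update weight
    b -= 0.05 * grad.mean(keepdims=True)      # update bias

# Inference: a single forward pass using the learned weights
print(float(np.array([[10.0]]) @ w + b))      # prints approximately 21.0
```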
Zero-knowledge proof (ZKP) allows claims to be verified without revealing underlying information. This is useful in cryptocurrencies for two main reasons: 1) privacy and 2) scaling. To protect privacy, this enables users to conduct transactions without revealing sensitive information such as how much ETH is in the wallet. For scaling, it enables off-chain computations to be proven on-chain faster than re-executing computations. This enables blockchains and applications to cheaply run computations off-chain and then verify them on-chain. For more information about zero-knowledge and its role in the Ethereum Virtual Machine, see Christine Kim’s report zkEVMs: The Future of Ethereum Scalability.
Artificial Intelligence/Cryptocurrency Market Map
Projects integrating artificial intelligence and cryptocurrency are still building out the underlying infrastructure needed to support large-scale on-chain AI interactions.
Decentralized computing marketplaces are emerging to provide the vast amounts of physical hardware, primarily graphics processing units (GPUs), required to train and run inference on artificial intelligence models. These two-sided marketplaces connect those leasing out compute with those seeking to lease it, facilitating the transfer of value and the verification of computation. Within decentralized computing, several subcategories offering additional functionality are emerging. In addition to two-sided marketplaces, this report examines machine learning training providers that specialize in verifiable training and fine-tuned outputs, as well as projects dedicated to connecting compute and model generation, often referred to as intelligence incentive networks.
zkML is an emerging area of focus for projects looking to provide verifiable model output on-chain in a cost-effective and timely manner. These projects primarily enable applications to handle heavy computing requests off-chain and then publish verifiable output on-chain, proving that the off-chain workload is complete and accurate. zkML is expensive and time-consuming in current instances, but is increasingly being used as a solution. This is evident in the increasing number of integrations between zkML providers and DeFi/gaming applications that want to leverage AI models.
Sufficient computing supply and the ability to verify computations on-chain open the door to on-chain artificial intelligence agents. Agents are trained models capable of executing requests on behalf of users. Agents offer the opportunity to significantly enhance the on-chain experience, allowing users to perform complex transactions simply by talking to a chatbot. For now, however, agent projects remain focused on developing the infrastructure and tooling for easy and fast deployment.
Decentralized Computing
Overview
Artificial intelligence requires a lot of computing to train models and run inference. Over the past decade, computational demands have grown exponentially as models have become more complex. For example, OpenAI found that from 2012 to 2018, the computational demands of its models went from doubling every two years to doubling every three and a half months. This has led to a surge in demand for GPUs, with some cryptocurrency miners even repurposing their GPUs to provide cloud computing services. As competition for access to computing intensifies and costs rise, several projects are leveraging cryptography to provide decentralized computing solutions. They offer on-demand computing at competitive prices so teams can affordably train and run models. In some cases, the trade-off may be performance and security.
State-of-the-art GPUs, such as those produced by Nvidia, are in high demand. In September, Tether acquired a stake in German Bitcoin miner Northern Data, which reportedly spent $420 million to purchase 10,000 H100 GPUs, among the most advanced GPUs used for AI training. Wait times for top-tier hardware can be at least six months, and in many cases longer. Making matters worse, companies are often required to sign long-term contracts for compute capacity they may not even use. This can result in compute that exists but is not available on the market. Decentralized computing systems help address these inefficiencies by creating a secondary market where compute owners can sublease their excess capacity at a moment's notice, unlocking new supply.
Aside from competitive pricing and accessibility, a key value proposition of decentralized computing is censorship resistance. Cutting-edge AI development is increasingly dominated by large tech companies with unparalleled computing and data access. The first key theme highlighted in the 2023 AI Index annual report is that industry is increasingly surpassing academia in the development of artificial intelligence models, concentrating control in the hands of a few technology leaders. This has raised concerns about their ability to have outsized influence in shaping the norms and values that underpin AI models, especially after these tech companies push for regulation to limit the development of AI beyond their control.
Decentralized computing verticals
In recent years, several models of decentralized computing have emerged, each with its own emphasis and trade-offs.
Generalized Computing
Akash, io.net, iExec, Cudos, and other projects are decentralized computing offerings. In addition to data and general-purpose computing solutions, they provide, or will soon provide, access to specialized compute for AI training and inference.
Akash is currently the only fully open source "super cloud" platform. It is a proof-of-stake network using the Cosmos SDK. AKT is Akash’s native token and serves as a form of payment to secure the network and incentivize participation. Akash launched its first mainnet in 2020, focusing on providing a permissionless cloud computing marketplace, initially featuring storage and CPU rental services. In June 2023, Akash launched a new testnet focused on GPUs, and launched the GPU mainnet in September, enabling users to rent GPUs for artificial intelligence training and inference.
There are two main participants in the Akash ecosystem: tenants and providers. Tenants are users of the Akash network who want to purchase computing resources; providers are those supplying them. To match tenants and providers, Akash relies on a reverse auction. Tenants submit their compute requirements, specifying conditions such as server location or hardware type, along with the amount they are willing to pay. Providers then submit their asking prices, and the lowest bidder wins the task.
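As a rough illustration of this reverse-auction matching (a simplified sketch, not Akash's actual implementation; the data structures, field names, and prices are invented), a tenant posts an order with its requirements and maximum price, providers bid, and the cheapest qualifying bid wins:

```python
from dataclasses import dataclass

@dataclass
class Order:                 # tenant's deployment request
    gpu_model: str
    region: str
    max_price: float         # maximum the tenant is willing to pay per unit time

@dataclass
class Bid:                   # provider's offer against an order
    provider: str
    gpu_model: str
    region: str
    price: float

def match(order: Order, bids: list[Bid]) -> Bid | None:
    """Reverse auction: the cheapest bid that satisfies the order wins."""
    qualifying = [
        b for b in bids
        if b.gpu_model == order.gpu_model
        and b.region == order.region
        and b.price <= order.max_price
    ]
    return min(qualifying, key=lambda b: b.price, default=None)

order = Order(gpu_model="A100", region="us-west", max_price=1.50)
bids = [
    Bid("provider-a", "A100", "us-west", 1.20),
    Bid("provider-b", "A100", "us-west", 0.95),
    Bid("provider-c", "A100", "eu-east", 0.80),  # wrong region, filtered out
]
print(match(order, bids))  # provider-b wins at 0.95
```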
Akash validators maintain the integrity of the network. The validator set is currently capped at 100, with plans to increase it over time. Anyone can become a validator by staking more AKT than the validator currently staking the smallest amount. AKT holders can also delegate their AKT to validators. The network's transaction fees and block rewards are distributed in AKT. Additionally, for each lease, the Akash network collects a fee at a community-determined rate and distributes it to AKT holders.
Secondary Market
The purpose of decentralized computing marketplaces is to fill inefficiencies in existing compute markets. Supply constraints lead companies to hoard more compute than they may need, and supply is further restricted because cloud contract structures lock customers into long-term agreements even when ongoing access may not be required. Decentralized computing platforms unlock new supply, allowing anyone in the world with in-demand compute to become a provider.
Whether the surge in GPU demand for AI training will translate into long-term usage of networks like Akash remains to be seen. Akash, for example, has long offered a CPU marketplace with services similar to centralized alternatives at a 70-80% discount, yet the lower prices did not lead to significant adoption. Active leases on the network have flattened, with utilization averaging just 33% for compute, 16% for memory, and 13% for storage as of Q2 2023. While these are impressive metrics for on-chain adoption (for reference, leading storage provider Filecoin had 12.6% storage utilization in Q3 2023), it suggests that supply for these products still outstrips demand.
It’s been more than half a year since Akash launched its GPU network, and it’s too early to accurately assess long-term adoption rates. To date, GPU utilization has averaged 44%, higher than CPU, memory, and storage, a sign of demand. This is primarily driven by demand for the highest quality GPUs such as the A100, with over 90% leased.
Akash's daily spend has also increased, nearly doubling relative to before GPUs were introduced. This is partly due to increased usage of other services, especially CPUs, but is mostly a result of new GPU usage.
Pricing is on par with (and in some cases even higher than) centralized competitors such as Lambda Cloud and Vast.ai. The huge demand for the highest-end GPUs, such as the H100 and A100, means that most owners of such hardware have little interest in listing it on a marketplace with competitive pricing.
While initial results are positive, barriers to adoption remain (discussed further below). Decentralized computing networks need to do more to generate both demand and supply, and teams are experimenting with how best to attract new users. In early 2024, for example, Akash passed Proposition 240 to increase AKT emissions for GPU providers and incentivize more supply, specifically targeting high-end GPUs. The team is also working on proof-of-concept models that demonstrate the real-time capabilities of its network to potential users: Akash is training its own foundation model and has launched chatbot and image-generation products that create outputs using Akash GPUs. Likewise, io.net has developed a Stable Diffusion model and is rolling out new network features that better showcase the network's performance and scale.
Decentralized machine learning training
In addition to general-purpose computing platforms, a group of specialized AI GPU providers focused on machine learning model training is also emerging. Gensyn, for example, is "coordinating power and hardware to build collective intelligence," with the idea that "if someone wants to train something, and someone is willing to train it, then that training should be allowed to happen."
The protocol has four main participants: submitters, solvers, verifiers, and whistleblowers. Submitters post tasks with training requests to the network; these tasks include the training objective, the model to be trained, and the training data. As part of the submission process, submitters pay upfront for the estimated compute the solver will require. After submission, the task is assigned to a solver, who actually trains the model. The solver then submits the completed task to a verifier, who is responsible for checking the training to ensure it was done correctly. Whistleblowers are responsible for ensuring that verifiers act honestly. To incentivize whistleblowers to participate in the network, Gensyn plans to periodically plant deliberately incorrect proofs and reward whistleblowers who catch them.
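A highly simplified sketch of that four-role flow might look like the following (illustrative pseudologic only, not Gensyn's protocol; every class and function name here is invented):

```python
from dataclasses import dataclass

@dataclass
class Task:
    objective: str
    model: str
    data: str
    escrow: float                 # submitter pays the estimated compute cost upfront
    result: str | None = None
    verified: bool = False

def submit(objective: str, model: str, data: str, estimated_cost: float) -> Task:
    # Submitter: posts the training task and escrows payment
    return Task(objective, model, data, escrow=estimated_cost)

def solve(task: Task) -> Task:
    # Solver: performs the actual training and posts the result
    task.result = f"trained({task.model}, {task.data})"
    return task

def verify(task: Task) -> Task:
    # Verifier: spot-checks the training claim instead of re-running everything
    task.verified = task.result is not None and task.result.startswith("trained(")
    return task

def whistleblow(task: Task, expected: str) -> bool:
    # Whistleblower: challenges verifiers that pass bad work; rewarded if correct
    return task.verified and task.result != expected

task = verify(solve(submit("fine-tune", "llm-small", "dataset-x", estimated_cost=10.0)))
print(task.verified, whistleblow(task, expected=task.result))
```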
Beyond providing compute for AI-related workloads, Gensyn's key value proposition is its verification system, which is still under development. Verification is necessary to ensure that a GPU provider's off-chain computation was performed correctly (i.e., that a user's model was trained the way they wanted). Gensyn takes a unique approach to this problem, leveraging novel verification methods it describes as "probabilistic proof-of-learning, graph-based pinpoint protocols, and Truebit-style incentive games." This is an optimistic verification model that lets the verifier confirm the solver ran the model correctly without fully rerunning it, which would be costly and inefficient.
In addition to its innovative verification methods, Gensyn claims to be cost-effective relative to both centralized alternatives and crypto competitors, offering ML training up to 80% cheaper than AWS while outperforming similar projects such as Truebit in testing.
Whether these preliminary results can be replicated at scale on a decentralized network remains to be seen. Gensyn hopes to harness excess compute from providers such as small data centers, retail users, and eventually small mobile devices like phones. However, as the Gensyn team itself admits, relying on heterogeneous compute providers introduces new challenges.
For centralized providers like Google Cloud and Coreweave, compute is expensive while communication between compute (bandwidth and latency) is cheap; these systems are designed to enable communication between hardware as quickly as possible. Gensyn turns this framework on its head, lowering compute costs by letting anyone in the world supply a GPU, but increasing communication costs because the network must now coordinate compute jobs across heterogeneous hardware that is physically far apart. Gensyn has not yet launched, but it is a proof of concept of what is possible when building decentralized machine learning training protocols.
Decentralized general intelligence
Decentralized computing platforms also open up possibilities for designing new methods of creating artificial intelligence. Bittensor is a decentralized computing protocol built on Substrate that attempts to answer the question "how do we turn artificial intelligence into a collaborative effort?" Bittensor aims to decentralize and commoditize the generation of artificial intelligence. Launched in 2021, the protocol hopes to harness the power of collaborative machine learning models to continuously iterate and produce better AI.
Bittensor draws inspiration from Bitcoin: its native currency, TAO, has a supply of 21 million and a four-year halving cycle (the first halving falls in 2025). Rather than using proof-of-work to generate the correct nonce and earn block rewards, Bittensor relies on "proof of intelligence," requiring miners to run models that generate outputs in response to inference requests.
Incentivized Intelligence
Bittensor initially relied on a Mixture of Experts (MoE) model to generate outputs. When an inference request is submitted, an MoE model does not rely on one generalized model but forwards the request to the most accurate model for the given input type. Imagine building a house and hiring various experts for different aspects of the construction process (architects, engineers, painters, construction workers, and so on); MoE applies this to machine learning, attempting to leverage the outputs of different models depending on the input. As Bittensor founder Ala Shaabana explains, it is like "talking to a room of smart people and getting the best answer, rather than talking to one person." Due to challenges in ensuring correct routing, synchronizing messages to the correct model, and incentivization, this approach has been shelved until the project matures further.
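The routing idea behind a Mixture of Experts can be sketched in a few lines (a toy illustration with hand-written "experts" and gating rules; real MoE models use a learned gating network over neural sub-models):

```python
import numpy as np

# Toy "experts": each specializes in a different kind of input
experts = {
    "code":  lambda x: f"[code expert] {x}",
    "math":  lambda x: f"[math expert] {x}",
    "prose": lambda x: f"[prose expert] {x}",
}

def gate(x: str) -> np.ndarray:
    """Toy gating function: scores each expert for the given input.
    A real MoE learns these scores with a small neural network."""
    scores = np.array([
        3.0 * x.count("def"),                 # crude signal that the input is code
        float(sum(c.isdigit() for c in x)),   # crude signal that it is arithmetic
        1.0,                                  # small default score for prose
    ])
    return np.exp(scores) / np.exp(scores).sum()   # softmax over the experts

def route(x: str) -> str:
    weights = gate(x)
    best = list(experts)[int(np.argmax(weights))]  # forward to the top-scoring expert
    return experts[best](x)

print(route("def add(a, b): return a + b"))   # routed to the code expert
print(route("what is 12 * 7?"))               # routed to the math expert
```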
There are two main participants in the Bittensor network: validators and miners. Validators are tasked with sending inference requests to miners, reviewing their outputs, and ranking them based on the quality of their responses. To ensure that their rankings are reliable, validators are awarded "vtrust" points based on how consistent their rankings are with other validators' rankings. The higher a validator's vtrust score, the more TAO emissions they receive. This is to incentivize validators to agree on a model ranking over time, as the more validators that agree on a ranking, the higher their individual vtrust scores will be.
Miners, also known as servers, are the network participants who run the actual machine learning models. Miners compete to provide validators with the most accurate outputs for a given query; the more accurate the output, the more TAO emissions they earn. Miners can generate these outputs however they wish. For example, it is entirely possible that in the future Bittensor miners will run models previously trained on Gensyn to earn TAO emissions.
Today, most interactions occur directly between validators and miners. Validators submit inputs to miners and request outputs (i.e. train models). Once validators query miners on the network and receive their responses, they rank the miners and submit their rankings to the network.
This interaction between validators (relying on PoS) and miners (relying on model proof, a form of PoW) is called Yuma consensus. It is designed to incentivize miners to produce the best output to earn TAO emissions, and to incentivize validators to accurately rank miner outputs to obtain higher vtrust scores and increase their TAO rewards, thus forming the consensus mechanism of the network.
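A stripped-down sketch of that incentive loop is shown below. It is not Yuma consensus itself, which is considerably more involved; the scores, stakes, and scoring rules are invented for illustration. Validators are scored by how closely their miner rankings agree with the stake-weighted consensus, and miners are paid in proportion to that consensus:

```python
import numpy as np

# Each row: one validator's scores for three miners (higher = better output)
validator_scores = np.array([
    [0.9, 0.5, 0.1],
    [0.8, 0.6, 0.2],
    [0.2, 0.9, 0.7],   # an outlier validator
])
validator_stake = np.array([100.0, 80.0, 20.0])

# Consensus miner score: stake-weighted average of the validators' scores
consensus = validator_stake @ validator_scores / validator_stake.sum()

# Miner emissions: proportional to the consensus score
miner_emissions = consensus / consensus.sum()

# "vtrust"-style score: how well each validator agrees with the consensus,
# rewarding validators that rank miners consistently with their peers
dist = np.abs(validator_scores - consensus).mean(axis=1)
vtrust = 1.0 - dist / dist.max()

print("miner emission share:", np.round(miner_emissions, 2))
print("validator vtrust:    ", np.round(vtrust, 2))  # the outlier scores lowest
```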
Subnets and Applications
Interaction on Bittensor mainly involves validators submitting requests to miners and evaluating their outputs. As the quality of contributing miners improves and the overall intelligence of the network grows, however, Bittensor will create an application layer on top of its existing stack so that developers can build applications that query the Bittensor network.
In October 2023, Bittensor took a major step toward achieving this goal by introducing subnets with the Revolution upgrade. Subnets are separate networks on Bittensor that incentivize specific behaviors. Revolution opens the network to anyone interested in creating a subnet. In the months since launch, more than 32 subnets have been launched, including those for text prompts, data scraping, image generation and storage. As subnets mature and become product-ready, subnet creators will also create application integrations that enable teams to build applications that query specific subnets. Some applications (e.g. chatbots, image generators, tweet reply bots, prediction markets) do currently exist, but aside from funding from the Bittensor Foundation, there are no formal incentives for validators to accept and forward these queries.
To make this clearer, here is an example of how Bittensor might work once applications are integrated into the network.
Subnets earn TAO based on their performance as evaluated by the root network. The root network sits on top of all subnets, essentially acting as a special subnet, and is governed by the 64 largest subnet validators weighted by stake. Root network validators rank subnets based on their performance and periodically allocate TAO emissions to them. In this way, individual subnets act as miners for the root network.
Bittensor's Outlook
Bittensor is still experiencing growing pains as it extends the protocol's functionality to incentivize intelligence generation across multiple subnets. Miners continue to devise new ways to attack the network for more TAO rewards, such as slightly modifying the output of a highly rated inference run by their model and submitting multiple variations. Governance proposals affecting the entire network can only be submitted and implemented by the Triumvirate, which is composed entirely of Opentensor Foundation stakeholders (notably, proposals must be approved by the Bittensor Senate, composed of Bittensor validators, before implementation). The project's token economics are being revised to strengthen incentives for TAO usage across subnets. The project has also quickly gained attention for its unique approach, with the CEO of HuggingFace, one of the most popular AI websites, saying that Bittensor should add its resources to the site.
In a recent core-developer post titled "Bittensor Paradigm," the team laid out its vision for Bittensor to eventually evolve to be "agnostic to what is being measured." In theory, this would allow Bittensor to develop subnets that incentivize any type of behavior, all powered by TAO. Considerable practical limitations remain, most notably proving that these networks can scale to handle such a diverse set of processes and that the underlying incentives can drive progress beyond centralized offerings.
Building a decentralized computing stack for artificial intelligence models
The above sections provide an in-depth overview of the various types of decentralized artificial intelligence computing protocols being developed. In the early stages of development and adoption, they provide the foundation of an ecosystem that can ultimately facilitate the creation of “artificial intelligence building blocks,” such as DeFi’s “money Lego” concept. The composability of permissionless blockchains opens up the possibility for each protocol to be built on top of another to provide a more comprehensive decentralized AI ecosystem.
For example, this is one way Akash, Gensyn, and Bittensor might all interact in response to inference requests.
To be clear, this is just an example of what may happen in the future and is not representative of the current ecosystem, existing partnerships, or possible outcomes. Interoperability limitations and other considerations described below greatly limit today's integration possibilities. Beyond that, liquidity fragmentation and the need to use multiple tokens can hurt the user experience, something both Akash and Bittensor’s founders pointed out.
Other decentralized products
Beyond compute, several other decentralized infrastructure services have launched to support cryptocurrency's emerging artificial intelligence ecosystem. Listing them all is beyond the scope of this report, but some interesting and illustrative examples include:
Ocean: a decentralized data marketplace. Users can create data NFTs representing their data, which can be purchased using datatokens. Users can both monetize their data and exercise greater sovereignty over it, while providing AI teams with access to the data they need to develop and train models.
Grass: a decentralized bandwidth marketplace. Users can sell excess bandwidth to AI companies, which use it to scrape data from the internet. Built on the Wynd Network, this not only lets individuals monetize their bandwidth but also gives bandwidth buyers a more diverse view of what individual users see online (since an individual's internet experience is typically customized to their IP address).
HiveMapper: a decentralized mapping product built from information collected from everyday drivers. HiveMapper relies on AI to interpret images collected from users' dashboard cameras and rewards users with tokens for helping fine-tune its AI models through Reinforcement Learning from Human Feedback (RLHF).
Overall, these projects point to the virtually unlimited opportunities for exploring decentralized marketplaces that support AI models or the surrounding infrastructure required to develop them. Currently, most of these projects are at the proof-of-concept stage and require far more research and development to prove they can operate at the scale needed to deliver full AI services.
Outlook
Decentralized computing products are still in the early stages of development. They are just beginning to roll out state-of-the-art compute capable of training the most powerful AI models in production. To gain meaningful market share, they will need to demonstrate real advantages over centralized alternatives. Potential triggers for wider adoption include:
GPU supply/demand. The scarcity of GPUs coupled with rapidly growing computing demands is leading to a GPU arms race. OpenAI already restricted access to its platform due to GPU limitations. Platforms like Akash and Gensyn can provide cost-competitive alternatives for teams requiring high-performance computing. The next 6-12 months represent a particularly unique opportunity for decentralized computing vendors to attract new users who are forced to consider decentralized products due to the lack of broader market access. Coupled with the increasing performance of open source models such as Meta's LLaMA2, users no longer face the same obstacles in deploying effective fine-tuned models, making computing resources a major bottleneck. However, the existence of the platform itself does not ensure adequate computing supply and corresponding demand from consumers. Procuring high-end GPUs remains difficult, and cost isn't always the primary motivator on the demand side. These platforms will be challenged to accumulate sticky users by demonstrating the real benefits of using decentralized computing options (whether due to cost, censorship resistance, uptime and resiliency, or accessibility). They must move quickly. GPU infrastructure investment and construction is occurring at an astonishing pace.
Regulation. Regulation remains a headwind for decentralized computing. In the short term, the lack of clear rules means both providers and users face potential risk in using these services: what if a provider supplies compute to, or a buyer unknowingly purchases compute from, a sanctioned entity? Users may hesitate to use decentralized platforms that lack the controls and oversight of a centralized entity. Protocols have tried to mitigate these concerns by building controls into their platforms or adding filters that restrict access to known compute providers (i.e., those that have provided know-your-customer (KYC) information), but stronger approaches are needed to protect privacy while ensuring compliance. In the short term, we are likely to see KYC and compliance platforms emerge that restrict access to their protocols to address these issues. Additionally, discussions around a possible new U.S. regulatory framework, best exemplified by the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, highlight the potential for regulatory action to further restrict access to GPUs.
Censorship. Regulation works both ways, and decentralized computing products can benefit from actions that limit access to artificial intelligence. In addition to the executive order, OpenAI founder Sam Altman testified before Congress about the need for regulators to issue licenses for AI development. Discussions about AI regulation are just beginning, but any such attempts to limit access or censor AI capabilities could accelerate the adoption of decentralized platforms where such barriers do not exist. November’s OpenAI leadership changes (or lack thereof) are further evidence of the risks of giving decision-making power over the most powerful existing AI models to a few people. Furthermore, all AI models necessarily reflect the biases of the people who create them, whether intentional or not. One way to eliminate these biases is to make models as open as possible to fine-tuning and training, ensuring that anyone, anywhere can access models of all types and biases.
Data privacy. When integrated with external data and privacy solutions that give users sovereignty over their data, decentralized computing may become more attractive than centralized alternatives. Samsung learned this the hard way when it discovered that engineers were using ChatGPT to help with chip design, leaking sensitive information in the process. Phala Network and iExec claim to offer users SGX secure enclaves to protect their data, and ongoing research into fully homomorphic encryption could further unlock privacy-preserving decentralized computing. As AI becomes further integrated into our lives, users will place greater value on being able to run models on privacy-preserving applications. Users also need services that support data composability so they can seamlessly port their data from one model to another.
User experience (UX). User experience remains a significant barrier to wider adoption of all types of crypto applications and infrastructure. This is no different for decentralized computing products, and in some cases it is exacerbated by the need for developers to understand both cryptocurrency and artificial intelligence. Improvements in basics such as onboarding and abstracting away blockchain interactions are needed to provide the same high-quality experience as current market leaders. This is evident in the fact that many live decentralized computing protocols offering cheaper products still struggle to gain regular usage.
Smart contracts and zkML
Smart contracts are the core building blocks of any blockchain ecosystem. They execute automatically when a specific set of conditions is met, reducing or eliminating the need for a trusted third party and enabling the creation of complex decentralized applications such as those in DeFi. However, smart contract functionality is still limited, because contracts execute based on preset parameters that must be updated.
Take, for example, a smart contract deployed for a lending protocol that specifies when a position should be liquidated based on a given loan-to-value ratio. While useful in a static environment, in dynamic situations where risk is constantly shifting, these contracts must be continually updated to reflect changing risk tolerances, which is challenging for contracts that are not governed through a centralized process. A DAO that relies on decentralized governance, for instance, may not be able to react quickly enough to systemic risks.
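For concreteness, the kind of static rule such a contract encodes looks roughly like the sketch below (illustrative Python only; real lending protocols implement this in smart contract code with oracle price feeds). The point is that the liquidation threshold is a fixed parameter that cannot adapt on its own:

```python
# Static liquidation rule of the kind a lending smart contract hard-codes.
# The threshold is a preset parameter; changing it requires a governance update.
LIQUIDATION_LTV = 0.80  # liquidate when debt exceeds 80% of collateral value

def is_liquidatable(collateral_amount: float, collateral_price: float, debt_value: float) -> bool:
    """Return True if the position's loan-to-value ratio breaches the threshold."""
    collateral_value = collateral_amount * collateral_price
    ltv = debt_value / collateral_value
    return ltv >= LIQUIDATION_LTV

# 10 ETH of collateral at $2,000 each with $17,000 borrowed -> LTV = 0.85 -> liquidatable
print(is_liquidatable(collateral_amount=10, collateral_price=2_000, debt_value=17_000))
```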
Smart contracts that integrate artificial intelligence (i.e., machine learning models) are one possible way to enhance functionality, security, and efficiency while improving the overall user experience. However, these integrations also introduce additional risk, since it is impossible to guarantee that the models underpinning these contracts cannot be exploited, or that they will account for long-tail situations (which are difficult to train for given the scarcity of data).
Zero-knowledge machine learning (zkML)
Machine learning requires large amounts of computation to run complex models, which makes it prohibitively expensive to run AI models directly inside smart contracts. A DeFi protocol offering users a yield-optimization model, for example, would struggle to run that model on-chain without paying exorbitant gas fees. One solution is to increase the computational power of the underlying blockchain, but this also raises the requirements on the chain's validator set, potentially undermining decentralization. Instead, some projects are exploring zkML to verify outputs in a trustless manner without requiring intensive on-chain computation.
A common example illustrating the usefulness of zkML is when a user needs someone else to run data through a model and verify that their counterparty is actually running the correct model. Perhaps developers are using a decentralized computing provider to train their models and are concerned that the provider is trying to cut costs by using a cheaper model with an almost imperceptible difference in output. zkML enables compute providers to run data through their models and then generate proofs that can be verified on-chain to prove that the model output given the input is correct. In this case, model providers would have the added advantage of being able to provide their models without having to reveal the underlying weights that produced the outputs.
The reverse also works. If a user wants to run a model over their own data but does not want the project providing the model to have access to that data for privacy reasons (e.g., medical examinations or proprietary business information), the user can run the model on their data without sharing it and then verify with a proof that they ran the correct model. These possibilities greatly expand the design space for integrating artificial intelligence with smart contract functionality by addressing prohibitive compute limitations.
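At a high level, both directions follow the same prove-then-verify pattern. The sketch below is purely schematic: the function names are placeholders rather than any real library's API, and the hash-based "proof" is a stand-in for the succinct zero-knowledge proof a real zkML stack such as EZKL would produce after compiling the model into a circuit:

```python
import hashlib, json

def commitment(model_weights: bytes) -> str:
    # Public commitment to the model (real systems commit to the circuit/weights)
    return hashlib.sha256(model_weights).hexdigest()

def run_and_prove(model_weights: bytes, inputs: list[float]) -> tuple[list[float], dict]:
    """Prover side: run the model off-chain and emit output plus a 'proof'.
    Stand-in only -- a real zkML prover outputs a succinct zero-knowledge proof."""
    outputs = [2.0 * x for x in inputs]                      # pretend model
    transcript = json.dumps({"in": inputs, "out": outputs}).encode()
    return outputs, {
        "model_commitment": commitment(model_weights),
        "binding": hashlib.sha256(transcript).hexdigest(),
    }

def verify_on_chain(expected_commitment: str, inputs, outputs, proof) -> bool:
    """Verifier side: cheap check that the committed model produced these outputs.
    On-chain, this role would be played by a SNARK/STARK verifier contract."""
    transcript = json.dumps({"in": inputs, "out": outputs}).encode()
    return (proof["model_commitment"] == expected_commitment
            and proof["binding"] == hashlib.sha256(transcript).hexdigest())

weights = b"opaque-model-weights"          # never revealed to the verifier
outs, prf = run_and_prove(weights, [1.0, 2.5])
print(verify_on_chain(commitment(weights), [1.0, 2.5], outs, prf))  # True
```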
Infrastructure and Tools
Given the early state of the zkML field, development is primarily focused on building the infrastructure and tooling teams need to convert their models and outputs into proofs that can be verified on-chain. These products abstract away as much of the zero-knowledge side of development as possible.
EZKL and Giza are two projects building this tooling by providing verifiable proofs of machine learning model execution. Both help teams build machine learning models in a form whose results can be trusted and verified on-chain. Both projects use the Open Neural Network Exchange (ONNX) to convert machine learning models written in common frameworks like TensorFlow and PyTorch into a standard format. They then output versions of these models that also generate zk proofs when executed. EZKL is open source and produces zk-SNARKs, while Giza is closed source and produces zk-STARKs. Both projects are currently EVM-compatible only.
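The ONNX step is the most straightforward part of that pipeline to show. The sketch below exports a small PyTorch model to ONNX using standard PyTorch APIs (this is not EZKL- or Giza-specific code; the subsequent circuit-compilation and proof-generation steps are tool-specific and omitted):

```python
import torch
import torch.nn as nn

# A small model of the kind a team might want to prove execution of on-chain
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
model.eval()

dummy_input = torch.randn(1, 4)  # example input that defines the graph's shapes

# Export to ONNX, the standard interchange format zkML toolchains consume
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
# From here, a zkML toolchain compiles model.onnx into a circuit and
# produces proofs of inference that can be verified on-chain.
```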
EZKL has made significant progress enhancing its zkML solution over the past few months, focusing primarily on reducing costs, improving security, and speeding up proof generation. In November 2023, for example, EZKL integrated a new open-source GPU library that cuts aggregate proof time by 35%, and in January EZKL released Lilith, a software solution for integrating high-performance compute clusters and orchestrating concurrent jobs when using EZKL proofs. Giza is unique in that, in addition to providing tooling for creating verifiable machine learning models, it plans to implement a web3 equivalent of Hugging Face, opening a marketplace for users to collaborate on zkML and share models, and eventually integrating decentralized compute products. In January, EZKL published a benchmark comparing the performance of EZKL, Giza, and RiscZero (described below), showing faster proof times and lower memory usage for EZKL.
Modulus Labs is also developing new zk-proof techniques customized specifically for AI models. Modulus published a paper called "The Cost of Intelligence" (alluding to the extremely high cost of running AI models on-chain) that benchmarked existing zk-proof systems to identify their capabilities and bottlenecks for proving AI models. Published in January 2023, the paper argued that existing offerings were too expensive and inefficient to enable AI applications at scale. Building on that initial research, Modulus launched Remainder in November, a specialized zero-knowledge prover designed to reduce the cost and proving time of AI models, with the goal of making it economically viable for projects to integrate models into smart contracts at scale. Their work is closed source and therefore cannot be benchmarked against the solutions above, but it was recently cited in Vitalik's blog post on crypto and artificial intelligence.
Tooling and infrastructure development are critical to the future growth of the zkML space, because they significantly reduce the friction for teams that need to deploy the zk circuits required to run verifiable off-chain computation. Secure interfaces that let non-crypto-native machine learning builders bring their models on-chain will enable greater experimentation with truly novel use cases. Tooling also addresses a major barrier to wider zkML adoption: the scarcity of knowledgeable developers interested in working at the intersection of zero-knowledge, machine learning, and cryptography.
Coprocessor
Other solutions under development (called "coprocessors") include RiscZero, Axiom, and Ritual. The term coprocessor is primarily semantic – these networks fulfill many different roles, including validating off-chain computations on-chain. Like EZKL, Giza, and Modulus, their goal is to completely abstract the process of zero-knowledge proof generation, creating essentially a zero-knowledge virtual machine capable of executing off-chain programs and generating on-chain verification proofs. RiscZero and Axiom can serve simple AI models because they are more general-purpose coprocessors, while Ritual is specifically built for use with AI models.
Infernet is Ritual's first product and includes an Infernet SDK that lets developers submit inference requests to the network and receive outputs and (optionally) proofs in return. Infernet nodes receive these requests, process the computation off-chain, and return the outputs. For example, a DAO could create a process ensuring that all new governance proposals meet certain prerequisites before being submitted. Each time a new proposal is submitted, the governance contract triggers an inference request via Infernet, invoking a DAO-specific, governance-trained AI model. The model reviews the proposal to ensure all necessary criteria are met and returns an output and evidence, approving or rejecting the proposal's submission.
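A schematic of that proposal-screening loop might look like the sketch below (purely illustrative: the class and function names are invented and do not reflect the actual Infernet SDK interfaces):

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    model_id: str          # e.g., a DAO-specific governance model
    payload: dict          # the proposal text and metadata

@dataclass
class InferenceResponse:
    approved: bool
    proof: str             # optional validity proof returned with the output

def offchain_node(request: InferenceRequest) -> InferenceResponse:
    """Stand-in for an off-chain node: runs the model and returns output plus proof."""
    text = request.payload["text"]
    meets_criteria = "budget" in text and len(text) > 50   # pretend model output
    return InferenceResponse(approved=meets_criteria, proof="0xproof...")

def governance_contract_hook(proposal_text: str) -> bool:
    """Stand-in for the on-chain side: trigger inference, accept only if approved."""
    request = InferenceRequest(model_id="dao-governance-v1",
                               payload={"text": proposal_text})
    response = offchain_node(request)
    # On-chain, the proof would be verified before trusting `approved`
    return response.approved

print(governance_contract_hook(
    "Proposal: allocate budget of 5,000 tokens to fund audits next quarter."))
```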
Over the next year, the Ritual team plans to roll out additional features forming an infrastructure layer called the Ritual Superchain. Many of the projects discussed earlier could plug into Ritual as service providers. The Ritual team has already integrated with EZKL for proof generation and may soon add functionality from other leading vendors. Infernet nodes on Ritual could also use Akash or io.net GPUs and query models trained on Bittensor subnets. Their end goal is to become the provider of choice for open AI infrastructure, able to serve machine learning and other AI-related tasks from any network, for any workload.
Applications
zkML helps reconcile the tension between blockchains and artificial intelligence: the former are inherently resource-constrained, while the latter requires large amounts of compute and data. As one of Giza's founders put it, "the use cases are very rich... it's a bit like the early days of Ethereum asking what the use cases for smart contracts are... all we are doing is expanding the use cases for smart contracts." However, as noted above, development today occurs mainly at the tooling and infrastructure level. Applications are still exploratory, and the challenge for teams is to demonstrate that the value of implementing models with zkML outweighs its complexity and cost.
Some current applications include:
Decentralized Finance. zkML upgrades the design space of DeFi by enhancing the capabilities of smart contracts. DeFi protocols provide large amounts of verifiable and immutable data for machine learning models, which can be used to generate returns or trading strategies, risk analysis, user experience, and more. For example, Giza partnered with Yearn Finance to build a proof-of-concept automated risk assessment engine for Yearn’s new v3 vault. Modulus Labs partnered with Lyra Finance to incorporate machine learning into its AMM, worked with Ion Protocol to implement models that analyze validator risk, and helped Upshot validate its AI-powered NFT price feeds. Protocols such as NOYA (leveraging EZKL) and Mozaic provide access to proprietary off-chain models that give users access to automated liquidity mining while enabling them to verify on-chain data inputs and proofs. Spectral Finance is building an on-chain credit scoring engine to predict the likelihood that Compound or Aave borrowers will default on their loans. These so-called “De-Ai-Fi” products are likely to become even more popular in the coming years thanks to zkML.
Gaming. Gaming has long been seen as ripe for disruption and enhancement by public blockchains, and zkML makes on-chain gaming with artificial intelligence possible. Modulus Labs has implemented proofs-of-concept for simple on-chain games: Leela vs the World is a game-theoretic chess match in which users play against an AI chess model, with zkML verifying that every move Leela makes comes from the model the game claims to run. The team has also used the EZKL framework to build a simple singing contest and on-chain tic-tac-toe. Cartridge is using Giza to let teams deploy fully on-chain games, recently highlighting a simple AI driving game in which users compete to build better models for a car trying to avoid obstacles. While simple, these proofs-of-concept point toward future implementations capable of more complex on-chain verification, such as sophisticated NPC actors that interact with in-game economies, as seen in AI Arena, a Super Smash Bros.-style game in which players train their fighters and then deploy them into battle as AI models.
Identity, provenance, and privacy. Cryptocurrency is already being used to verify authenticity and combat the growing volume of AI-generated and manipulated content and deepfakes, and zkML can advance these efforts. WorldCoin is a proof-of-personhood solution that requires users to scan their iris to generate a unique ID. In the future, biometric IDs could be self-custodied on personal devices using encrypted storage, with the models required to verify the biometrics running locally. Users could then provide proof of their biometrics without revealing their identity, resisting Sybil attacks while preserving privacy. This also applies to other inferences that require privacy, such as using models to analyze medical data or images to detect disease, verifying personhood and developing matching algorithms in dating apps, or insurance and lending institutions that need to verify financial information.
Outlook
zkML is still in the experimental stage, with most projects focused on building infrastructure primitives and proofs of concept. Today's challenges include computational cost, memory constraints, model complexity, limited tools and infrastructure, and developer talent. In short, there is considerable work to be done before zkML can be implemented at the scale required for consumer products.
However, as the field matures and these limitations are addressed, zkML will become a key component of AI and cryptography integration. In essence, zkML promises to be able to bring off-chain computation on-chain at any scale while maintaining the same or close to the same security guarantees as running on-chain. However, until this vision is realized, early adopters of the technology will continue to have to weigh the privacy and security of zkML against the efficiency of alternatives.
Artificial Intelligence Agents
One of the most exciting integrations of artificial intelligence and cryptocurrency is the ongoing experimentation with AI agents. Agents are autonomous bots capable of receiving, interpreting, and executing tasks using AI models. This could be anything from an always-available personal assistant fine-tuned to your preferences to a financial agent that manages and adjusts your portfolio according to your risk appetite.
Agents and cryptocurrency fit well together because cryptocurrency provides permissionless and trustless payment infrastructure. Once trained, agents can be given a wallet so they can transact with smart contracts on their own. For example, today's agents can scrape information on the internet and then trade on prediction markets based on a model.
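A minimal sketch of that pattern is shown below (entirely hypothetical: the wallet, model, and market objects are invented placeholders, not any real SDK). The agent queries a model and then signs and submits a transaction from its own wallet when the model's confidence clears a threshold:

```python
from dataclasses import dataclass

@dataclass
class Wallet:
    address: str
    balance: float

    def sign_and_send(self, action: str, amount: float) -> str:
        # Placeholder for signing and broadcasting an on-chain transaction
        self.balance -= amount
        return f"tx({self.address}: {action} {amount})"

def model_predict(market_question: str) -> tuple[str, float]:
    # Stand-in for an AI model scoring a prediction-market outcome
    return ("YES", 0.78)   # (predicted outcome, confidence)

def agent_step(wallet: Wallet, market_question: str, threshold: float = 0.7) -> str | None:
    outcome, confidence = model_predict(market_question)
    if confidence < threshold or wallet.balance < 10:
        return None                       # not confident enough, or underfunded
    # The agent acts autonomously: no human signs this transaction
    return wallet.sign_and_send(f"buy {outcome} on '{market_question}'", amount=10)

wallet = Wallet(address="0xagent...", balance=100.0)
print(agent_step(wallet, "Will ETH close above $4k this quarter?"))
```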
Agent Providers
Morpheus is one of the latest open-source agent projects, launching in 2024 on Ethereum and Arbitrum. Its whitepaper, published anonymously in September 2023, provided the foundation for a community to form and build around it (including prominent figures such as Erik Voorhees). The whitepaper includes a downloadable Smart Agent protocol, an open-source LLM that can be run locally, is managed by the user's wallet, and interacts with smart contracts. It uses smart contract rankings to help agents determine which contracts are safe to interact with based on criteria such as the number of transactions processed.
The whitepaper also provides a framework for building out the Morpheus network, including the incentive structures and infrastructure needed to make the Smart Agent protocol operational. This includes incentives for contributors to build front-ends for interacting with agents, APIs for developers to build applications that plug into agents so they can interact with one another, and cloud and edge solutions that give users access to the compute and storage required to run agents. Initial funding for the project began in early February, with the full protocol expected to launch in the second quarter of 2024.
The Decentralized Autonomous Infrastructure Network (DAIN) is a new agent infrastructure protocol that builds an agent-to-agent economy on Solana. The goal of DAIN is to allow agents from different enterprises to interact with each other seamlessly through a common API, thereby greatly opening up the design space for AI agents, with a focus on implementing agents that can interact with web2 and web3 products. In January, DAIN announced its first partnership with Asset Shield to enable users to add "proxy signers" to their multisigs, who are able to interpret transactions and approve/reject according to rules set by the user.
Fetch.AI was one of the first AI agent protocols to deploy and has developed an ecosystem for building, deploying, and using agents on-chain with its FET token and Fetch.AI wallet. The protocol provides a comprehensive suite of tools and applications for working with agents, including in-wallet functionality for interacting with and ordering agents.
Autonolas, whose founders include former members of the Fetch team, is an open marketplace for creating and using decentralized AI agents. Autonolas also provides developers with a toolset for building off-chain hosted AI agents that can plug into multiple blockchains, including Polygon, Ethereum, Gnosis Chain, and Solana. It currently has several active agent proof-of-concept products, including for prediction markets and DAO governance.
SingularityNet is building a decentralized marketplace for AI agents where people can deploy specialized AI agents that other people or agents can hire to perform complex tasks. Others, such as AlteredStateMachine, are building integrations of AI agents with NFTs. Users mint NFTs with randomized attributes that give them strengths and weaknesses across different tasks. These agents can then be trained to enhance certain attributes for use in gaming or DeFi, or as virtual assistants that transact with other users.
Collectively, these projects envision a future ecosystem of agents that work together not only to perform tasks but also to help build artificial general intelligence. A truly sophisticated agent would be able to complete any user task autonomously. For example, rather than having to ensure an agent has already integrated with an external API (such as a travel booking website) before using it, a fully autonomous agent would be able to figure out how to hire another agent to integrate the API and then perform the task. From the user's perspective, there would be no need to check whether an agent can complete a task, because the agent can determine that on its own.
Bitcoin and Artificial Intelligence Agents
In July 2023, Lightning Labs launched a proof-of-concept implementation for using agents on the Lightning Network, the LangChain Bitcoin suite. This product is particularly interesting because it aims to solve a growing problem in the web2 world: gated and expensive API keys for web applications.
The LangChain Bitcoin suite solves this problem by giving developers tools that enable agents to buy, sell, and hold Bitcoin, as well as to query API keys and send micropayments. On traditional payment rails, micropayments are prohibitively costly due to fees, whereas on the Lightning Network agents can send unlimited micropayments every day at minimal cost. When paired with the L402 payment-metered API framework, companies can adjust access fees to their APIs as usage rises and falls, rather than setting a single, costly standard.
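The L402 pattern builds on HTTP status 402 ("Payment Required"). The sketch below shows the general request, pay, and retry loop from the client side, assuming an L402-style challenge header; the `pay_invoice` helper and the header parsing are placeholders rather than Lightning Labs' actual libraries, and exact header formats follow the L402 specification:

```python
import re
import requests  # assumed available; any HTTP client works

def pay_invoice(invoice: str) -> str:
    """Placeholder: pay the Lightning invoice and return the payment preimage.
    In practice this would call out to a Lightning node's API."""
    raise NotImplementedError

def fetch_with_l402(url: str) -> requests.Response:
    resp = requests.get(url)
    if resp.status_code != 402:          # no payment required
        return resp

    # Server challenges with a macaroon (credential) and a Lightning invoice
    challenge = resp.headers.get("WWW-Authenticate", "")
    macaroon = re.search(r'macaroon="([^"]+)"', challenge).group(1)
    invoice = re.search(r'invoice="([^"]+)"', challenge).group(1)

    preimage = pay_invoice(invoice)      # micropayment over Lightning

    # Retry with proof of payment attached
    return requests.get(url, headers={"Authorization": f"L402 {macaroon}:{preimage}"})
```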
In a future where on-chain activity is dominated by agents interacting with agents, something like this will be necessary to ensure agents can interact with one another in a way that is not prohibitively expensive. This is an early example of how agents can be used on permissionless and cost-effective payment rails, opening up possibilities for new markets and economic interactions.
Outlook
The agent field is still nascent. Projects are just beginning to roll out functional agents that can handle simple tasks using their infrastructure, and such access is typically limited to experienced developers and users. Over time, however, one of the biggest impacts AI agents will have on cryptocurrency is improving the user experience across all verticals. Transactions will begin to shift from click-based to text-based, with users able to interact with on-chain agents through LLMs. Teams such as Dawn Wallet have already launched chatbot wallets for users to interact on-chain.
It is also unclear how agents would operate outside of crypto in web2, since financial rails rely on regulated banking institutions that do not operate 24/7 and cannot conduct seamless cross-border transactions. As Lyn Alden has highlighted, the absence of chargebacks and the ability to process microtransactions make crypto rails especially attractive compared with credit cards. However, if agents become a more common means of transacting, existing payment providers and applications will likely move quickly to implement the infrastructure required to operate on existing financial rails, eroding some of the benefits of using cryptocurrency.
For now, agents may be limited to deterministic cryptocurrency transactions, where a given input guarantees a given output. Further development is needed both in the models, which determine these agents' ability to figure out how to accomplish complex tasks, and in tooling, which expands the scope of what they can accomplish. For crypto agents to become useful beyond novel on-chain crypto use cases, wider integration and acceptance of crypto as a form of payment, along with regulatory clarity, will be required. As these components mature, however, agents are poised to become some of the largest consumers of the decentralized computing and zkML solutions described above, receiving and solving any task in an autonomous, non-deterministic manner.
Conclusion
AI brings to cryptocurrency the same innovations we have already seen in web2, enhancing everything from infrastructure development to user experience and accessibility. However, these projects are still early in their development, and the near-term integration of cryptocurrency and AI will be dominated by off-chain integrations.
Products like Copilot are expected to "10x" developer efficiency, and layer-1s and DeFi applications are already partnering with major companies such as Microsoft to launch AI-assisted development platforms. Companies like Cub3.ai and Test Machine are developing AI integrations for smart contract auditing and real-time threat monitoring to enhance on-chain security. And LLM chatbots are being trained on on-chain data, protocol documentation, and applications to give users enhanced accessibility and a better user experience.
For more advanced integrations that truly leverage cryptocurrency's underlying technology, the challenge remains demonstrating that implementing AI solutions on-chain is both technically and economically feasible. Developments in decentralized computing, zkML, and AI agents point to promising verticals that are setting the stage for a deeply interconnected future of cryptocurrency and AI.