Author: Patrick Bush, Matthew Sigel
Source: VanEck
Translation: Shan Ouba, Golden Finance
This article outlines potential revenue scenarios for AI cryptocurrencies through 2030, using a $10.2 billion baseline forecast and highlighting the important role public blockchains can play in driving AI adoption through key capabilities. Please note that VanEck may hold positions in the digital assets described below.
Key points:
In our baseline forecast, cryptocurrency AI revenue is expected to reach $10.2 billion by 2030.
Blockchain technology may become a key driver in the adoption of artificial intelligence and the advancement of decentralized artificial intelligence solutions.
Integration with cryptographic incentives can improve the security and efficiency of artificial intelligence models.
Blockchain could be a solution to AI authentication and data integrity challenges.
Public blockchains will most likely be key to unlocking the widespread adoption of artificial intelligence (AI), and AI applications may become a raison d'être for crypto. This is because cryptocurrencies provide foundational elements that artificial intelligence requires, such as transparency, immutability, well-defined ownership properties, and adversarial testing environments. We believe these properties will prove critical for AI to reach its full potential. Based on estimates of AI growth, our baseline forecast is that AI-focused crypto projects will collect $10.2 billion in annual revenue by 2030. In this article, we speculate on the role cryptocurrencies will play in promoting AI adoption and the value cryptocurrencies will derive from AI businesses.
We found that the best applications of cryptocurrencies in artificial intelligence are:
Provide decentralized computing resources
Model testing, fine-tuning and verification
Copyright protection and data integrity
Artificial Intelligence Security
Identity
Cryptocurrency is very useful for artificial intelligence because it already solves many of the current and future challenges of artificial intelligence. In essence, cryptocurrencies solve the coordination problem. Cryptocurrencies bring together human, computational, and monetary resources to run open source software. It does this by providing rewards to those who create, support and use each blockchain network in the form of tokens tied to the value of each network. This reward system can be used to guide different components of the AI value stack. An important implication of combining cryptography with artificial intelligence is to leverage cryptocurrency incentives to develop the necessary physical infrastructure, such as GPU clusters, dedicated to training, fine-tuning, and supporting the use of generative models.
Blockchain also brings transparency to digital ownership, which may help resolve some of the open-source software issues that artificial intelligence will face in court. A high-profile example is the New York Times' lawsuit against OpenAI and Microsoft. Cryptography can transparently prove ownership and copyright for data owners, model builders, and model users. This transparency also extends to publishing mathematical proofs of a model's validity onto a public blockchain. Finally, thanks to unforgeable digital signatures and data integrity, we believe public blockchains will help mitigate the identity and security issues that would otherwise undermine the effectiveness of AI.
Source: Morgan Stanley, Bloomberg, VanEck Research, as of January 29, 2024. Past performance is no guarantee of future results. The information, valuation scenarios and price targets provided in this blog are not intended to serve as financial advice or any call to action, recommendation to buy or sell, or as a prediction of the future performance of the AI business. Actual future performance is unknown and may differ materially from the hypothetical results described herein. There may be risks or other factors not considered in the scenarios presented that could hinder performance. These are merely simulation results based on our research and are for illustrative purposes only. Please do your own research and draw your own conclusions.
To forecast the crypto-AI market, we first estimate the total addressable market (TAM) of business productivity gains from AI. Our baseline for this number comes from McKinsey's 2022 assumptions. We then apply economic and productivity growth assumptions to the McKinsey data and find a base-case TAM of $5.85T in 2030. In this base case, we assume AI productivity growth runs 50% above GDP growth of 3%. We then forecast AI market penetration in global enterprises (33% in the base case) and apply this to our initial TAM, estimating that AI will deliver $1.93T in productivity gains to enterprises. To calculate revenue for all AI businesses, we assume that 13% of these productivity gains are captured by AI businesses (or spent by enterprise consumers) as revenue. We estimate this AI revenue share by applying the average share of revenue spent on labor costs among S&P 500 companies, assuming AI spending should be similar. The next part of our analysis applies Bloomberg Intelligence's forecast of the AI value-stack distribution to estimate annual revenue for each AI business group. Finally, we provide specific estimates of the cryptocurrency market share of each AI business to arrive at final figures for each case and each market.
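As a sanity check, the base-case funnel described above can be reproduced with a few lines of arithmetic. This is an illustrative reconstruction of the stated assumptions, not VanEck's actual model:

```python
# Base-case crypto-AI revenue funnel, reconstructed from the article's
# stated assumptions. All dollar figures are in billions of USD.
tam_2030 = 5_850.0       # TAM of AI business productivity gains ($5.85T)
penetration = 0.33       # base-case AI penetration of global enterprises
capture_rate = 0.13      # share of productivity gains captured as AI revenue
software_share = 0.50    # software's share of total AI revenue (Bloomberg)
crypto_share = 0.05      # base-case crypto share of AI software revenue

productivity_gains = tam_2030 * penetration        # ~$1.93T to enterprises
ai_revenue = productivity_gains * capture_rate     # ~$251B total AI revenue
software_revenue = ai_revenue * software_share     # ~$125.5B AI software
crypto_models = software_revenue * crypto_share    # ~$6.27B crypto AI models
```

Each downstream market-share estimate in this article (software, compute, identity, security) is then applied against one of these intermediate totals.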
We envision a future where decentralized AI models built from open-source public repositories are applied to every imaginable use case. In many cases, these open-source models outperform centralized AI creations. This assumption stems from the observation that open-source communities bring together enthusiasts and hobbyists with unique motivations to improve things. We have already seen open-source internet projects disrupt traditional businesses. The best examples of this phenomenon are Wikipedia, which effectively ended the commercial encyclopedia business, and Twitter, which disrupted the news media. These open-source communities succeed where traditional enterprises fail because open-source groups coordinate and inspire people to deliver value through a combination of social influence, ideology, and group solidarity. In short, they care.
Combining open-source AI models with cryptocurrency incentives can expand the reach of these emerging communities and give them the financial power to build the infrastructure needed to attract new participants. Applying this premise to artificial intelligence would be a fascinating combination of passion and financial resources. AI models will be tested in cryptocurrency-incentivized competitions, establishing an environment of model-evaluation benchmarks. In this environment, the most effective models and evaluation criteria win, because the value of each model is clearly quantified. In our base case, we expect blockchain-generated AI models to account for 5% of all AI software revenue. Total AI revenue includes hardware, software, services, advertising, games, and more; of this, we expect software to account for about half of all AI revenue, or about $125.5B. Therefore, we estimate that a 5% market share for open-source models equates to $6.27B in revenue spent on crypto token-backed AI models.
We estimate that the TAM for compute (or AI infrastructure-as-a-service) for fine-tuning, training, and inference may reach $47.44B by 2030. As AI becomes widely adopted, it will become integral to many functions of the world economy, and the provision of computing and storage can be envisioned as a utility similar to electricity generation and distribution. In this dynamic, the vast majority of "baseload" will come from GPU cloud hyperscalers such as Amazon and Google, whose market share will approximate a Pareto distribution of 80%. We see blockchain-coordinated backend server infrastructure catering to specialized needs and acting as a "peaker" provider during periods of high network demand. For producers of custom AI models, crypto storage and compute providers offer benefits such as on-demand service delivery, shorter SLA lock-in periods, more customized compute environments, and better support for latency-sensitive workloads. Additionally, decentralized GPUs can be seamlessly integrated with decentralized AI models in smart contracts, enabling permissionless use cases where AI agents scale their own computing needs. Thinking of blockchain-provisioned GPUs as the Uber/Lyft of AI computing infrastructure, we believe blockchain-provided computing and storage will account for 20% of the non-hyperscale market for AI infrastructure, potentially reaching $1.90B.
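The compute estimate above follows the same pattern (an illustrative reconstruction of the stated assumptions; figures in billions of USD):

```python
# Crypto's share of AI compute, reconstructed from the assumptions above.
compute_tam = 47.44          # AI infrastructure-as-a-service TAM by 2030 ($B)
hyperscaler_share = 0.80     # "baseload" share held by GPU cloud hyperscalers
crypto_share = 0.20          # blockchain share of the non-hyperscale remainder

non_hyperscale = compute_tam * (1 - hyperscaler_share)  # ~$9.49B
crypto_compute = non_hyperscale * crypto_share          # ~$1.90B
```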
Defining "identity" for AI agents and models through provable on-chain humanity can be seen as a Sybil-defense mechanism for the world's computer networks. We can estimate the cost of this service by examining the fees associated with securing different blockchain networks. In 2023, the security costs of Bitcoin, Ethereum, and Solana, as measured by the value of each network's inflationary issuance, were approximately 1.71%, 4.3%, and 5.57%, respectively. Conservatively, we deduce that identity should account for around 3.5% of the AI market. Applied to the $251B total AI revenue TAM, this corresponds to $8.78B in annual revenue. Because we believe cryptocurrencies offer the best solution to the identity problem, we expect crypto to capture 10% of this end market, for annual revenue of approximately $878 million.
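The identity arithmetic works out as follows; note that the 3.5% share reconciles with the ~$251B base-case total AI revenue figure (an illustrative sketch, figures in billions of USD):

```python
# Identity revenue estimate, reconstructed from the assumptions above.
ai_revenue_tam = 251.0    # base-case total AI revenue TAM ($B)
identity_share = 0.035    # deduced share of the AI market going to identity
crypto_capture = 0.10     # crypto's expected share of the identity market

identity_tam = ai_revenue_tam * identity_share    # ~$8.78B
crypto_identity = identity_tam * crypto_capture   # ~$0.88B, i.e. ~$878M
```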
AI security is expected to become another important component of AI deployments, with the basic requirement of verifying that the correct model is running on uncorrupted, relevant, and up-to-date data. As AI expands into applications where human lives are at stake, such as self-driving cars, factory robots, and healthcare systems, the tolerance for failure shrinks. The need for accountability in the event of an accident will drive the insurance market to require concrete proof of safety. Public blockchains are ideal for this function because they can publish "proofs of security" on an immutable ledger visible to anyone. This business can be considered analogous to compliance for financial institutions. Considering that U.S. commercial and investment banks generate $660B in revenue while spending $58.75B on compliance (8.9% of revenue), we estimate that AI security should account for approximately $22.34B of the $251B AI TAM. While cryptocurrencies have the potential to enhance AI security, given the U.S. government's focus on AI, we believe much of AI compliance will be centralized. Therefore, we estimate that cryptocurrencies will capture around 5% of this market, or around $1.12B.
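The security estimate maps the bank-compliance ratio onto the AI TAM; reconstructed below (illustrative, figures in billions of USD):

```python
# AI-security revenue estimate, using the bank-compliance analogy above.
bank_revenue = 660.0      # U.S. commercial/investment bank revenue ($B)
compliance_spend = 58.75  # those banks' compliance costs ($B)
ai_tam = 251.0            # base-case total AI revenue TAM ($B)
crypto_share = 0.05       # crypto's expected share of AI security

compliance_ratio = compliance_spend / bank_revenue  # ~8.9% of revenue
security_tam = ai_tam * compliance_ratio            # ~$22.34B
crypto_security = security_tam * crypto_share       # ~$1.12B
```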
Cryptocurrencies can orchestrate their enormous social and financial Advantages should be applied to democratize access to computing, thereby solving the pain points currently plaguing AI developers. In addition to high costs and limited access to quality GPUs, AI model builders currently face other thorny issues. These include vendor lock-in, lack of security, limited compute availability, poor latency, and geofencing required by state laws.
Cryptocurrency’s ability to meet artificial intelligence’s demand for GPUs stems from its ability to pool resources through token incentives. The Bitcoin network has a token value of $850B and an equity value of $20B, which is a testament to this capability. Therefore, both current Bitcoin miners and the promising decentralized GPU market have the potential to add significant value to artificial intelligence by providing decentralized computing.
A useful analogy for understanding the provision of GPUs via blockchain is the power generation business. Simply put, there are entities that operate large, expensive plants that can reliably generate electricity to meet most grid needs. These "baseload" plants have stable demand but require significant capital investments to build, resulting in relatively low but guaranteed returns on capital. Supplementing the base load is another type of generator called "peak power". These companies provide electricity when demand exceeds baseload generation capacity. This involves high-cost, small-scale energy production strategically positioned close to the demand for that energy. We expect similar dynamics to emerge in the "on-demand computing" space.
Bitcoin and other proof-of-work cryptocurrencies, like artificial intelligence, have high energy demands. This energy must be generated, harvested, transported, and converted into usable electricity to power mining equipment and computing clusters. The supply chain requires miners to make significant investments in power plants, power purchase agreements, grid infrastructure, and data center facilities. The monetary incentives of mining PoW cryptocurrencies have produced many globally distributed Bitcoin miners with energy and power rights and integrated grid architecture. Much of this energy comes from lower-cost, carbon-intensive sources that society otherwise avoids. Therefore, the most compelling value proposition Bitcoin miners can offer is low-cost energy infrastructure to power AI backend infrastructure.
Hyperscale computing providers such as AWS and Microsoft have pursued strategies of investing in vertically integrated operations and building their own energy ecosystems. Big tech companies have moved upstream, designing their own chips and sourcing their own energy, much of it renewable. Currently, data centers consume two-thirds of the renewable energy available to U.S. businesses. Microsoft and Amazon have both committed to 100% renewable energy by 2025. However, if computing demand exceeds expectations, as some say it will, the number of AI-focused data centers could double by 2027, and capital expenditures could triple current estimates. Big tech companies already pay $0.06-0.10/kWh for electricity, much more than competitive Bitcoin miners typically pay ($0.03-0.05/kWh). If the energy demands of AI exceed the current infrastructure plans of big tech companies, then Bitcoin miners' power-cost advantage over hyperscalers could increase significantly. Miners are increasingly attracted to the high-margin artificial intelligence business associated with GPU supply. Notably, Hive reported in October that its HPC and AI business generated 15 times more revenue than Bitcoin mining on a per-megawatt basis. Other Bitcoin miners seizing the opportunity in artificial intelligence include Hut 8 and Applied Digital.
Bitcoin miners have experienced growth in this new market, which has helped diversify revenue and strengthen earnings reports. During Hut 8's Q3 2023 analyst call, CEO Jaime Leverton said: "In our HPC business, we created some momentum in the third quarter with the addition of new customers and growth with existing customers. Last week, we launched an on-demand cloud service that provides Kubernetes-based applications supporting artificial intelligence, machine learning, visual effects, and rendering workloads to customers seeking HPC services from our GPUs. The service puts control in the hands of our customers while reducing provisioning time from days to minutes, which is particularly attractive for those looking for short-term HPC projects." Hut 8's Q3 2023 HPC revenue was $4.5 million, accounting for more than 25% of the company's revenue in the period. Growing demand for HPC services and new products should aid future growth of this business line, and with the Bitcoin halving approaching, HPC revenue could soon surpass mining revenue, depending on market conditions.
While the business sounds promising, Bitcoin miners pivoting to artificial intelligence may be hampered by a lack of data center construction skills or an inability to expand power supply. These miners may also face operational overhead from the cost of hiring new data-center-focused sales staff. Additionally, many current mining operations lack adequate network latency or bandwidth, because optimizing for cheap energy places them in remote locations that often lack high-speed fiber optic connections.
We also see a long tail of compute-centric crypto projects that will occupy a small but significant portion of the AI server resource market. These entities will coordinate computing clusters outside the hyperscalers to deliver a value proposition tailored to the needs of upstart AI builders. The benefits of decentralized computing include customizability, open access, and better contract terms. These blockchain-based computing companies enable smaller AI players to avoid the huge expense and general unavailability of high-end GPUs like the H100 and A100. Crypto AI businesses will meet demand by creating networks of physical infrastructure built around crypto token incentives, while offering proprietary IP in the form of software infrastructure that optimizes compute usage for AI applications. These blockchain computing projects will use market mechanisms and crypto rewards to source cheaper computation from independent data centers, entities with excess computing power, and former PoW miners. Some projects providing decentralized computation for AI models include Akash, Render, and io.net.
Akash daily revenue. Source: Cloudmos, as of January 30, 2024. Past performance is no guarantee of future results.
Akash is a project built on Cosmos that can be considered a general decentralized "supercloud," providing CPU, GPU, memory, and storage. In effect, it is a two-sided marketplace connecting cloud service users and cloud service providers. Akash's software is designed to coordinate computing supply and demand while creating tools that facilitate the training, fine-tuning, and operation of AI models. Akash also ensures that buyers and sellers in the marketplace fulfill their obligations honestly. Akash is coordinated through its $AKT token, which can be used to pay for cloud services at a discount; $AKT also serves as an incentive mechanism for GPU compute providers and other network participants. On the supply side, Akash has made great strides in adding compute vendors, with 65 different vendors in the Akash marketplace. Compute demand, however, was sluggish in the lead-up to the debut of Akash's AI supercloud on August 31, 2023.
Render, which recently migrated to Solana, initially focused on connecting artists with distributed providers of GPU power to render images and video. However, Render has begun pointing its decentralized GPU clusters at machine learning workloads to support deep learning models. Through network improvement proposal RNP-004, Render now has an API for connecting to external networks such as io.net, which will leverage Render's GPU network for machine learning. A subsequent Render community proposal was adopted to allow access to its GPUs through Beam and FEDML for machine learning tasks. As a result, Render has become a decentralized facilitator of GPU workloads, coordinated through $RNDR payments to providers and $RNDR incentives to entities running the network's back-end infrastructure.
Io.net GPU Price Comparison. Source: io.net as of January 4, 2024.
Another interesting project on Solana is io.net, which can be thought of as a DePIN, or decentralized physical infrastructure network. io.net's purpose is also to provide GPUs, but its focus is solely on applying GPUs to power AI models. Beyond simply coordinating computation, io.net has added more services to its core stack. Its system claims to handle all components of AI, including creation, consumption, and fine-tuning, to properly facilitate and troubleshoot AI workloads across the network. The project leverages other decentralized GPU networks, such as Render and Filecoin, alongside its own GPUs. Although io.net currently lacks a token, one is planned to launch in the first quarter of 2024.
However, leveraging distributed computing to handle the massive datasets (typically 633TB+) required to train deep learning models remains a challenge. Computer systems scattered around the world also present new obstacles to parallel model training due to latency and differences in machine capabilities. One company moving aggressively into the open-source base model market is Together, which is building a decentralized cloud to host open-source AI models. Together aims to enable researchers, developers, and companies to leverage and improve AI through an intuitive platform combining data, models, and computation, expanding the accessibility of AI and powering the next generation of technology companies. Working with leading academic research institutions, Together built the Together Research Computer to let labs pool computing for AI research. The company also collaborated with Stanford's Center for Research on Foundation Models (CRFM) to create the Holistic Evaluation of Language Models (HELM), a "living benchmark" designed to increase transparency in artificial intelligence by providing a standardized framework for evaluating foundation models.
Since Together's founding, founder Vipul Ved Prakash has spearheaded multiple projects, including 1) GPT-JT, an open LLM of under 6B parameters trained over 1Gbps network links; 2) OpenChatKit, a powerful open-source foundation for creating specialized and general-purpose chatbots; and 3) RedPajama, a project to create leading open-source models intended as a foundation for both research and commercial applications. The Together platform combines open models on commodity hardware, a decentralized cloud, and a comprehensive developer cloud, bringing together computing sources including consumer miners, crypto mining farms, T2-T4 cloud providers, and academic compute.
GPT-JT performance. Source: The Decoder as of January 4, 2024.
We believe decentralized, democratized cloud computing solutions like Together can significantly cut the cost of building new models, and could therefore disrupt and compete with established giants such as Amazon Web Services, Google Cloud, and Azure. For context, comparing AWS Capacity Blocks and AWS p5.48xlarge instances to a Together GPU cluster configured with the same number of H100 SXM5 GPUs, Together is priced approximately 4x lower than AWS.
As open LLMs become more accurate and more widely adopted, Together may become the industry standard for open-source models, much as Red Hat did for Linux. Competitors in the space include model providers Stability AI and HuggingFace, and AI cloud providers Gensyn and Coreweave.
Blockchains and cryptocurrency incentives harness network effects, and rewards tied to the size of those network effects, to get people to do useful work. In the context of Bitcoin mining, the task is securing the Bitcoin network through the use of expensive electricity, technical manpower, and ASIC machines. This coordination of economic resources provides a Sybil-defense mechanism against economic attacks on Bitcoin. In exchange, miners who coordinate these resources receive $BTC. The greenfield for useful work in AI, however, is much larger, and several projects are already driving improvements in AI and machine learning models.
The most original of these projects is Numerai. Today, Numerai can be considered a decentralized data science tournament aimed at identifying the best machine learning models for optimizing financial returns by building stock portfolios. In each epoch, anonymous Numerai participants are given access to hidden raw data and asked to use it to build the best-performing stock portfolio. To participate, users must not only submit predictions but also stake NMR tokens behind their models' predictions to prove the value of those models. Other users can also stake tokens on the models they believe perform best. The output of each staked, submitted model is then fed into a machine learning algorithm to create a meta-model that informs the Numerai One hedge fund's investment decisions. Users who submit "inferences" with the best information coefficient, or validity, are rewarded with NMR tokens, while those who staked the worst models have their tokens slashed (confiscated and recycled to reward winners).
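The stake-weighted meta-model and slashing mechanics described above can be sketched roughly as follows. This is a toy illustration, not Numerai's actual algorithm; the model names, scores, and stakes are hypothetical:

```python
# Toy sketch of a stake-weighted meta-model with slashing, in the spirit of
# the Numerai mechanism described above. All inputs are hypothetical.
def meta_model(submissions):
    """Blend model predictions, weighting each model by its staked NMR."""
    total_stake = sum(s["stake"] for s in submissions)
    return {
        asset: sum(s["stake"] * s["preds"][asset] for s in submissions) / total_stake
        for asset in submissions[0]["preds"]
    }

def settle(submissions, slash_fraction=0.5):
    """Reward the best-scoring model; slash the worst and recycle its stake."""
    best = max(submissions, key=lambda s: s["score"])
    worst = min(submissions, key=lambda s: s["score"])
    slashed = worst["stake"] * slash_fraction
    worst["stake"] -= slashed
    best["stake"] += slashed  # slashed stake is reused to reward winners
    return best["name"], worst["name"]

subs = [
    {"name": "alpha", "stake": 100.0, "score": 0.04, "preds": {"AAPL": 0.7, "MSFT": 0.4}},
    {"name": "beta",  "stake": 300.0, "score": 0.01, "preds": {"AAPL": 0.2, "MSFT": 0.6}},
]
blended = meta_model(subs)   # beta's larger stake dominates the blend
winner, loser = settle(subs) # alpha's higher score wins beta's slashed stake
```

The key property is that stake, not reputation, weights the ensemble, so participants are financially penalized for overconfident bad models.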
Subnets and use cases on Bittensor. Source: https://taostats.io/api/ As of January 2, 2024.
Bittensor is a similar project that massively extends Numerai's core concepts. Bittensor can be considered the "Bitcoin of machine intelligence," as it is a network providing economic incentives for AI/ML models. This is done by "miners," who build AI models, and "validators," who evaluate the quality of those models' outputs. Bittensor's architecture consists of a base network and many smaller subnetworks ("subnets"), each focused on a different area of machine intelligence. Validators pose various questions or requests to miners on these subnets to assess the quality of their AI models.
The best-performing model will receive the highest TAO token reward, while validators are compensated for their accurate evaluation of miners. At a high level, both validators and miners must stake tokens to participate in each subnet, and each subnet's proportion of total staking determines how many TAO tokens it receives from the total inflation of all Bittensors. Therefore, each miner not only has an incentive to optimize his model to win the most rewards, but also has an incentive to focus his model on the best artificial intelligence domain subnet. Additionally, since miners and validators must maintain funds in order to participate, everyone must exceed the capital cost barrier or exit the system.
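The stake-proportional emission logic described above can be sketched as follows. This is a simplified illustration only; the subnet names, stake amounts, and per-epoch inflation figure are hypothetical, not actual Bittensor parameters:

```python
# Toy sketch of stake-proportional TAO emissions across subnets, in the
# spirit of the mechanism described above. All numbers are hypothetical.
def subnet_emissions(subnet_stakes, epoch_inflation):
    """Split an epoch's TAO inflation across subnets pro rata by stake."""
    total_stake = sum(subnet_stakes.values())
    return {name: epoch_inflation * stake / total_stake
            for name, stake in subnet_stakes.items()}

stakes = {"text-prompting": 500_000, "taoshi": 200_000, "storage": 100_000}
emissions = subnet_emissions(stakes, epoch_inflation=7_200)
# Each subnet's emission scales with its fraction of total network stake,
# so miners gravitate toward the subnets the market values most.
```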
As of January 2024, there are 32 different subnets, each dedicated to a specific area of machine learning or artificial intelligence. For example, Subnet 1 is a text-prompting LLM similar to ChatGPT. On this subnet, miners run various versions of LLMs tuned to respond best to prompts from validators, who evaluate the quality of responses. On Subnet 8, called "Taoshi," miners submit short-term price predictions for Bitcoin and various financial assets. Bittensor also has subnets dedicated to human language translation, storage, audio, web scraping, machine translation, and image generation. Subnet creation is permissionless: anyone with 200 TAO can create a subnet. Subnet operators are responsible for creating the evaluation and reward mechanisms for each subnet's activities. For example, Opentensor, the foundation behind Bittensor, and Cerebras evaluate miners' LLM output on Subnet 1.
While these subnets are initially fully subsidized by inflationary incentives, each subnet must eventually sustain itself financially. Therefore, subnet operators and validators must coordinate to create tools allowing external users to access each subnet's services for a fee. As inflationary TAO rewards decrease, each subnet will increasingly rely on external revenue to sustain itself. In this competitive environment, there is direct economic pressure to create the best models, as well as incentives for others to create profitable real-world applications of those models. Bittensor is unlocking the potential of AI by leveraging scrappy small businesses to identify and monetize AI models. As noted Bittensor evangelist MogMachine puts it, this dynamic can be viewed as a "Darwinian competition for artificial intelligence."
Another interesting area uses crypto to incentivize the creation of artificial intelligence agents, which are programmed to complete tasks autonomously on behalf of humans or other computer programs. These entities are essentially adaptive computer programs designed to solve specific problems. "Agent" is a catch-all term encompassing chatbots, automated trading strategies, game characters, and even virtual-universe assistants. One notable project in this space is Altered State Machine, a platform that uses NFTs to create owned, powered, and trained AI agents. In Altered State Machine, users create their "agents" and then "train" them using a decentralized cluster of GPUs. These agents are optimized for specific use cases. Another project, Fetch.ai, is a platform for creating agents customized to each user's needs. Fetch.ai is also a SaaS business that allows agents to be registered, leased, or sold.
Source: Artemis XYZ As of January 10, 2024. Past performance is no guarantee of future results.
2023 was a landmark year for new AI models, with OpenAI launching ChatGPT, Meta launching LLAMA-2, and Google launching Bard. Owing to the promise of deep learning, there were more than 18,563 AI-related startups in the United States as of June 2023. These startups and others have produced thousands of new base and fine-tuned models. However, in a space where $1 of every $4 in venture capital goes to AI-related companies, the proliferation of so many new entities should prompt serious questions:
Who actually creates and owns each model?
Is the output actually produced by the specified model?
Is this model really as effective as advertised?
What is the data source for each model and who owns that data?
Does training, fine-tuning, and/or inference violate any copyright or data rights?
Both investors in and users of these models should want certainty on these questions. Currently, many benchmarks exist for different components of LLM output, such as HumanEval for code generation, Chatbot Arena for LLM assistant tasks, and the ARC Benchmark for LLM reasoning capabilities. However, despite attempts at model transparency, such as Hugging Face's Open LLM Leaderboard, there is no concrete proof of a model's validity, its ultimate provenance, or the origin of its training/inference data. Not only can benchmarks be gamed, but there is no way to determine whether a particular model is actually running (as opposed to an API quietly routing to another model), nor any guarantee that the leaderboard itself is honest.
This is where public blockchains, artificial intelligence, and a cutting-edge field of mathematics called zero-knowledge (zk) proofs come together. A zk proof is an application of cryptography that allows someone to prove, to a desired level of mathematical certainty, that a statement they make about data is correct without revealing the underlying data to anyone. Statements can be simple (such as rankings) but can extend to complex mathematical calculations. For example, someone can not only demonstrate knowledge of the relative wealth of a sample without revealing that wealth to another party, but also demonstrate correct calculation of the group's mean and standard deviation. Essentially, you can prove that you understand the data and/or made true assertions using it, without revealing the data's details or how you performed the calculations. Outside of AI, zk proofs have been used to scale Ethereum, allowing transactions to occur off-chain on layer-2 blockchains. Recently, zk proofs have been applied to deep learning models to prove:
A model was trained, or an inference output produced, using specific data (and, equally important, which data/sources were not used)
A specific model was used to generate an inference
The inference output has not been tampered with
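To make the "prove without revealing" idea concrete, the toy sketch below implements a non-interactive Schnorr proof (via the Fiat-Shamir heuristic): the prover shows they know a secret exponent x behind a public value y = g^x mod p without ever revealing x. This is an illustration of the zk principle only; production ZKML systems such as EZKL's Halo2-based zk-SNARKs prove entire arithmetic circuits over far larger fields, and the tiny group parameters here are chosen purely for readability.

```python
import hashlib
import secrets

# Toy parameters: p = 2q + 1 with p, q prime; g = 4 generates the
# order-q subgroup. Real systems use elliptic curves, not this group.
p, q, g = 2039, 1019, 4

def prove(x: int) -> tuple[int, int, int]:
    """Return (y, c, s): the public key y = g^x and a proof of knowing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    t = pow(g, r, p)                      # commitment to randomness r
    c = int.from_bytes(hashlib.sha256(f"{t}|{y}".encode()).digest(), "big") % q
    s = (r + c * x) % q                   # response; x itself stays hidden
    return y, c, s

def verify(y: int, c: int, s: int) -> bool:
    """Accept iff g^s == t * y^c, recomputing t and the challenge."""
    t = (pow(g, s, p) * pow(y, -c, p)) % p   # recover commitment
    c2 = int.from_bytes(hashlib.sha256(f"{t}|{y}".encode()).digest(), "big") % q
    return c == c2

secret_x = secrets.randbelow(q - 1) + 1
y, c, s = prove(secret_x)
print(verify(y, c, s))   # True: statement verified, x never revealed
```

The verifier learns only that the prover knows *some* x consistent with y; swapping the exponentiation for a model-evaluation circuit is, conceptually, what ZKML systems do at scale.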
zk proofs can be published to a public, permanent blockchain and verified via smart contracts. The result is that blockchains can publicly and irrefutably attest to important properties of AI models. Two cutting-edge projects applying zk proofs to AI, a field called "zero-knowledge machine learning" (ZKML), are EZKL and Modulus. EZKL uses the Halo2 proving system to generate zk-SNARKs, zero-knowledge proofs that can then be publicly verified on Ethereum's EVM. EZKL CEO Jason Morton notes that although the model size EZKL can currently prove is relatively small, around 100M parameters versus the 175B of a model like GPT-3, he regards this as an "engineering problem" rather than a "technical limitation". EZKL believes it can overcome the proving bottleneck by splitting proofs for parallel execution, reducing memory constraints and computation time. Indeed, Jason Morton believes that one day "verifying a model will be as simple as signing a blockchain transaction."
ZKML proofs applied to AI can solve important pain points in AI deployment, including copyright disputes and AI safety. As the New York Times' recent lawsuit against OpenAI and Microsoft illustrates, copyright law will apply to data ownership, and AI projects will be forced to provide proof of the provenance of their data. ZKML could be used to quickly resolve disputes over model and data ownership in court. In fact, one of the best applications of ZKML is allowing data/model marketplaces like Ocean Protocol and SingularityNET to prove the authenticity and validity of their listings.
AI models will eventually expand into areas where accuracy and safety are critical. An estimated 5.8B AI edge devices will exist by 2027, potentially including heavy machinery, robots, autonomous drones, and vehicles. As machine intelligence is applied to things that can injure and kill, it becomes important to prove that a reputable model, built on high-quality data from reliable sources, is running on the device. While generating continuous real-time proofs from these edge devices and publishing them to a blockchain may be economically and technically challenging, validating the model on activation or publishing proofs periodically may be more feasible. The Zupass project from 0xPARC has already built primitive proofs derived from "proof-carrying data" that can cheaply establish proofs of facts occurring on edge devices. Currently this is applied to event attendance, but it can be expected to migrate soon to other areas such as identity and even healthcare.
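The periodic-attestation idea above can be sketched minimally: the device publishes an authenticated digest binding its model hash, its identity, and a timestamp, which a verifier checks against an approved model hash. Everything here is an illustrative assumption — a real deployment would use asymmetric signatures (or zk proofs) anchored on a blockchain, not a shared HMAC key in memory.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key, assumed provisioned at manufacture.
DEVICE_KEY = b"device-secret-key"

def attest(model_bytes: bytes, device_id: str, ts: float) -> dict:
    """Device side: bind (model hash, device, time) into an attestation."""
    model_hash = hashlib.sha256(model_bytes).hexdigest()
    payload = json.dumps({"device": device_id, "model": model_hash, "ts": ts},
                         sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_attestation(record: dict, approved_model_hash: str) -> bool:
    """Verifier side: check authenticity, then check the model is approved."""
    expected = hmac.new(DEVICE_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["tag"]):
        return False  # forged or corrupted attestation
    return json.loads(record["payload"])["model"] == approved_model_hash

model = b"...weights of an approved model..."
rec = attest(model, "drone-7", time.time())
approved = hashlib.sha256(model).hexdigest()
print(verify_attestation(rec, approved))                              # True
print(verify_attestation(rec, hashlib.sha256(b"rogue").hexdigest()))  # False
```

An insurer or liability court could replay such records against the chain of approved model hashes, which is precisely the dispute-resolution value discussed in the surrounding text.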
Robot-assisted surgery. Source: MIT Technology Review as of January 30, 2024.
From the perspective of businesses that could be held liable for equipment failure, having verifiable evidence that their model was not the source of a costly accident seems ideal. Likewise, from an insurance perspective, it may become economically necessary to validate and prove the use of reliable models trained on genuine data. Similarly, in a world of AI deepfakes, relying on cameras, phones, and computers whose actions are verified and certified via blockchain may become the norm. Naturally, proofs of the authenticity and accuracy of these devices should be posted to a public, open-source ledger to prevent tampering and fraud.
While these proofs hold great promise, they are currently limited by gas costs and computational overhead. Submitting a proof on-chain costs approximately 300-500k gas (roughly $35-58 at current ETH prices). From a computational perspective, EigenLayer's Sreeram Kannan estimates that "a proof computation that would cost $50 to run on AWS would cost about 1,000,000 times more using current ZK proof technology." The development of zk proofs has been much faster than anyone expected just a few years ago, but there is still a long way to go before real use cases open up. Anyone curious about ZKML applications can already participate in a decentralized singing competition judged by a proven on-chain smart-contract model and have their results permanently uploaded to the blockchain.
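The quoted dollar range follows from simple unit conversion: USD cost = gas × gas price (gwei) × 10⁻⁹ ETH/gwei × ETH/USD. The gas price and ETH price below are assumptions chosen to reproduce the article's $35-58 range, not market quotes.

```python
def proof_cost_usd(gas: int, gas_price_gwei: float, eth_usd: float) -> float:
    """USD cost of a transaction: gas * gwei price * 1e-9 ETH/gwei * USD/ETH."""
    return gas * gas_price_gwei * 1e-9 * eth_usd

# Assumed market conditions (illustrative, not quotes):
GAS_PRICE_GWEI = 50
ETH_USD = 2_300

for gas in (300_000, 500_000):
    print(f"{gas:,} gas -> ${proof_cost_usd(gas, GAS_PRICE_GWEI, ETH_USD):.2f}")
# 300,000 gas -> $34.50
# 500,000 gas -> $57.50
```

At a lower gas price or ETH price the same proof gets proportionally cheaper, which is why proof costs are usually discussed in gas rather than dollars.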
One possible consequence of broad, advanced machine intelligence is that autonomous agents become the most prolific users of the Internet. The release of AI agents could see entire networks disrupted by purposeful bot-generated spam, or even harmless task-based agents clogging the network. Solana once saw 100 gigabytes of data traffic per second as bots competed for an arbitrage opportunity worth roughly $100,000. Imagine the flood of web traffic when AI agents can hold millions of corporate websites hostage and extort billions of dollars. This suggests that the future Internet will impose restrictions on non-human traffic. One of the best ways to limit such attacks is to impose an economic tax on the overuse of cheap resources. But how do we determine the best framework for charging for spam, and how do we determine humanity?
Fortunately, blockchains already employ built-in defenses against AI-bot-style Sybil attacks. A combination of metering and charging non-human users would be an ideal implementation, while requiring modestly compute-heavy work (like Hashcash) would deter bots. As for proof of humanity, blockchains have long struggled to overcome anonymity in order to unlock activities such as undercollateralized lending and other reputation-based services.
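A Hashcash-style scheme, mentioned above as a bot deterrent, can be written in a few lines: the client must find a nonce whose hash of (challenge, nonce) has a required number of leading zero hex digits. The cost is negligible for one human-scale request but compounds brutally for a bot issuing millions, which is exactly the "economic tax on cheap resources" idea; the difficulty level and challenge string here are arbitrary choices for illustration.

```python
import hashlib
import itertools

def mint(challenge: str, difficulty: int) -> int:
    """Client side: search for a nonce with `difficulty` leading zero hex
    digits in SHA-256(challenge:nonce). Expected work ~ 16**difficulty."""
    prefix = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce

def check(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server side: verification costs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mint("GET /api/resource", 4)          # ~65k hashes on average
print(check("GET /api/resource", nonce, 4))   # True
```

The asymmetry — expensive to mint, one hash to verify — is what makes proof-of-work usable as a spam meter.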
One approach gaining momentum for proving identity uses JSON Web Tokens (JWTs). JWTs are OAuth credentials, similar to cookies, generated when you log in to sites like Google; they let you assert your Google identity to sites across the Internet while you are signed in. zkLogin, created by the L1 blockchain Sui, allows users to link their wallet private keys and actions to a Google or Facebook account that generates a JWT. zkP2P extends this concept further, using JWTs to let users exchange fiat for cryptocurrency permissionlessly on the Base blockchain. This works by confirming peer-to-peer cash transfers via the payment app Venmo: once confirmed through the JWT in the confirmation email, USDC tokens escrowed by a smart contract are released. The upshot is that both projects establish strong ties to off-chain identities: zkLogin connects wallet addresses to Google identities, while zkP2P is only available to Venmo's KYC'd users. While neither offers guarantees robust enough to fully establish on-chain identity, they create important building blocks that others can use.
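Structurally, a JWT is just three base64url-encoded segments — header.payload.signature — and the payload carries the identity claims (issuer, subject, email) that systems like zkLogin bind wallets to. The sketch below builds a sample token and decodes its claims; all field values are hypothetical, and a real verifier must of course check the provider's signature (e.g. Google's RS256 keys) before trusting anything.

```python
import base64
import json

def b64url(data: dict) -> str:
    """Base64url-encode a JSON object with JWT's padding stripped."""
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Hypothetical sample token (signature omitted for illustration):
header  = {"alg": "RS256", "typ": "JWT"}
payload = {"iss": "https://accounts.google.com",  # who issued the token
           "sub": "110169484474386276334",        # stable user identifier
           "email": "alice@example.com"}
token = f"{b64url(header)}.{b64url(payload)}.signature-goes-here"

def decode_claims(jwt: str) -> dict:
    """Decode the payload segment (signature verification omitted here)."""
    seg = jwt.split(".")[1]
    seg += "=" * (-len(seg) % 4)   # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))

print(decode_claims(token)["iss"])   # https://accounts.google.com
```

The `sub` claim is the stable per-user identifier a zk circuit can commit to without revealing the email, which is roughly the linkage zkLogin exploits.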
While many projects have attempted to confirm the human identity of blockchain users, the boldest is WorldCoin, founded by OpenAI CEO Sam Altman. Although controversial because users must scan their irises to use the dystopian “Orb” machine, WorldCoin is moving towards an immutable identity system that cannot be easily counterfeited or overwhelmed by machine intelligence. This is because WorldCoin creates a cryptographic identifier based on each person’s unique eye “fingerprint,” which can be sampled to ensure uniqueness and authenticity. Once verified, the user receives a digital passport called a World ID on the Optimism blockchain, allowing the user to prove their humanity on the blockchain. Best of all, a person's unique signature is never revealed and cannot be traced because it is encrypted. World ID simply asserts that the blockchain address belongs to a human being. Projects like Checkmate already link World IDs to social media profiles to ensure users’ uniqueness and authenticity. In an AI-led future internet, explicit demonstrations of humanity in every online interaction may become commonplace. When artificial intelligence overcomes the limitations of CAPTCHAs, blockchain applications can prove identities cheaply, quickly, and concretely.
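The uniqueness property described above — "one human, one ID, biometric never revealed" — reduces to registering a one-way commitment to the biometric and rejecting duplicates. The sketch below shows that reduction only; the salt, the plain hash, and the in-memory set are all simplifying assumptions, whereas the real system uses zk proofs over a Merkle set of commitments.

```python
import hashlib

# On-chain registry holds only opaque commitments, never biometrics.
registered: set[str] = set()

def identity_commitment(iris_template: bytes, salt: bytes) -> str:
    """One-way commitment: the raw biometric cannot be recovered from it."""
    return hashlib.sha256(salt + iris_template).hexdigest()

def register(iris_template: bytes, salt: bytes) -> bool:
    """Accept a new human; reject a repeat scan of the same person."""
    cid = identity_commitment(iris_template, salt)
    if cid in registered:
        return False   # same person hashing to the same commitment
    registered.add(cid)
    return True

SALT = b"protocol-wide-salt"   # assumption: fixed so duplicates collide
print(register(b"alice-iris-scan", SALT))  # True: first registration
print(register(b"alice-iris-scan", SALT))  # False: duplicate blocked
print(register(b"bob-iris-scan", SALT))    # True: distinct human
```

Anyone inspecting the registry sees only hashes, yet a Sybil attacker cannot mint a second ID from the same iris — the two guarantees the surrounding text attributes to World ID.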
There is no doubt that we are in the early stages of the artificial intelligence revolution. But if the growth trajectory of machine intelligence matches the boldest predictions, AI must overcome challenges to flourish while its potential harms are contained. We believe cryptocurrency is the ideal trellis on which to properly "train" the fruitful but potentially treacherous vines of artificial intelligence. Blockchain's suite of AI solutions can increase the output of machine-intelligence creators by providing more responsive, flexible, and potentially cheaper decentralized computing. It also rewards builders who create better models while giving others a financial incentive to build useful businesses on top of those models. Equally important, model owners can demonstrate the validity of their models while showing that protected data sources were not used. For AI users, cryptographic applications can confirm that the models they run meet security standards.
Source: VanEck Research, project website, as of January 15, 2024.
Links to third-party websites are provided as a convenience, and the inclusion of such links does not imply that we endorse the content contained in or available from the linked website. No endorsement, approval, investigation, verification or monitoring of any content or information accessed from the linked websites. By clicking on a link to a non-VanEck web page, you acknowledge that the third party website you enter is subject to its terms and conditions. VanEck is not responsible for the content, legality or suitability of access to third party websites.
Disclosure: VanEck holds a position in Together through our strategic partnership with early-stage venture capital manager Cadenza, who kindly contributed to the section "Overcoming the Bottlenecks of Decentralized Computing".
Special thanks to:
Jason Morton, CEO of EZKL
Ala Shabana, co-founder of Bittensor
Arrash Yasavolian, founder of Bittensor's Taoshi subnet
Greg Osuri, Akash CEO and Founder
Liang Zeqiang, CEO of zkP2P
Key members of the Sui blockchain team – Sam Blackshear, Nihar Shah, Sina Nader, Alonso Gortari