Author: Ari Juels, Blockworks; Compiler: Deng Tong, Golden Finance
Industries around the world are asking "What can artificial intelligence do for us?"
But the blockchain industry, known for challenging norms, is also asking the opposite question: "What can blockchain do for artificial intelligence?"
While there are some compelling answers, a number of narratives have sprung up around this question. Three claims in particular are often misleading, and one may even be dangerous.
Narrative #1: Blockchain can combat misinformation caused by artificial intelligence
An expert panel at a recent Coinbase event concluded: “Blockchain can combat misinformation with digital signatures and timestamps, making it clear what is real and what has been manipulated.”
This is true only in a very narrow sense.
Blockchain can record the registration of digital media in a tamper-proof manner, so that subsequent modifications to a specific image can be detected. But that falls far short of combating misinformation.
Consider a photo of a flying saucer hovering over the Washington Monument. Suppose someone registers the image in block 20,000,000 of the Ethereum blockchain. This fact tells you exactly one thing: the image existed by the time block 20,000,000 was added to the chain. Additionally, whoever publishes the image to the blockchain (let's call her Alice) does so by digitally signing the transaction. Assuming Alice's signing key was not stolen, it is clear that Alice registered the photo on the blockchain.
However, none of this tells you how the image was created. It may be a photo Alice took with her own camera. Or Alice might have gotten the image from Bob, who retouched it. Or maybe Carol created it using generative AI tools. In short, the blockchain won't tell you whether aliens are visiting Washington, D.C., unless you already trusted Alice in the first place.
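To make this concrete, here is a minimal sketch, not from the original article, of what an on-chain media registration actually proves. The Ethereum transaction is reduced to a plain Python record, the image bytes are a placeholder, and signing uses the third-party `cryptography` package's Ed25519 API rather than Ethereum's actual account keys, purely for illustration.

```python
# Illustrative sketch only: what registering media on a blockchain proves.
# Assumes the third-party `cryptography` package; the chain is abstracted to a
# plain dict, and the "image" is placeholder bytes.
import hashlib
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

alice_key = Ed25519PrivateKey.generate()   # stands in for the key behind Alice's account
alice_pub = alice_key.public_key()

image_bytes = b"<raw bytes of the flying-saucer photo>"   # placeholder content
image_hash = hashlib.sha256(image_bytes).digest()

# The "registration" that ends up on chain: a hash, Alice's signature, a timestamp.
registration = {
    "image_hash": image_hash,
    "signature": alice_key.sign(image_hash),
    "timestamp": int(time.time()),         # stand-in for the block's timestamp
}

# What anyone can verify later:
assert hashlib.sha256(image_bytes).digest() == registration["image_hash"]  # same file, unmodified
alice_pub.verify(registration["signature"], registration["image_hash"])    # Alice signed it, back then
# Nothing above reveals whether the image came from a camera, an editor, or a generative model.
```

The checks confirm integrity and authorship of the registration, and nothing about provenance, which is exactly the gap this narrative glosses over.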
Some cameras can digitally sign photos to authenticate them (assuming their sensors can't be fooled, which is a big assumption), but that isn't blockchain technology.
Narrative #2: Blockchain can bring privacy to artificial intelligence
Model training is a data-intensive operation. The larger the training data set, the better the resulting model tends to be. For many applications, training on private user data is critical. For example, building a good machine learning model to diagnose a medical condition requires data from a real patient population. Handling such highly sensitive data securely is a challenge. Some are touting blockchain technology as the solution.
However, blockchain is designed for transparency—a property that is at odds with confidentiality.
Proponents point to the blockchain industry's advanced privacy-enhancing technologies, especially zero-knowledge proofs, as resolving this tension. However, zero-knowledge proofs do not solve the privacy problem in AI model training, because a zero-knowledge proof hides no secrets from the person who constructs it. Zero-knowledge proofs are helpful if I want to hide my transaction data from you. But they don't allow me to compute on your data without seeing it.
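As an illustration of that point, here is a toy Schnorr-style proof of knowledge of a discrete log, written in Python for this edit rather than taken from the article, with deliberately tiny, insecure parameters. The verifier learns nothing about the secret, but notice that the prover must hold the secret in the clear to construct the proof; that is why zero-knowledge proofs alone do not let a model trainer compute on data it never sees.

```python
# Toy Schnorr-style zero-knowledge proof of knowledge of x such that y = g^x mod p.
# Parameters are deliberately tiny and insecure; this is a teaching sketch only.
import hashlib
import secrets

p, g, q = 23, 2, 11   # toy safe-prime group: g = 2 has order q = 11 modulo p = 23

def challenge(y, r):
    # Fiat-Shamir-style challenge derived by hashing the public values
    return int.from_bytes(hashlib.sha256(f"{g},{y},{r}".encode()).digest(), "big") % q

def prove(x):
    """Prover: must hold the secret x in the clear to build the proof."""
    y = pow(g, x, p)                 # public value tied to the secret
    k = secrets.randbelow(q)         # random nonce
    r = pow(g, k, p)                 # commitment
    c = challenge(y, r)
    s = (k + c * x) % q              # response
    return y, (r, s)

def verify(y, proof):
    """Verifier: convinced the prover knows x, but learns nothing about x itself."""
    r, s = proof
    return pow(g, s, p) == (r * pow(y, challenge(y, r), p)) % p

secret_x = secrets.randbelow(q)      # think of this as the prover's private data
y, proof = prove(secret_x)           # the prover sees secret_x; that cannot be avoided
assert verify(y, proof)
```

The asymmetry matters: the proof protects the data from the verifier, not from whoever runs the computation, and in model training the party running the computation is precisely the one you want to keep the data from.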
There are other more relevant cryptography and security tools with esoteric names, including fully homomorphic encryption (FHE), secure multi-party computation (MPC), and secure enclaves. These could in principle support privacy-preserving AI. However, each comes with important caveats. Claims that they are blockchain-specific technologies are somewhat exaggerated.
Narrative #3: Blockchain can fund AI bots – and that's a good thing
Circle CEO Jeremy Allaire noted that bots are already executing transactions using cryptocurrencies, tweeting that "artificial intelligence and blockchain are a match made in heaven." This is true in the sense that cryptocurrencies are a good match for the functionality of artificial intelligence agents. But it's also concerning.
Many people worry that artificial intelligence agents will escape human control. Typical nightmare scenarios involve self-driving cars killing people or AI-powered autonomous weapons running amok. But there is another escape route: the financial system. Money equals power. Give that power to an AI agent and it can do real harm.
This question was the subject of a research paper I co-authored in June 2015. My colleagues and I studied the possibility that smart contracts (programs that autonomously mediate transactions on Ethereum) could be used to facilitate crime. Using the techniques in that paper and a blockchain oracle system with access to LLMs (large language models) such as ChatGPT, bad actors could in principle launch "rogue" smart contracts that automatically pay bounties to people who commit serious crimes.
Fortunately, such rogue smart contracts are not yet possible on today's blockchains, but the blockchain industry and cryptocurrency enthusiasts need to take AI safety seriously as a looming issue. They need to consider mitigations, such as community-driven interventions or guardrails in oracles, to help strengthen AI safety.
The integration of blockchain and artificial intelligence has clear promise. Artificial intelligence can add unprecedented flexibility to blockchain systems by giving them natural language interfaces. Blockchain can provide a new financial and transparency framework for model training and data sourcing, putting the power of artificial intelligence in the hands of communities, not just enterprises.
It's still early days, though. As tantalizing as the mix of buzzwords and technologies that AI plus blockchain represents may be, we need to think clearly about what each can actually do for the other.