Ethereum co-founder Vitalik Buterin has highlighted the potential of artificial intelligence in enhancing decentralized governance models and aiding users in making more informed decisions. According to Cointelegraph, Buterin expressed his views in a post on X, emphasizing the challenges posed by the "limits to human attention" in democratic and decentralized governance systems, such as DAOs. He noted that the complexity and volume of decisions often require expertise and time that many participants lack.
Buterin pointed out that the common solution of delegation can be disempowering, as it often concentrates decision-making in a small group of delegates, leaving token holders with little influence once their votes are handed over. Participation rates in DAOs are typically between 15% and 25%, which can lead to centralization of power and ineffective decision-making. In worst-case scenarios, governance attacks may occur, where a malicious actor accumulates enough tokens to pass harmful proposals before other members notice.
To address these issues, Buterin suggests using personal assistant large language models (LLMs) to tackle the "attention problem" by equipping users with the information they need to vote. These personal agents could cast votes on behalf of users based on preferences inferred from their personal writing, conversation history, and direct statements. If the agent is uncertain about a user's stance on an important issue, it should seek direct input from the user, providing all relevant context.
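The agent behavior Buterin describes, inferring a stance from stored preferences, voting when confident, and escalating to the user with full context when uncertain, could be sketched roughly as follows. All class names, fields, and the confidence threshold are illustrative assumptions; a real agent would use an LLM over the user's writings rather than a keyword lookup.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str
    summary: str

@dataclass
class VotingAgent:
    """Hypothetical personal voting agent (illustrative sketch only).

    In Buterin's description, an LLM would infer the user's stance from
    their writing and chat history; here a simple keyword match against
    stated preferences stands in for that inference step.
    """
    # topic -> (stance, confidence in [0, 1])
    preferences: dict
    threshold: float = 0.8
    pending_questions: list = field(default_factory=list)

    def infer_stance(self, proposal: Proposal):
        """Stand-in for LLM inference over the user's personal data."""
        for topic, (stance, confidence) in self.preferences.items():
            if topic in proposal.summary.lower():
                return stance, confidence
        return "abstain", 0.0

    def vote(self, proposal: Proposal) -> str:
        stance, confidence = self.infer_stance(proposal)
        if confidence >= self.threshold:
            return stance
        # Uncertain: escalate to the user with relevant context
        # instead of guessing, as Buterin suggests.
        self.pending_questions.append(
            f"How should I vote on '{proposal.title}'? "
            f"Context: {proposal.summary}"
        )
        return "deferred"
```

The key design point is the explicit uncertainty threshold: the agent only acts autonomously on high-confidence inferences and otherwise hands the decision, with context attached, back to the human.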
Lane Rettig, a researcher at the Near Foundation, shared a similar vision with Cointelegraph last year, discussing AI-powered digital twins that could vote on behalf of DAO members to improve voter participation. Buterin also addressed the challenge of handling private or sensitive information in decentralized governance, which arises during negotiations, internal disputes, or funding decisions. He noted that organizations typically handle such tasks by entrusting a few individuals with significant power.
As an alternative, Buterin proposed that users could submit their "personal LLM into a black box," allowing the LLM to access private information, make judgments, and output only those judgments without revealing the underlying data. Because participants would be feeding in more personal information and potentially larger inputs, he stressed that privacy must be safeguarded throughout these processes.
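The "black box" idea, private inputs go in, only a judgment comes out, can be sketched in miniature as below. This is purely illustrative: Python encapsulation stands in for the real isolation guarantees (e.g., a trusted execution environment or cryptographic techniques) that an actual deployment would require, and the `rule` callable stands in for the LLM's reasoning.

```python
class BlackBoxJudge:
    """Illustrative sketch of a 'personal LLM in a black box'.

    Private context is sealed inside the object; callers can only
    request judgments, never read the underlying data. Name mangling
    here is a stand-in for real isolation, not a security mechanism.
    """

    # The only values allowed to leave the box, so the private
    # context cannot leak through the return channel.
    VERDICTS = {"approve", "reject", "abstain"}

    def __init__(self, private_context: str, rule):
        self.__context = private_context  # sealed private input
        self.__rule = rule                # stand-in for LLM reasoning

    def judge(self, question: str) -> str:
        verdict = self.__rule(self.__context, question)
        if verdict not in self.VERDICTS:
            raise ValueError("judgment outside allowed vocabulary")
        return verdict
```

Restricting the output to a fixed verdict vocabulary captures the core of the proposal: the judgment depends on the private data, but the data itself never crosses the boundary.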