Author: Mitch Liu, CoinDesk; Translator: Baishui, Golden Finance
Artificial intelligence’s demand for resources is endless. It consumes vast amounts of electricity and data, having consumed an estimated 460 terawatt-hours in 2022, a figure projected to rise sharply to between 620 and 1,050 terawatt-hours by 2026. Its most urgent need, however, is compute: the processing power to train complex models, analyze massive datasets, and run inference at scale.
This thirst for compute has reshaped many professional fields. In 2024, the global artificial intelligence market exceeded $184 billion, and by 2030 it may surpass $800 billion, a value comparable to Poland’s current GDP. The industry’s best-known product, ChatGPT, reached 100 million active users just two months after its launch in November 2022.
However, as AI products such as ChatGPT proliferate and mature, our picture of how AI works is quickly becoming outdated. The popular image of AI (huge data centers, huge power bills, control concentrated in tech giants) no longer tells the full story. That image has led many to believe that meaningful AI development is the exclusive domain of well-funded corporations and big tech companies.
A new vision for AI is emerging that looks to the untapped potential in our pockets. This approach aims to democratize AI by harnessing the collective power of the world’s billions of smartphones. Our mobile devices sit idle for hours every day, their processing power lying dormant. By tapping into this vast reservoir of unused computing power, we can reshape the AI landscape. Rather than relying solely on centralized corporate infrastructure, AI development can be powered by a global network of everyday devices.
Untapped Potential
Smartphones and tablets represent a vast, untapped reservoir of global computing power. With shipments expected to reach 1.21 billion units in 2024 alone, the idle compute these devices collectively provide is difficult even to estimate.
Mobile initiatives like Theta EdgeCloud aim to harness this distributed network of consumer-grade GPUs for AI computing. The shift from centralized computing to edge computing is a technological sea change with the power to transform how people interact with, and power, AI models.
By processing data locally on mobile devices, the industry promises lower latency, enhanced privacy, and reduced bandwidth usage. This approach is especially important for real-time applications like self-driving cars, augmented reality, and personalized AI assistants. The edge is where new AI use cases will take off, especially those for personal use. Powering these applications will become not only more affordable but also more responsive and customizable, a win-win for consumers and researchers alike.
Blockchains are well suited to this distributed AI ecosystem. Their decentralized nature aligns with the goal of harnessing the idle computing power of millions of devices around the world. By leveraging blockchain technology, we can create a secure, transparent, and incentivized framework for sharing computing resources.
The key innovation here is the use of off-chain verification. While on-chain verification can create bottlenecks in a network of millions of parallel devices, the off-chain approach allows these devices to work together seamlessly, unaffected by individual connectivity issues. This approach can create a trustless system where device owners can contribute to AI development without compromising their security or privacy.
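To make the off-chain idea concrete, here is a minimal, hypothetical sketch of one common pattern: devices run jobs off-chain and submit only a digest of their result, and verifiers re-run a random sample of jobs rather than every one. The function names, sampling rate, and toy job are all illustrative assumptions, not a description of Theta EdgeCloud’s actual protocol.

```python
import hashlib
import random

def digest(result: bytes) -> str:
    """Compact, tamper-evident commitment to a computation result."""
    return hashlib.sha256(result).hexdigest()

def spot_check(submissions, recompute, sample_rate=0.2, seed=42):
    """Verify a random sample of off-chain results instead of all of them.

    submissions: device id -> (job_input, claimed_digest)
    recompute:   re-runs the job locally for each sampled entry
    Returns a dict of device id -> whether its claimed digest matched.
    """
    rng = random.Random(seed)
    sample_size = max(1, int(len(submissions) * sample_rate))
    sampled = rng.sample(list(submissions), sample_size)
    return {
        dev: digest(recompute(submissions[dev][0])) == submissions[dev][1]
        for dev in sampled
    }

# Toy job: square a number and serialize the answer as bytes.
job = lambda x: str(x * x).encode()
subs = {f"dev{i}": (i, digest(job(i))) for i in range(10)}
subs["dev3"] = (3, "bogus")  # a cheating device submits a fake digest

checks = spot_check(subs, job)
print(checks)
```

Because only digests and spot checks touch the chain, millions of devices can compute in parallel without the on-chain bottleneck the paragraph above describes; the sampling rate trades verification cost against the probability of catching cheaters.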
The model draws on the concept of “federated learning,” a distributed machine learning approach that scales to large amounts of data on mobile devices while protecting user privacy. Blockchain provides both the infrastructure for this network and a mechanism to reward participants, incentivizing widespread participation.
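The core of federated learning is that devices train on their own private data and share only model updates, which a coordinator averages into a new global model. The sketch below shows that aggregation step (federated averaging) using a simple logistic-regression update; the model, learning rate, and round count are illustrative assumptions.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One local gradient step on a device's private data.
    Uses a simple logistic-regression gradient; real systems train richer models."""
    preds = 1 / (1 + np.exp(-data @ weights))
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(device_weights, device_sizes):
    """Server-side aggregation, weighted by each device's dataset size.
    Raw data never leaves the devices; only model weights are shared."""
    total = sum(device_sizes)
    return sum(w * (n / total) for w, n in zip(device_weights, device_sizes))

# Simulate three devices, each holding private data the server never sees.
rng = np.random.default_rng(0)
global_weights = np.zeros(4)
devices = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]

for _ in range(5):  # five communication rounds
    updates = [local_update(global_weights, X, y) for X, y in devices]
    global_weights = federated_average(updates, [len(y) for _, y in devices])
```

Weighting by dataset size keeps devices with more data from being drowned out by devices with little, which is why the averaging step is size-weighted rather than uniform.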
The synergy between blockchain and edge AI is fostering a new ecosystem that is more resilient, efficient, and inclusive than traditional centralized models. It democratizes AI development, allowing individuals to participate in and benefit from the AI revolution directly from their mobile devices.
Overcoming Technical Challenges
AI training and inference can be performed on a variety of GPU types, including the consumer-grade GPUs in mobile devices. The hardware powering our mobile devices has been steadily improving since smartphones hit the market, with no signs of slowing down. Industry-leading mobile chips like Apple’s A17 Pro and Qualcomm’s Adreno 750 GPU (found in high-end Android flagships such as the Samsung Galaxy line) are redefining the AI tasks that can be accomplished on mobile devices.
Now, new chips designed specifically for consumer AI computing, called neural processing units (NPUs), enable on-device AI use cases while managing the thermal and battery constraints of mobile hardware. Combined with smart system design and architecture, jobs can be routed to whichever processor is best suited to them, and the resulting network effects can be powerful.
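A scheduler along these lines might route each job by weighing what hardware a device has against its current battery state. The sketch below is a minimal illustration under assumed heuristics (NPUs favored for inference, raw GPU throughput for training, a battery floor for eligibility); the device names, fields, and thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    has_npu: bool
    gpu_tflops: float
    battery_pct: int

def route_job(job_kind, devices, min_battery=30):
    """Pick the device best suited to a job, respecting a battery floor.
    A production scheduler would also weigh thermals, connectivity,
    and user consent."""
    eligible = [d for d in devices if d.battery_pct >= min_battery]
    if job_kind == "inference":
        # NPUs excel at low-power inference; fall back to GPUs if none.
        npu_pool = [d for d in eligible if d.has_npu]
        pool = npu_pool or eligible
    else:
        # Training favors raw GPU throughput.
        pool = eligible
    return max(pool, key=lambda d: d.gpu_tflops, default=None)

fleet = [
    Device("phone-a", has_npu=True, gpu_tflops=2.1, battery_pct=80),
    Device("tablet-b", has_npu=False, gpu_tflops=4.6, battery_pct=55),
    Device("phone-c", has_npu=True, gpu_tflops=1.0, battery_pct=10),
]
print(route_job("inference", fleet).name)  # phone-a (NPU, enough battery)
print(route_job("training", fleet).name)   # tablet-b (highest GPU throughput)
```

Note how the low-battery phone is excluded entirely: routing around individual device constraints is what lets a fleet of phones behave like one resilient compute pool.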
While the potential of edge AI is huge, it still faces a host of challenges. Optimizing AI algorithms for a variety of mobile hardware, ensuring consistent performance under varying network conditions, addressing latency issues, and maintaining security are all key hurdles. However, ongoing research in AI and mobile technology is steadily addressing these challenges, paving the way for this vision to become a reality.
Enterprise to Community
One of the biggest complaints about AI development, and the most justified one, is that it consumes a staggering amount of power. Large data centers require large tracts of land for physical infrastructure, as well as enormous amounts of electricity to stay online. The mobile model can mitigate many of these environmental impacts by using spare GPUs in existing devices rather than relying on GPUs in centralized data centers, improving efficiency and reducing carbon emissions. Its potential environmental benefit should not be underestimated.
The shift of AI to edge computing will also fundamentally change who can participate in supporting AI networks and who can profit from them. AI will no longer be the closed preserve of enterprises with data centers. Instead, the doors will open to individual developers, small businesses, and even hobbyists.
Empowering a greater number of users and supporters will also enable faster, more open development, helping to curb the much-discussed and much-feared stagnation of the industry. Broader accessibility will also lead to more diverse applications, solving niche problems and serving communities that might otherwise be overlooked.
The economic impact of this shift will be profound. By allowing individuals and small and medium-sized organizations to monetize the idle computing power of their devices, new revenue streams will emerge. It also opens up new markets for consumer-grade AI hardware and edge-optimized software.
The future of AI innovation lies not in building bigger data centers, but in harnessing the power that already exists in our pockets and homes. By shifting the focus to edge computing, a more inclusive, efficient, and innovative AI ecosystem can emerge. This decentralized approach not only democratizes AI, but also aligns with global sustainable development goals, ensuring that the benefits of AI are accessible to all, not just the privileged few.