According to Cointelegraph, Sandeep Nailwal, co-founder of Polygon and the open-source AI company Sentient, has expressed skepticism about the potential for artificial intelligence (AI) to develop consciousness. In a recent interview, Nailwal stated that AI lacks the intention inherent in human beings and other biological entities, making it unlikely for AI to achieve a significant level of consciousness. He dismissed the notion of a doomsday scenario where AI becomes self-aware and dominates humanity.
Nailwal rejected the idea that consciousness could emerge randomly from complex chemical processes. While acknowledging that such processes can produce complex cells, he argued they are insufficient to generate consciousness. Instead, Nailwal voiced concern about the potential misuse of AI by centralized institutions for surveillance, which could threaten individual freedoms. He emphasized the need for AI to be transparent and democratized, advocating for a global AI controlled by individuals to create a borderless world.
The executive highlighted the importance of each person having a custom AI that operates on their behalf, protecting them from AIs deployed by powerful entities. This perspective aligns with Sentient's open-model approach to AI, in contrast to the opaque methods of centralized platforms. Nailwal's views are echoed by David Holtzman, a former military intelligence professional and chief strategy officer of the Naoris decentralized security protocol. Holtzman warned of the significant privacy risks posed by AI in the near term, suggesting that decentralization could serve as a defense against AI threats.
In October 2024, AI company Anthropic released a paper discussing potential scenarios in which AI could sabotage humanity, along with proposed solutions to mitigate those risks. The paper concluded that while AI is not an immediate threat, it could become dangerous as models advance. Both Nailwal and Holtzman argue that decentralization is crucial to preventing AI from being used for surveillance by centralized institutions, including the state. Their insights underscore the ongoing debate about the future role of AI and the importance of preserving individual freedoms in an increasingly digital world.