Gonka, a decentralized AI computing power network, recently explained in a community AMA the phased adjustments to its Proof-of-Concept (PoC) mechanism and model operation. The adjustments primarily include: running PoC and inference on the same large model; changing PoC activation from delayed switching to near real-time triggering; and optimizing how computing power weights are calculated so they better reflect the actual computational costs of different models and hardware.

Co-founder David stated that these adjustments are not aimed at short-term output or individual participants, but are a necessary evolution of the consensus and verification structure as the network's computing power expands rapidly. The goal is to improve the network's stability and security under high load and to lay the foundation for supporting larger-scale AI workloads in the future.

Regarding the community's observation that smaller models currently produce higher token output, the team pointed out that the same number of tokens corresponds to very different actual compute consumption across models of different sizes. As the network evolves toward higher computing density and more complex tasks, Gonka is gradually steering computing power weights to align with actual computational costs, to avoid long-term imbalances in the computing power structure that could limit the network's overall scalability.

Under the latest PoC mechanism, the network has reduced PoC activation time to under 5 seconds, minimizing the computational waste caused by model switching and waiting, and allowing a higher proportion of GPU resources to be used for effective AI computation. At the same time, unified model operation reduces the system overhead of nodes switching between consensus and inference, improving overall computational efficiency.

The team also emphasized that single-card and small-to-medium GPU operators can continuously earn rewards and participate in governance through mining pool collaboration, flexible epoch-based participation, and inference tasks. Gonka's long-term goal is to let different tiers of computing power coexist within the same network through ongoing mechanism evolution.

Gonka states that all key rule adjustments are driven by on-chain governance and community voting. Going forward, the network will gradually support more model types and AI task formats, providing GPUs of different sizes worldwide with a continuous and transparent space for participation, and promoting the long-term healthy development of decentralized AI computing infrastructure.
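The idea of aligning computing power weights with actual computational cost, rather than raw token counts, can be illustrated with a minimal sketch. This is not Gonka's actual formula: the model names, FLOPs-per-token estimates, and the `compute_weight` function below are hypothetical assumptions used only to show how token output might be normalized by per-token compute cost so that many cheap tokens from a small model do not outweigh fewer, more expensive tokens from a large one.

```python
# Hypothetical illustration only (not Gonka's actual weighting rule):
# normalize each node's token output by its model's estimated per-token compute cost.

from dataclasses import dataclass


@dataclass
class ModelProfile:
    name: str
    flops_per_token: float  # assumed estimate of FLOPs to generate one token


@dataclass
class EpochContribution:
    model: ModelProfile
    tokens_produced: int


def compute_weight(contribution: EpochContribution, reference_flops_per_token: float) -> float:
    """Scale token output by the model's cost relative to a reference model,
    so weight tracks estimated compute performed rather than token count alone."""
    cost_ratio = contribution.model.flops_per_token / reference_flops_per_token
    return contribution.tokens_produced * cost_ratio


if __name__ == "__main__":
    # Rough illustrative figures (~2 x parameter count FLOPs per generated token).
    small = ModelProfile("small-7B", flops_per_token=1.4e10)
    large = ModelProfile("large-70B", flops_per_token=1.4e11)
    reference = large.flops_per_token

    a = EpochContribution(small, tokens_produced=1_000_000)
    b = EpochContribution(large, tokens_produced=150_000)

    print(f"small-model weight: {compute_weight(a, reference):,.0f}")
    print(f"large-model weight: {compute_weight(b, reference):,.0f}")
```

In this toy example the small model's million tokens yield a lower weight than the large model's 150,000 tokens once per-token cost is factored in, which mirrors the stated intent of the weight adjustment even though the actual on-chain calculation may differ.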