Gonka, a decentralized AI computing power network, has completed its v0.2.9 mainnet upgrade. The upgrade was approved through on-chain governance voting and executed at block height 2,451,000. The network has fully switched to PoC v2 as its weight allocation mechanism, and the original PoC logic is being phased out. This upgrade marks a significant step forward for Gonka in both computing power verification and network governance.

After the upgrade, Confirmation PoC becomes the authoritative source of network results, further strengthening the verifiability and determinism of computing power contributions. At the same time, the network has entered a single-model operation phase: unifying the model and the verification standard reduces noise from heterogeneous hardware and provides a more stable infrastructure environment for decentralized AI inference and training. Currently, only ML Nodes running Qwen/Qwen3-235B-A22B-Instruct-2507-FP8 on a PoC v2-compatible image can participate in weight calculation. The transition from Epoch 158 to 159 will be the first complete operational phase after PoC v2 activation.

According to real-time data from GonkaScan, as of February 2, 2026, Gonka's total network computing power is close to 14,000 H100-equivalent GPUs, a scale comparable to a national-level AI computing cluster. Against the roughly 6,000 H100-equivalent GPUs at the time Bitfury announced its $50 million investment in early December 2025, this corresponds to a compounded monthly growth rate of approximately 52%, among the fastest of comparable decentralized computing networks. In terms of hardware composition, high-end GPUs such as the NVIDIA H100, H200, and A100 account for over 80% of total network computing power, underscoring Gonka's strength in aggregating and scheduling high-performance computing resources. Network nodes currently span roughly 20 countries and regions across Europe, Asia, the Middle East, and North America, laying the foundation for a global AI computing infrastructure resilient to single-point risks.
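As a rough sanity check of the stated growth figure, the following sketch computes the compounded monthly rate implied by the two reported data points. It assumes a two-month window between the Bitfury announcement (early December 2025) and February 2, 2026, and uses the rounded figures of roughly 6,000 and 14,000 H100-equivalent GPUs; the exact window and counts are approximations, not precise values from the network.

```python
# Back-of-the-envelope check of the compounded monthly growth rate.
# Assumptions (approximate, based on the rounded figures in the text):
# a ~2-month window and ~6,000 -> ~14,000 H100-equivalent GPUs.
start_gpus = 6_000    # H100-equivalent GPUs, early December 2025
end_gpus = 14_000     # H100-equivalent GPUs, February 2, 2026
months = 2

monthly_rate = (end_gpus / start_gpus) ** (1 / months) - 1
print(f"Compounded monthly growth: {monthly_rate:.1%}")  # prints ~52.8%
```

Under these assumptions the implied rate is about 52.8% per month, consistent with the approximately 52% figure cited above.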