In his article, Vitalik Buterin laid out a layered view of blockchain scalability, ranking the difficulty of scaling, from lowest to highest, as computation, data, and state.

Computation, he argued, is the easiest to scale: it can be parallelized, assisted by "hints" supplied by block builders, or largely replaced by zero-knowledge proofs that stand in for heavy computation.

Data is moderately difficult. If the system requires data availability guarantees, that requirement cannot be avoided, but it can be mitigated by splitting the data, by erasure coding (as in PeerDAS), and by "graceful degradation": even when nodes can only handle a small amount of data, the chain can still produce blocks of a correspondingly smaller size.

State is the hardest part to scale. Vitalik pointed out that verifying even a single transaction requires the complete state; even if the state is abstracted as a tree and only the root is stored, updating that root still depends on the full state. Methods for splitting the state do exist, but they usually require significant architectural changes and are not universal solutions.

From this, Vitalik concluded that if data can replace state without introducing new centralization, it should be given priority; and if computation can replace data without introducing new centralization, that too should be taken seriously.
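The erasure-coding idea behind schemes like PeerDAS can be illustrated with a toy Reed-Solomon-style code: k data chunks are treated as evaluations of a degree-(k-1) polynomial and extended to n coded chunks, after which any k of the n chunks suffice to reconstruct the data, so each node only needs to hold a fraction of it. A minimal sketch over a small prime field (the field size, chunk values, and helper names here are illustrative; production systems use large fields and optimized libraries):

```python
# Toy Reed-Solomon-style erasure coding over a prime field.
# k data chunks live at x = 1..k; parity chunks extend them to x = k+1..n.
P = 2**31 - 1  # small Mersenne prime; real systems use much larger fields

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at `x`, mod P."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [7, 11, 13]                       # k = 3 data chunks
pts = list(zip(range(1, 4), data))
coded = data + [lagrange_eval(pts, x) for x in (4, 5)]  # n = 5 chunks

# Lose any two chunks; the surviving three still determine the polynomial,
# so the original data can be re-interpolated in full.
survivors = [(1, coded[0]), (4, coded[3]), (5, coded[4])]
recovered = [lagrange_eval(survivors, x) for x in range(1, 4)]
assert recovered == data
```

The systematic layout (data chunks are themselves evaluations) means nodes holding original chunks need no decoding at all; interpolation is only needed when chunks go missing.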
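Vitalik's point about state can be seen in a toy Merkle tree: the root commits to the entire state, but updating even a single leaf requires the sibling hashes along that leaf's path, which some party must compute from the state itself. A minimal sketch (the 4-leaf tree and all names are illustrative, not from the article):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def root(leaves):
    """Compute the Merkle root of a power-of-two list of leaf hashes."""
    level = leaves
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

leaves = [h(bytes([i])) for i in range(4)]  # toy 4-account "state"
old_root = root(leaves)

# Update leaf 2. Knowing old_root alone is not enough: recomputing the
# new root needs the sibling hashes on leaf 2's path (leaf 3 and the
# hash of leaves 0..1), i.e. information drawn from the full state.
new_leaf = h(b"updated")
sibling = leaves[3]
other_subtree = h(leaves[0] + leaves[1])
new_root = h(other_subtree + h(new_leaf + sibling))

leaves[2] = new_leaf
assert new_root == root(leaves)  # matches recomputation from full state
assert new_root != old_root
```

The sibling hashes here are exactly a Merkle proof; stateless designs shift the burden of producing such proofs to whoever still holds the state, which is why storing only the root does not by itself make state cheap to scale.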