How should we understand @VitalikButerin's new article on Ethereum scaling? Some people find it outrageous that Vitalik name-checked Blob inscriptions (Blobscriptions).
So how do Blob data packets actually work? Why won't Blob space be fully utilized after the Cancun upgrade? And how does DAS (data availability sampling) pave the way for sharding?
In my view, post-Cancun performance is perfectly usable; what worries Vitalik is the pace of Rollup development. Why? Let me lay out my understanding:
1. As explained many times before, a Blob is a temporary data package that is decoupled from EVM calldata and can be accessed directly by the consensus layer. The direct benefit: the EVM does not need to touch Blob data when executing transactions, so Blobs do not incur the higher computing costs of the execution layer.
Balancing a range of factors, one Blob is currently sized at 128 KB. Under EIP-4844, a mainnet block targets 3 Blobs and tops out at 6, and a Batch transaction can include multiple Blobs within that per-block cap. The eventual goal, under full Danksharding, is for a mainnet block to carry about 16 MB, roughly 128 Blob data packets.
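As a quick sanity check on those numbers, here is a back-of-the-envelope capacity sketch (my own illustration, not from Vitalik's article), using the EIP-4844 parameters of 128 KB per Blob, a 3-Blob target and 6-Blob max per block, a 12-second slot, and the ~128-Blob Danksharding goal:

```python
# Back-of-the-envelope Blob capacity math (illustrative only).
BLOB_SIZE = 128 * 1024          # bytes per blob (2^17)
SLOT_TIME = 12                  # seconds per mainnet block

for label, blobs_per_block in [("4844 target", 3), ("4844 max", 6), ("danksharding goal", 128)]:
    bytes_per_block = blobs_per_block * BLOB_SIZE
    throughput_kb_s = bytes_per_block / 1024 / SLOT_TIME
    print(f"{label:>18}: {bytes_per_block / 1024 / 1024:.3f} MB/block, "
          f"~{throughput_kb_s:.0f} KB/s of DA bandwidth")
```

At the 3-Blob target that is only ~32 KB/s of DA bandwidth, versus ~1,365 KB/s at the Danksharding goal, which is the headroom the rest of this piece keeps returning to.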
A Rollup team must therefore balance factors such as the number of Blobs used, its TPS capacity, and the storage cost borne by mainnet nodes, aiming to use Blob space at the best cost-performance ratio.
Take @Optimism as an example. It currently handles about 500,000 transactions a day and batches to mainnet roughly every 2 minutes, carrying 1 Blob per batch. Why only one? Because its TPS is only so high; it cannot fill more. It could carry two, but each Blob would then be far from full while storage costs rise, which is unnecessary.
What happens when off-chain transaction volume on the Rollup grows, say to 50 million transactions a day? There are three knobs (see the sketch below): 1. compress each Batch so that as many transactions as possible fit into the Blob space; 2. increase the number of Blobs per batch; 3. batch to mainnet more frequently.
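A rough sizing sketch of those knobs (my own numbers, purely illustrative; the ~150 compressed bytes per transaction is an assumption, not a figure from the article):

```python
# Given daily transaction count, assumed bytes per compressed rollup tx,
# and a batch interval, how many 128 KB blobs does each batch need?
BLOB_SIZE = 128 * 1024   # bytes

def blobs_per_batch(tx_per_day: int, bytes_per_tx: int, batch_interval_s: int) -> int:
    batches_per_day = 86_400 // batch_interval_s
    bytes_per_batch = tx_per_day / batches_per_day * bytes_per_tx
    # Round up: a partially filled blob still occupies a whole blob.
    return -(-int(bytes_per_batch) // BLOB_SIZE)

# Today's Optimism-like load: ~500k tx/day, one batch every 2 minutes.
print(blobs_per_batch(500_000, 150, 120))      # -> 1 blob per batch
# The hypothetical 50M tx/day future at the same cadence.
print(blobs_per_batch(50_000_000, 150, 120))   # -> ~80 blobs per batch
# Same load, batching every 12 seconds instead (knob 3).
print(blobs_per_batch(50_000_000, 150, 12))    # -> ~8 blobs per batch
```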
2. The amount of data a mainnet block carries is constrained by the gas limit and storage cost; 128 Blobs per block is the ideal end state, and we are nowhere near it today. Optimism uses just 1 Blob every 2 minutes, which leaves layer2 projects enormous headroom to improve TPS, expand their user base, and let the ecosystem prosper.
So for quite a while after the Cancun upgrade, Rollups will not be competing fiercely over the number and frequency of Blobs they use, nor bidding against one another for Blob space.
The reason Vitalik mentioned Blobscription inscriptions is that this type of inscription can temporarily spike transaction volume, driving up demand for Blob usage and, with it, the price of Blob space. He used inscriptions merely as an example to deepen understanding of how Blobs work; what he actually wanted to say has nothing to do with inscriptions.
In theory, if some layer2 project were to batch to mainnet at high frequency and high volume, filling every block's Blob capacity, then as long as it was willing to bear the high cost of those fabricated transaction batches, it could crowd out other layer2s' normal use of Blobs. But under current conditions this is like buying hash power to mount a 51% attack on BTC: theoretically feasible, yet with no profit motive in practice.
Therefore, the gas cost of using layer2 will stay in a "low" range for a long time, giving the layer2 market a long golden window to "build up its troops and provisions" and develop.
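That "low for a long time" expectation follows directly from how EIP-4844 prices Blob space. Below is a minimal sketch of the spec's blob base fee rule (the constants and the fake_exponential helper are as defined in EIP-4844): the fee only climbs when blocks persistently consume more than the 3-Blob target, and with today's underutilization it sits at the floor.

```python
# EIP-4844 blob base fee: an exponential function of "excess blob gas",
# which accumulates only when blocks use more than the 3-blob target.
MIN_BASE_FEE_PER_BLOB_GAS = 1            # wei (the floor)
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477
GAS_PER_BLOB = 2**17                     # 131072
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer-only approximation of factor * e^(numerator/denominator),
    # as defined in EIP-4844.
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def next_excess(parent_excess: int, blobs_used: int) -> int:
    return max(parent_excess + blobs_used * GAS_PER_BLOB - TARGET_BLOB_GAS_PER_BLOCK, 0)

def blob_base_fee(excess: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess, BLOB_BASE_FEE_UPDATE_FRACTION)

# Sustained max demand (6 blobs every block, e.g. an inscription wave):
excess = 0
for _ in range(100):
    excess = next_excess(excess, 6)
print(blob_base_fee(excess))   # fee climbs exponentially under sustained overload
# ...versus today's one-blob-per-batch reality, where excess stays 0:
print(blob_base_fee(0))        # -> 1 wei floor
```

So an inscription wave raises the fee only while it lasts; once demand falls back below the target, the fee decays toward the floor just as fast.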
3. So what happens if one day the layer2 market prospers to the point where daily Batch volume to mainnet becomes enormous and the current Blob packets are no longer enough? Ethereum has already lined up a solution: data availability sampling (DAS).
The simple intuition: data that previously had to be stored by a single node can instead be spread across multiple nodes at once. For example, each node stores 1/8 of all Blob data, and 8 nodes together form a group providing full DA capability, which effectively multiplies the current Blob storage capacity by 8. This is exactly what the future sharding stage will do.
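Here is a toy sketch of that 1/8 intuition (my own illustration; real DAS additionally erasure-codes the data and lets light nodes verify availability by randomly sampling KZG-committed chunks, none of which appears below): a Blob is cut into 8 shares, each node keeps one, and the 8-node group can still serve the whole.

```python
# Toy data-availability sharing, for intuition only. Real DAS
# erasure-codes the blob (so any sufficient subset of chunks can
# reconstruct it) and verifies via random sampling; this sketch only
# shows the "each node stores 1/8" storage split.
import os

GROUP_SIZE = 8

def split(blob: bytes) -> list[bytes]:
    n = len(blob) // GROUP_SIZE
    return [blob[i * n:(i + 1) * n] for i in range(GROUP_SIZE)]

def reconstruct(shares: list[bytes]) -> bytes:
    return b"".join(shares)

blob = os.urandom(128 * 1024)              # one 128 KB blob
shares = split(blob)                       # node i stores shares[i]: 16 KB each
assert all(len(s) == 16 * 1024 for s in shares)
assert reconstruct(shares) == blob         # the 8-node group still serves full DA
```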
But Vitalik has now reiterated this many times, pointedly, almost as a warning to the broad cohort of layer2 teams: stop complaining that Ethereum's DA capacity is expensive. At your current TPS, you are nowhere near pushing Blob packets to their limit. Hurry up and concentrate firepower on growing the ecosystem, expanding users and transaction volume, and stop daydreaming about abandoning Ethereum's DA for one-click chain launches.
Vitalik later added that among today's major rollups, only Arbitrum has reached Stage 1. Although @DeGateDex, Fuel, and others have reached Stage 2, they are not yet widely known. Stage 2 is the ultimate goal of rollup security, yet very few rollups have reached even Stage 1 and most remain at Stage 0. You can see why the state of the rollup industry genuinely worries Vitalik.
4. In fact, when it comes to the scaling bottleneck, Rollup layer2 solutions themselves still have plenty of room to improve performance.
1. Use Blob space more efficiently through data compression (see the sketch after this list). OP-Rollups currently have a dedicated compressor component for this work, while ZK-Rollups get "compression" intrinsically: what they submit to mainnet is an off-chain-generated SNARK/STARK validity proof rather than the raw transaction data;
2. Reduce layer2's dependence on mainnet as much as possible, falling back on optimistic fraud-proof techniques to guarantee L2 security only in special circumstances. Plasma, for example, keeps most of its data off-chain, while deposits and withdrawals all happen on mainnet, so mainnet can still guarantee their security.
This means layer2 should treat only critical operations such as deposits and withdrawals as strongly coupled to mainnet. That both reduces mainnet's burden and improves L2's own performance. The Sequencer parallel-processing capability mentioned earlier, off-chain screening, classification, and pre-processing of large volumes of transactions, and the hybrid rollup promoted by @MetisL2 (OP-Rollup for ordinary transactions, a ZK route for special withdrawal requests) all share similar considerations.
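To make item 1 concrete, here is a minimal compression sketch (my own illustration, not the actual OP Stack compressor, which works over RLP-encoded channel data): rollup transactions share a lot of structure, so a general-purpose compressor shrinks a batch substantially before it is packed into Blobs.

```python
# Minimal batch-compression illustration (illustrative only). Rollup
# transactions repeat structure (same selectors, recurring addresses),
# so zlib achieves a strong ratio, meaning more transactions per blob.
import os
import zlib

# Fake a batch of 700 "transactions": 32-byte sender and 32-byte
# recipient drawn from small repetitive sets, a 4-byte selector, and
# an 8-byte amount.
senders = [os.urandom(32) for _ in range(20)]
recipients = [os.urandom(32) for _ in range(50)]
selector = bytes.fromhex("a9059cbb")  # the common ERC-20 transfer selector

batch = b"".join(
    senders[i % 20] + recipients[i % 50] + selector + os.urandom(8)
    for i in range(700)
)

compressed = zlib.compress(batch, level=9)
print(f"raw: {len(batch)} bytes, compressed: {len(compressed)} bytes, "
      f"ratio: {len(batch) / len(compressed):.1f}x")
```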
To sum up: Vitalik's article thinking through Ethereum's future scaling roadmap is genuinely enlightening. In particular, it conveys his dissatisfaction with layer2's current state of development, his optimism about the performance headroom of Blobs, and his anticipation of future sharding technology, and it even points out several directions worth optimizing for layer2.
In the end, the only real uncertainty rests with layer2 itself: how will it accelerate its own development?