Author: Vitalik Buterin, ethresear.ch; Compiled by: Songxue, Golden Finance
The main difference between Ethereum and most other (finality-bearing) proof-of-stake systems is that Ethereum tries to support a very large number of validators: we currently have 895,000 validator objects, and a simple Zipf's-law analysis suggests that this corresponds to tens of thousands of unique individuals or entities. The purpose of this is to support decentralization, allowing ordinary individuals to participate in staking without requiring everyone to give up their agency and hand control to one of a small number of staking pools.
However, this approach requires the Ethereum chain to process a huge number of signatures per slot (about 28,000 today; 1,790,000 after SSF), which is a very high load. Supporting this load requires significant technical sacrifices:
- It requires a complicated attestation-propagation mechanism that splits attestations across multiple subnets, along with hyper-optimized BLS signature operations to verify all those signatures, and more.
- We do not have a clear alternative that is both efficient enough and quantum-resistant.
- Fork-choice fixes such as view merge become more complicated because individual signatures cannot be extracted.
- SNARKing the signatures is difficult because there are so many of them. Helios needs to operate over a dedicated extra signature, the sync committee signature.
- It increases the safe minimum slot time by requiring three sub-slots in a slot instead of two.
The signature aggregation system may seem reasonable at first glance, but in reality it creates systemic complexity that permeates every part of the stack.
What's more, it does not even achieve its goal. The minimum staking requirement is still 32 ETH, which is out of reach for many. And a simple logical analysis shows that, in the long run, a system where everyone signs cannot truly bring staking to ordinary people: if Ethereum has 500 million users and 10% of them stake, that means 100 million signatures per slot. In information-theoretic terms, processing slashing in this design requires at least 12.5 MB of data-availability space per slot, roughly as much as the goal of full danksharding (!!!). Perhaps doable, but making staking itself depend on data availability sampling would add a large amount of complexity, and that is with only about 0.6% of the world's population staking, before even getting into the computational problem of verifying so many signatures.
So, rather than relying on cryptographers to produce magic bullets (or magic body armor) to make an ever-growing number of signatures per slot possible, I propose a philosophical shift: let go of that expectation from the start. This would greatly expand the PoS design space and allow a great deal of technical simplification, make the system more secure by letting Helios SNARK over Ethereum consensus directly, and make even conservative, long-established signature schemes such as Winternitz viable as a path to quantum resistance.
Why not "just use committees"?
Many non-Ethereum blockchains facing this exact problem use a committee-based approach to security: in each slot, they randomly select N validators (for example, N roughly equal to 1000), and those validators are responsible for finalizing that slot. It is worth recalling why this approach falls short: it lacks accountability.
To see why, suppose a 51% attack occurs. This could be a finality-reversion attack or a censorship attack. To carry it out, the economic actors controlling the majority of the stake still have to agree to attack, that is, run software that participates in the attack on all of their validators, including those that never end up being selected for the committee; the mathematics of random sampling guarantees this. However, the penalty they bear for such an attack is small, because most of the validators that agreed to the attack are never observed, having never been elected to the committee.
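A minimal sketch of why accountability is weak under pure committee sampling (the committee size and validator counts are illustrative, matching the N of roughly 1000 above):

```python
# Under random sampling, an attacker must run attacking software on all
# of their validators, since they cannot know in advance which ones will
# be sampled; but only the sampled validators are ever observable.
committee_size = 1000
total_validators = 1_000_000
attacking = 510_000                   # validators of a 51% attacker

# Expected number of attacking validators that appear on the committee
expected_seen = committee_size * attacking / total_validators   # 510.0
# Fraction of the attacking stake that is actually accountable
slashable_fraction = expected_seen / attacking                  # ~0.001
```

Only about 0.1% of the attacking stake can be penalized, however large the attacker's total stake.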
Currently, Ethereum sits at the opposite extreme: in the event of a 51% attack, a large majority of the entire attacking validator set has its stake slashed. The current cost of attack is approximately 9 million ETH (approximately $20 billion), and that assumes network synchronization breaks in a way that maximizes the attacker's advantage.
In my view this cost is higher than we need, and we can afford to make some sacrifices here. Even an attack cost of 1-2 million ETH would be entirely sufficient. Moreover, the main centralization risk in Ethereum today lies somewhere else entirely: large staking pools would not be much less powerful even if the minimum stake amount were reduced to close to zero.
This is why I advocate a moderate solution: one that sacrifices some validator accountability but still keeps the total amount of slashable ETH quite high, and in exchange gives us most of the benefits of a smaller validator set.
What would 8192 signatures per slot look like under SSF?
Assuming a traditional two-round consensus protocol (similar to what Tendermint uses, and what SSF will almost inevitably use), each participating validator needs two signatures per slot. We have to work within this reality. I see three main ways to do so.
Method 1: Go all-in on decentralized staking pools
The Zen of Python contains a key maxim:
There should be one - and preferably only one - obvious way to do it.
On the question of staking egalitarianism, Ethereum currently violates this rule, because we are pursuing two different strategies toward that goal at the same time: (i) small-scale solo staking, and (ii) decentralized staking pools using distributed validator technology (DVT). For the reasons above, (i) can only serve a subset of solo stakers; there will always be many people for whom the minimum deposit is too large. Yet Ethereum pays a very high technical-burden cost to support (i).
One possible solution is to abandon (i) and go all-in on (ii). We could raise the minimum stake to 4096 ETH and set a total cap of 4096 validators (approximately 16.7 million ETH). Small-scale stakers would be expected to join DVT pools, either by providing funds or by becoming node operators. To prevent abuse by attackers, the node-operator role would need to be reputation-gated in some way, and pools would compete by offering different options in this regard. Providing funds would be permissionless.
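The cap arithmetic behind the 16.7 million figure is straightforward:

```python
# Approach 1 parameters from the text: 4096-ETH minimum stake and a
# hard cap of 4096 validators.
min_stake = 4096          # ETH per validator
validator_cap = 4096
total_stake_cap = min_stake * validator_cap
print(total_stake_cap)    # 16,777,216 ETH, i.e. roughly 16.7 million
```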
We could make pooled staking in this model more "forgiving" by capping penalties, for example at 1/8 of the total stake provided. This would reduce the trust required in node operators, though it is worth approaching with caution given the issues outlined here.
Method 2: Two-tier staking
We create two tiers of stakers: a "heavy" tier, requiring 4096 ETH, which participates in finalization, and a "light" tier with no minimum stake (and no deposit or withdrawal delays, and no slashing risk), which adds a further layer of security. For a block to be finalized, the heavy tier must finalize it and at least 50% of online light validators must attest to it.
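As a sketch, the two-tier finality condition can be written as a simple predicate (the function and parameter names are illustrative, not from any spec):

```python
def is_final(heavy_finalized: bool,
             light_attesting: int,
             light_online: int) -> bool:
    """A block is final only if the heavy tier finalized it AND at
    least 50% of online light validators attested to it."""
    return heavy_finalized and light_attesting * 2 >= light_online

# If either tier withholds, the chain stalls rather than finalizing:
assert is_final(True, 600, 1000)       # both tiers agree -> final
assert not is_final(True, 400, 1000)   # light tier short of 50%
assert not is_final(False, 900, 1000)  # heavy tier did not finalize
```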
This heterogeneity is good for censorship resistance and attack resistance, because a successful attack must corrupt both the heavy and the light tier. If one tier is corrupted and the other is not, the chain halts; and if the heavy tier is corrupted, it can be slashed.
Another benefit of this approach is that the light tier can include ETH that is simultaneously used as in-application collateral. The main disadvantage is that it makes staking less egalitarian by creating a divide between small and large stakers.
Method 3: Rotating participation (i.e. committees, but accountable)
We take an approach similar to the super-committee design proposed here: for each slot, we select 4096 of the currently active validators, and we carefully adjust that set over time in a way that preserves safety.
However, we make some different parameter choices to get "maximum gain" within this framework. In particular, we allow validators to participate with arbitrarily high balances: if a validator holds at least some amount M of ETH (where M would have to be floating), they participate in the committee every epoch; if a validator holds N < M ETH, they are on the committee with probability N/M in any given epoch.
An interesting lever we have here is to decouple "weight" for incentive purposes from "weight" for consensus purposes: the reward for each validator on the committee should be the same (at least for validators with <= M ETH), which keeps each validator's average reward proportional to its balance. Meanwhile, we can still count the committee's consensus votes weighted by ETH. This ensures that breaking finality requires an amount of ETH equal to at least one third of the total ETH on the committee.
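A minimal sketch of this selection and reward rule (the value of M and the helper names are illustrative; M would float in practice):

```python
import random

M = 1024  # floating threshold; 1024 is just an illustrative value

def on_committee(balance: float, rng: random.Random) -> bool:
    """Validators with >= M ETH always serve; a validator with
    N < M ETH serves with probability N / M in a given epoch."""
    return balance >= M or rng.random() < balance / M

def expected_reward(balance: float, reward_per_slot: float) -> float:
    """With flat per-slot rewards on the committee, the expected
    reward stays proportional to balance for validators <= M ETH."""
    return min(balance / M, 1.0) * reward_per_slot
```

For example, `expected_reward(512, 1.0)` is exactly twice `expected_reward(256, 1.0)`: even though committee rewards are flat per slot, the sampling probability restores proportionality on average.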
A Zipf's-law analysis gives the amount of ETH as follows:
- At each power-of-two balance level, the number of validators is inversely proportional to the balance at that level, so the total balance of the validators at every level is the same.
- Therefore, the committee will have an equal amount of ETH participating from every balance level, except for levels above the barrier M, whose validators are always on the committee.
- Therefore, below the barrier we have log2(M) levels with k validators each, and above the barrier k + k/2 + k/4 + ... = 2k validators. Therefore, k = 4096 / (log2(M) + 2).
The largest validator will have M*k ETH. We can work backwards: if the largest validator has 2^18 = 262144 ETH, this implies (roughly) M = 1024 and k = 256.
The total amount of staked ETH is then:
- The total stake of the top 512 validators (2^18*1 + 2^17*2 + ... + 2^10*2^8 = 2,359,296)
- Plus the randomly sampled smaller stake (2^8 * (2^9 + 2^8 + 2^7 + ...), which is approximately 2^8 * 2^10 = 2^18)
This gives a total of 2,621,440 ETH, so the cost of attack (one third of the total) is approximately 900k ETH.
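The totals above can be reproduced directly (using the M = 1024, k = 256 values from the worked example):

```python
# Top validators (always on the committee): one validator of 2^18 ETH,
# two of 2^17, ... up to 2^8 validators of 2^10 ETH; each of these
# nine levels contributes the same 2^18 ETH.
top_stake = sum((2 ** (18 - i)) * (2 ** i) for i in range(9))

# Randomly sampled smaller levels: the text approximates
# 2^8 * (2^9 + 2^8 + ...) as 2^8 * 2^10 = 2^18.
sampled_stake = 2 ** 8 * 2 ** 10

total = top_stake + sampled_stake
print(total)       # 2,621,440 ETH
print(total // 3)  # 873,813 ETH: the ~900k attack cost
```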
The main disadvantage of this approach is that it introduces somewhat more in-protocol complexity, along with the complexity of adjusting the consensus protocol so that we still get safety as the randomly sampled committee changes over time.
The main advantage is that it preserves a recognizable form of solo staking, keeps a single unified system, and even allows the minimum stake to drop to very low levels (e.g. 1 ETH).
Summary
If we decide that post-SSF we want to stick to 8192 signatures per slot, this makes life much easier for technical implementers, as well as for builders of surrounding infrastructure such as light clients. It becomes easier for anyone to run a consensus client, and users, staking enthusiasts, and others can immediately work from that assumption. The future load of the Ethereum protocol is no longer an unknown: it could be raised later via hard forks, but only once developers are confident that improved technology can handle more signatures per slot with the same ease.
The remaining work is to decide which of the three approaches above we prefer, or perhaps something else entirely. It will come down to which trade-offs we are comfortable with, and in particular how we handle related issues such as liquid staking, which can likely be addressed separately from the technical questions, which now become easier.