tldr;
Runtime Verification audit and deposit contract verification
Runtime Verification recently completed its audit and formal verification of the eth2 deposit contract bytecode. This is an important milestone that brings us closer to the eth2 Phase 0 mainnet. Now that this work is complete, I am asking for community input and feedback. If there are any gaps or errors in the formal specification, please post an issue on the eth2 spec repository.
The formal semantics, specified in the K framework, define the precise behaviors the EVM bytecode should exhibit and prove that these behaviors hold. These include input validations, iterative Merkle tree updates, logs, and more. Take a look here for a (semi) high-level discussion of what is specified, and dig deeper here for the full formal K specification.
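As an illustration of the kind of behavior covered, here is a simplified Python sketch of the incremental Merkle tree insertion the deposit contract performs on each deposit – a sketch of the algorithm only, not the verified bytecode, with illustrative names throughout (the real contract also mixes the deposit count into the returned root):

```python
from hashlib import sha256

DEPTH = 32  # the deposit contract uses a depth-32 Merkle tree

def hash_pair(left: bytes, right: bytes) -> bytes:
    return sha256(left + right).digest()

# Roots of empty subtrees at each height, precomputed once.
ZERO_HASHES = [b"\x00" * 32]
for _ in range(DEPTH - 1):
    ZERO_HASHES.append(hash_pair(ZERO_HASHES[-1], ZERO_HASHES[-1]))

branch = [b"\x00" * 32] * DEPTH  # stored left siblings along the insertion path
deposit_count = 0

def insert_leaf(leaf: bytes) -> None:
    """Add one deposit leaf, touching only O(log n) stored nodes."""
    global deposit_count
    deposit_count += 1
    size = deposit_count
    node = leaf
    for height in range(DEPTH):
        if size % 2 == 1:          # odd: store this node as a future left sibling
            branch[height] = node
            return
        node = hash_pair(branch[height], node)
        size //= 2

def get_root() -> bytes:
    """Recompute the current root from the stored branch and zero hashes."""
    node = b"\x00" * 32
    size = deposit_count
    for height in range(DEPTH):
        if size % 2 == 1:
            node = hash_pair(branch[height], node)
        else:
            node = hash_pair(node, ZERO_HASHES[height])
        size //= 2
    return node
```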
I would like to thank Daejun Park (Runtime Verification) for leading this effort, as well as Martin Lundfall and Carl Beekhuizen for their many comments and critiques throughout the process.
Again, if this sort of thing is your cup of tea, now is the time to provide your thoughts and feedback on the formal verification – please take a look.
The word of the month is “optimization”
The last month has been dedicated to optimizations.
Although a 10x optimization here and a 100x optimization there may not seem so tangible to the Ethereum community today, this phase of development is just as important as any other in getting us to the finish line.
Beacon chain optimizations are essential
(why can’t we just max out our machines with the beacon chain?)
The beacon chain – the heart of eth2 – is a required component for the rest of the sharded system. To sync any shard, whether one or many, a client must also sync the beacon chain. So, to be able to run the beacon chain and a handful of shards on a consumer machine, it is critical that the beacon chain consumes relatively few resources, even with high validator participation (~300,000+ validators).
To this end, much of the eth2 client teams’ efforts over the past month have been devoted to optimizations, reducing the resource requirements of phase 0, the beacon chain.
I am happy to report that we are seeing fantastic progress. The following is not complete, but is rather just a preview to give you an idea of the work.
Lighthouse manages 100,000 validators like child’s play
Lighthouse took down its ~16,000-validator testnet a few weeks ago after an attestation gossip relay loop caused nodes to essentially DoS themselves. Sigma Prime quickly fixed this bug and moved on to bigger and better things – that is, a 100,000-validator testnet! The last two weeks have been dedicated to optimizations to make this full-scale testnet a reality.
One of the goals of each progressively larger Lighthouse testnet is to ensure that thousands of validators can easily run on a small VPS with 2 CPUs and 8GB of RAM. Early tests with 100,000 validators saw clients using 8GB of RAM consistently, but after a few days of optimization, Paul was able to reduce this to a stable 2.5GB, with some ideas to reduce it even further soon. Lighthouse also saw a 70% gain in state hashing, which, along with BLS signature verification, is proving to be the primary computational bottleneck for eth2 clients.
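To give a sense of what “state hashing” involves, here is a toy sketch (my own illustration, not Lighthouse’s code) of naive SSZ-style Merkleization; recomputing something like this over a large beacon state every slot is what makes hashing expensive, and is why clients cache the roots of unchanged subtrees:

```python
from hashlib import sha256

def merkle_root(chunks) -> bytes:
    """Naive Merkleization of 32-byte chunks, recomputed from scratch."""
    layer = list(chunks) or [b"\x00" * 32]
    while len(layer) & (len(layer) - 1):   # pad the leaf layer to a power of two
        layer.append(b"\x00" * 32)
    while len(layer) > 1:
        layer = [sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

# Re-hashing every chunk of a large state each slot is the costly part;
# optimized clients only re-hash the subtrees that actually changed.
```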
The relaunch of the Lighthouse testnet is imminent. Join their discord to follow the progress.
The Prysmatic testnet keeps chugging along, and sync is greatly improved
A few weeks ago, the current Prysm testnet celebrated its 100,000th slot with over 28,000 validators validating. Today, the testnet has passed 180,000 slots and has over 35,000 active validators. Maintaining a public testnet while shipping updates, optimizations, stability fixes, etc. is quite an achievement.
There is a ton of tangible progress happening in Prysm. I’ve spoken with a number of validators over the past few months and, from their perspective, the client continues to improve noticeably. A particularly exciting improvement is in sync speeds. The Prysmatic team has optimized their client’s sync speed from around 0.3 blocks/second to over 20 blocks/second. This significantly improves the validator UX, allowing validators to get online and start contributing to the network much faster.
Another interesting addition to the Prysm testnet is Alethio’s new eth2 node monitor – eth2stats.io. This is an opt-in service that allows nodes to aggregate statistics in a single place. This will allow us to better understand the state of the testnets and, ultimately, the eth2 mainnet.
Don’t trust me! Pull it down and try it for yourself.
everyone likes proto_array
The eth2 core specification frequently (and intentionally) specifies expected behavior in a non-optimal manner. The spec code is optimized for readability of intent rather than for performance.
A specification describes the correct behavior of a system, while an algorithm is a procedure for executing a specified behavior. Many different algorithms can faithfully implement the same specification. Thus the eth2 specification allows for a wide variety of different implementations of each component, as client teams weigh a number of different trade-offs (e.g. computational complexity, memory usage, implementation complexity, etc.).
One such example is the fork choice – the spec used to find the head of the chain. The eth2 spec specifies the behavior using a naive algorithm to clearly show the moving parts and edge cases – e.g. how to update weights when a new attestation arrives, what to do when a new block is finalized, etc. A direct implementation of the spec algorithm would never meet the production needs of eth2. Instead, client teams must think more deeply about computational trade-offs in the context of their client’s operation and implement a more sophisticated algorithm to meet those needs.
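For a concrete sense of what the naive, spec-style approach looks like, here is a minimal Python sketch (illustrative names and a simplified Store; the real spec derives weights from validators’ latest attestations rather than reading them from a precomputed map):

```python
from dataclasses import dataclass

@dataclass
class Store:
    parents: dict        # block_root -> parent_root
    weights: dict        # block_root -> accumulated attestation weight
    justified_root: bytes

def get_children(store: Store, root: bytes) -> list:
    # Linear scan over all known blocks -- clear, but slow.
    return [b for b, parent in store.parents.items() if parent == root]

def get_head(store: Store) -> bytes:
    # Walk down from the justified block, always taking the heaviest child.
    head = store.justified_root
    while True:
        children = get_children(store, head)
        if not children:
            return head
        # Pick the heaviest child; break ties by root, roughly as the spec does.
        head = max(children, key=lambda r: (store.weights.get(r, 0), r))
```

Re-running this walk (and rebuilding the weights behind it) on every new attestation is exactly the kind of cost a production client needs to avoid.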
Fortunately for client teams, about 12 months ago Protolambda implemented a bunch of different fork choice algorithms, documenting the pros and cons of each. Recently, Paul from Sigma Prime observed a major bottleneck in Lighthouse’s fork choice algorithm and went looking for something new. He dug up proto_array from Protolambda’s old list.
It took some work to port proto_array to the most recent specifications, but once integrated, proto_array was found to “run in orders of magnitude less time and perform significantly fewer database reads.” After the initial integration into Lighthouse, it was also quickly picked up by Prysmatic and is available in their most recent release. With this algorithm’s clear advantages over the alternatives, proto_array is quickly becoming a crowd favorite, and I definitely expect to see other teams picking it up soon!
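For flavor, here is a rough, simplified sketch of the core proto_array idea as I understand it – not Lighthouse’s or Prysmatic’s actual implementation. Blocks live in a flat array in insertion order (children always after parents), so a single backward pass can apply a batch of weight changes and recompute best-descendant pointers, making head lookups trivial:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    parent: Optional[int]                 # index of the parent in the flat array
    weight: int = 0                       # total attestation weight of this subtree
    best_descendant: Optional[int] = None

def apply_score_changes(nodes: list, deltas: list) -> None:
    """One backward pass: apply per-node weight deltas and recompute best descendants."""
    best_child = [None] * len(nodes)
    best_desc = list(range(len(nodes)))   # a leaf's best descendant is itself
    for i in range(len(nodes) - 1, -1, -1):
        nodes[i].weight += deltas[i]
        p = nodes[i].parent
        if p is not None:
            deltas[p] += deltas[i]        # roll this subtree's delta into the parent
            # Children are visited before their parent, so nodes[i] is final here.
            if best_child[p] is None or nodes[i].weight > nodes[best_child[p]].weight:
                best_child[p] = i
                best_desc[p] = best_desc[i]
    for i, node in enumerate(nodes):
        node.best_descendant = best_desc[i]

def find_head(nodes: list, justified_index: int) -> int:
    # Finding the head is now a single lookup instead of a tree walk.
    return nodes[justified_index].best_descendant
```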
Phase 2 research underway – Quilt, eWASM and now TXRX
Phase 2 of eth2 is the addition of state and execution to the sharded eth2 universe. Although some fundamentals are relatively defined (e.g., communication between shards via crosslinks and Merkle proofs), the design landscape for Phase 2 is still relatively open. Quilt (ConsenSys research team) and eWASM (EF research team) have devoted much of their effort over the past year to researching and better defining this vast open design space, alongside ongoing work to specify and build Phases 0 and 1.
To this end, there has recently been a flurry of activity on public calls, in discussions, and in posts on ethresear.ch. There are some great resources to help you get the lay of the land. The following is just a small sample:
In addition to Quilt and eWASM, the newly formed TXRX (ConsenSys research team) is also devoting part of its efforts to Phase 2 research, initially focusing on better understanding the complexity of cross-shard transactions as well as researching and prototyping possible paths for the integration of eth1 into eth2.
All Phase 2 R&D is relatively greenfield territory. There is a huge opportunity here to dig deep and make an impact. Throughout this year, expect more concrete specs as well as developer playgrounds to sink your teeth into.
Whiteblock publishes libp2p gossipsub test results
This week, Whiteblock published libp2p gossipsub testing results as the outcome of a grant co-funded by ConsenSys and the Ethereum Foundation. This work aims to validate the gossipsub algorithm for eth2’s use and provide insight into its performance limits to aid follow-up testing and algorithmic improvements.
The short version is that the results from this wave of testing look solid, but additional testing should be done to better observe how message propagation scales with network size. Check out the full report detailing their methodology, topology, experiments, and results!
A stacked spring!
This spring is full of exciting conferences, hackathons, eth2 bounties and much more! There will be a group of eth2 researchers and engineers at each of these events. Come and chat! We’d love to talk to you about engineering progress, validating on testnets, what to expect this year, and anything else that might be on your mind.
Now is the perfect time to get involved! Many clients are in the testnet phase, so there are all kinds of tools to build, experiments to run, and fun to have.
Here’s a look at the many events that should have strong eth2 representation:
🚀