Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news' editorial.
The second quarter of 2025 marks a reckoning for blockchain scaling, with the cracks in the layer-2 model widening as capital continues to flow into rollups and sidechains. The original promise of L2 was simple: scale L1. But costs, delays, liquidity fragmentation, and a fractured user experience continue to add up.
Summary
- L2 was supposed to extend Ethereum, but it introduces new problems, since it relies on a centralized sequencer that is a single point of failure.
- At its core, L2 handles sequencing and state computation, and settles down to L1 using optimistic or ZK rollups. Each has trade-offs: optimistic rollups have long finality, and ZK rollups are computationally expensive.
- The future of efficiency lies in separating computation from validation: centralized supercomputers for computation and distributed networks for parallel verification, achieving scalability without sacrificing security.
- The "total order" model of blockchain is outdated. Shifting to local, account-based ordering unlocks massive parallelism, ends the L2 compromise, and paves the way for a scalable, future-proof Web3 foundation.
New projects like stablecoin payments are beginning to question the L2 paradigm, asking whether L2 is really secure and whether its sequencers are a single point of failure or censorship. Web3 often drifts toward the pessimistic view that fragmentation is inevitable.
Are we building our future on solid foundations or a house on sand? L2 must face and answer these questions. After all, if Ethereum (ETH)'s base consensus layer were inherently fast, cheap, and infinitely scalable, the entire L2 ecosystem as we know it today would be redundant. A myriad of rollups and sidechains have been proposed as "add-ons to L1" to alleviate the fundamental limitations of the underlying L1. This is a form of technical debt: a complex, piecemeal workaround that burdens Web3 users and developers alike.
To answer these questions, we need to break the entire L2 concept down into its basic components and uncover a path to a more robust and efficient design.
The structure of L2
Structure determines function. This is a fundamental principle of biology, and it also applies to computer systems. Determining the right structure and architecture for L2 requires careful consideration of its functionality.
At its core, every L2 performs two essential functions: sequencing, i.e., ordering transactions, and computing and proving new states. A sequencer, whether a centralized entity or a decentralized network, collects, orders, and batches user transactions. The batch is then executed, producing state updates (e.g., new token balances). That state must then be settled on L1 via an optimistic or ZK rollup for security.
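As a rough illustration of those two functions, here is a minimal Python sketch of a sequencer that batches transactions and a pure state-transition function over token balances. All names and structures are hypothetical, not any production rollup's API.

```python
from dataclasses import dataclass, field

@dataclass
class Tx:
    sender: str
    recipient: str
    amount: int

@dataclass
class Sequencer:
    mempool: list = field(default_factory=list)

    def submit(self, tx: Tx) -> None:
        self.mempool.append(tx)  # collect incoming transactions

    def next_batch(self, size: int) -> list:
        # Order and batch: here, simple first-come-first-served
        batch, self.mempool = self.mempool[:size], self.mempool[size:]
        return batch

def apply_batch(balances: dict, batch: list) -> dict:
    """Pure state transition: old balances + ordered batch -> new balances."""
    new = dict(balances)
    for tx in batch:
        if new.get(tx.sender, 0) >= tx.amount:  # skip underfunded transfers
            new[tx.sender] -= tx.amount
            new[tx.recipient] = new.get(tx.recipient, 0) + tx.amount
    return new

seq = Sequencer()
seq.submit(Tx("alice", "bob", 5))
seq.submit(Tx("bob", "carol", 2))
state = apply_batch({"alice": 10, "bob": 1}, seq.next_batch(2))
print(state)  # {'alice': 5, 'bob': 4, 'carol': 2}
```

Because `apply_batch` is a pure function of the old state and the ordered batch, it is exactly the kind of deterministic computation that can be handed off and re-verified, which is the separation the rest of this article builds on.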
Optimistic rollups assume all state transitions are valid and rely on a challenge period (typically 7 days) during which anyone can submit fraud proofs. This creates a major UX trade-off and slows down finality. ZK rollups use zero-knowledge proofs to mathematically verify the correctness of all state transitions before they reach L1, allowing near-instantaneous finality. The trade-off is more computation and a more complex construction. The ZK prover itself can be buggy, with potentially disastrous consequences, and formal verification of provers, if possible at all, is extremely expensive.
Sequencing is a governance and design choice for each L2. Some prefer centralized solutions for efficiency (or perhaps censorship capabilities), while others prefer decentralized solutions for greater fairness and robustness. Ultimately, each L2 decides how to perform its own sequencing.
Generating and validating state claims can be done much more efficiently. Once a batch of transactions is ordered, computing the next state becomes a pure computational task that can be performed by a single supercomputer focused solely on raw speed, without any decentralization overhead. That supercomputer can even be shared between L2s.
Once this new state is claimed, its validation becomes a separate, parallel process: a large network of verifiers can work concurrently to check the claims. This is also the philosophy behind Ethereum's stateless clients and high-performance implementations like MegaETH.
Parallel verification is infinitely scalable
No matter how fast an L2 (and its supercomputer) generates claims, the verification network can always keep up by adding more verifiers. Latency here is precisely the verification time, a fixed minimum. This is a theoretical optimum, achieved by using decentralization to verify rather than to compute.
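A toy model of this scaling property, assuming a trivially re-checkable claim format (real verifiers would check ZK proofs or re-execute batches): one fast producer emits claims, and any number of independent verifiers re-check them in parallel, so verification throughput grows simply by adding workers.

```python
from concurrent.futures import ThreadPoolExecutor

def verify(claim):
    # A claim asserts: old_state + batch_total == claimed_new_state.
    # Each verifier independently re-checks this, no coordination needed.
    old_state, batch_total, claimed_new_state = claim
    return claimed_new_state == old_state + batch_total

claims = [(s, 10, s + 10) for s in range(1000)]  # 1000 honest claims
claims.append((0, 10, 99))                       # one bogus claim

# Raising max_workers scales verification throughput with zero code changes.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(verify, claims))

print(sum(results), "valid of", len(results))  # 1000 valid of 1001
```

The key property is that `verify` is embarrassingly parallel: claims share no state, so the verifier set can grow until aggregate throughput matches any claim-production rate, leaving per-claim verification time as the only irreducible latency.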
Once sequencing and state validation are complete, the L2's job is almost done. The final step is to submit the verified state to the decentralized network, L1, to ensure final settlement and security.
This last step reveals the problem: blockchain is a formidably expensive settlement layer for L2. The main computational work is done off-chain, yet L2 must pay a hefty premium for final processing on L1. L2s face a double overhead: L1's limited throughput is burdened by the linear ordering of all transactions combined, causing congestion and high data-transmission costs, and on top of that they must endure the finality delay inherent to L1.
For ZK rollups, that delay is a matter of minutes. For optimistic rollups, it is compounded by the week-long challenge period. Though necessary, this is a security trade-off, and it comes at a cost.
Farewell, Web3's "total order" fantasy
Ever since Bitcoin (BTC), people have worked hard to merge all blockchain transactions into one total order. After all, we're talking about a block*chain*! Unfortunately, this "perfect order" paradigm is a costly fantasy and clearly overkill for L2 payments. How ironic that in one of the world's largest decentralized networks, the world's computers behave like a single-threaded desktop.
It's time to move on. The future is local, account-based ordering, where only transactions that interact with the same account need to be ordered, allowing for massive parallelism and true scalability.
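One simple way to see the parallelism this unlocks: group transactions by the accounts they touch, for example with a union-find over account names (purely illustrative, not any project's actual design). Groups that share no account are independent, so only transactions within a group need a mutual order; groups can execute concurrently.

```python
# Union-find over account names: transactions touching a common account
# land in the same group and must be ordered; distinct groups are
# independent and can be processed in parallel.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

txs = [("alice", "bob"), ("carol", "dave"), ("bob", "erin")]
for sender, recipient in txs:
    union(sender, recipient)

groups = {}
for i, (sender, _) in enumerate(txs):
    groups.setdefault(find(sender), []).append(i)

# tx0 and tx2 share the account "bob", so they form one group;
# tx1 touches neither and forms its own, independently executable group.
print(sorted(map(sorted, groups.values())))  # [[0, 2], [1]]
```

With a global total order, all three transactions would be serialized; with local ordering, only the pair sharing "bob" is, and in a realistic payment workload most transactions touch disjoint accounts, so most of the chain's work parallelizes.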
Of course, global ordering implies local ordering, but local ordering on its own is also a remarkably simple and straightforward solution. After fifteen years of "blockchain," it's time to open our eyes and craft a better future. The scientific field of distributed systems has already moved from the strong consistency theory of the 1980s (which blockchain implements) to the strong eventual consistency models of 2015 that unlock parallelism and concurrency. It's time for the Web3 industry to likewise leave the past behind and follow forward-looking scientific advances.
The days of L2 compromise are over. It's time to build a foundation designed for the future, where the next wave of Web3 adoption will arrive.
Chen Xiaohong
Chen Xiaohong He’s the Chief Expertise Officer at Pi Squared Inc., the place he works to develop quick, parallel, and distributed techniques for funds and settlement. His pursuits embrace program correctness, theorem proving, scalable ZK options, and making use of these strategies to all programming languages. Xiaohong earned a bachelor’s diploma in arithmetic from Peking College and a doctorate in laptop science from the College of Illinois at Urbana-Champaign.