
Ethereum completed its Fusaka upgrade on December 3, marking one of the network’s most critical steps toward long-term scalability.
The upgrade builds on a series of changes since the 2022 Merge and follows the Dencun and Pectra upgrades, which reduced Layer 2 fees and increased blob capacity.
Fusaka goes further by restructuring how Ethereum confirms that data is available, expanding the channel through which Layer 2 networks like Arbitrum, Optimism, and Base publish their compressed transaction batches.
It does this through a new system called PeerDAS (peer data availability sampling), which lets Ethereum verify that large volumes of transaction data were actually published without each node needing to download all of it.
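In rough terms, blob data is erasure-coded so that any half of it suffices to reconstruct the whole, and each node then checks a handful of random pieces. The toy Python sketch below illustrates only that statistical argument; the cell counts and sample sizes are illustrative, not the PeerDAS specification.

```python
import random

# Toy illustration of data availability sampling (DAS), the idea behind
# PeerDAS. All names and parameters here are illustrative, not the spec.
# A blob is erasure-coded so that any >= 50% of its cells suffice to
# reconstruct it, so a node only needs to check that random cells are
# retrievable rather than downloading the whole blob.

EXTENDED_CELLS = 128  # cells per erasure-coded blob (illustrative)
SAMPLES = 16          # random cells each node requests (illustrative)

def is_probably_available(fetch_cell, rng=random) -> bool:
    """fetch_cell(index) -> bool: whether peers can serve that cell.

    If fewer than half the cells are retrievable, the blob cannot be
    reconstructed, and each random sample then fails with probability
    greater than 1/2. All 16 samples succeeding against unavailable
    data therefore happens with probability below roughly 2**-16, so
    success implies availability with overwhelming odds.
    """
    indices = rng.sample(range(EXTENDED_CELLS), SAMPLES)
    return all(fetch_cell(i) for i in indices)

# Example: a blob where only 40% of cells were ever published.
published = set(random.sample(range(EXTENDED_CELLS), int(EXTENDED_CELLS * 0.4)))
print(is_probably_available(lambda i: i in published))  # almost always False
```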
Buterin says Fusaka is ‘incomplete’
However, Vitalik Buterin, co-founder of Ethereum, cautioned that Fusaka should not be seen as a full version of sharding, the network’s long-term scaling plan.
Buterin described PeerDAS as the first functional implementation of data sharding, but noted that several essential elements remain unfinished.
According to him, Ethereum can now make more data available and at a lower cost, but the complete system envisioned over the last decade still requires work on several layers of the protocol.
Given this, Buterin pointed out three ways Fusaka falls short of full sharding.
First, Ethereum’s base layer still processes transactions sequentially, meaning execution throughput has not increased alongside new data capacity.
Second, block builders, specialized operators who assemble transactions into blocks, must still handle full data payloads even though validators no longer do, creating a risk of centralization as data volumes increase.
Finally, Ethereum still uses a single global mempool, forcing every node to process the same pending transactions and limiting network scalability.
His message essentially presents Fusaka as the foundation of the next development cycle. He declared:
“The next two years will give us time to refine the PeerDAS mechanism, carefully scale it up while continuing to ensure its stability, use it to scale L2s, and then when ZK-EVMs are mature, turn it inward to scale Ethereum L1 gas as well.”
Glamsterdam becomes the next focal point
Fusaka’s most immediate successor is the Glamsterdam upgrade, planned for 2026.
If Fusaka expands Ethereum’s data bandwidth, Glamsterdam seeks to ensure the network can handle the operational load that comes with it.
The main feature is enshrined proposer-builder separation, known as ePBS. This change brings the split between block proposers and block builders into the protocol itself, reducing Ethereum’s reliance on the handful of external block builders that currently dominate the market.
As data volumes increase under Fusaka, those builders could gain even more influence. ePBS aims to prevent this outcome by formalizing how builders bid for blocks and how validators participate in the process.
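A hypothetical sketch of the proposer-side step conveys the shape of that auction. Real ePBS involves signed headers, payload-timeliness committees, and slashing, none of which is modeled here; the bid fields below are illustrative.

```python
from dataclasses import dataclass

# Hypothetical toy of the in-protocol auction that ePBS formalizes.
# Only the proposer's selection step is shown.

@dataclass
class BuilderBid:
    builder: str
    value_wei: int      # payment offered to the proposer
    header_hash: str    # commitment to the block contents, revealed later

def select_bid(bids: list[BuilderBid]) -> BuilderBid:
    """The proposer commits to the highest-paying header without seeing
    the block body, so it cannot steal the builder's transactions."""
    return max(bids, key=lambda b: b.value_wei)

bids = [
    BuilderBid("builder-a", 42_000_000_000_000_000, "0xaaa..."),
    BuilderBid("builder-b", 57_000_000_000_000_000, "0xbbb..."),
]
print(select_bid(bids).builder)  # builder-b
```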
Alongside ePBS is a complementary feature called block-level access lists, which require builders to declare which parts of Ethereum’s state a block will touch before execution begins.
Client teams say this lets their software schedule work more efficiently and lays the groundwork for future parallel execution, an essential step as the network prepares for heavier computational loads.
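A minimal sketch, assuming each transaction simply declares the storage keys it touches, shows why declared accesses help a scheduler. The real EIP’s encoding is more involved, and a production scheduler must also respect transaction order when writes conflict; this greedy grouping ignores that.

```python
# Hypothetical sketch of why up-front access lists enable scheduling.
# Each tx declares the state keys it will touch; the client then groups
# non-conflicting txs into batches it could execute in parallel.

def parallel_batches(access_lists: list[set[str]]) -> list[list[int]]:
    """Greedily pack txs into batches with pairwise-disjoint accesses."""
    batches: list[tuple[list[int], set[str]]] = []
    for tx_index, keys in enumerate(access_lists):
        for tx_ids, used in batches:
            if used.isdisjoint(keys):   # no shared state -> safe to parallelize
                tx_ids.append(tx_index)
                used.update(keys)
                break
        else:                           # conflicts with every batch: start a new one
            batches.append(([tx_index], set(keys)))
    return [tx_ids for tx_ids, _ in batches]

txs = [{"A", "B"}, {"C"}, {"A"}, {"D"}]
print(parallel_batches(txs))  # [[0, 1, 3], [2]]
```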
Together, ePBS and block-level access lists form the heart of Glamsterdam’s market and performance reforms. They are considered structural prerequisites for operating a high-capacity data layer without sacrificing decentralization.
Other Ethereum upgrades planned
Beyond Glamsterdam lies another roadmap milestone, The Verge, centered on Verkle trees.
This system restructures the way Ethereum stores and verifies network state.
Instead of requiring full nodes to store all state locally, Verkle trees allow them to verify blocks with compact proofs, significantly reducing storage requirements. Fusaka has already partially eased this burden on the data side, since nodes no longer download every blob.
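A toy Merkle proof conveys the shape of the idea, even though Verkle trees use vector commitments to make proofs far smaller: the verifier keeps only a root and checks individual values against logarithmic-size proofs rather than the full state.

```python
import hashlib

# Toy Merkle proof illustrating "compact proof" verification, the idea
# behind The Verge. Real Verkle trees use polynomial/vector commitments
# with much smaller proofs; this only shows the shape of the argument.
# Assumes a power-of-two number of leaves for simplicity.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves: list[bytes], index: int) -> list[bytes]:
    """Collect the sibling hashes along the path from leaf to root."""
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        proof.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

state = [b"acct0", b"acct1", b"acct2", b"acct3"]   # stand-in for full state
root = merkle_root(state)                          # all the verifier stores
print(verify(root, b"acct2", 2, prove(state, 2)))  # True: log(n) proof, not full state
```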
For node operators and validators, this aligns with one of Ethereum’s top priorities: ensuring that running a node remains accessible without enterprise-grade hardware.
This work is important because Fusaka’s success increases the amount of data Ethereum can ingest. Yet without changes in state management, the cost of maintaining the chain could eventually increase.
The Verge aims to ensure the opposite: that Ethereum becomes easier to run even as it processes more data.
From there, Ethereum would turn to the Purge, a long-term effort to prune accumulated historical data and pay down technical debt, making the protocol lighter and simpler.
Beyond these changes lies the Splurge, a set of upgrades designed to refine the user and developer experience.
This would be achieved through improvements to account abstraction, new approaches to MEV mitigation, and continued cryptographic upgrades.
A global settlement layer
Together, these updates form successive stages of the same ambition:
“Ethereum is positioning itself as a global settlement layer capable of supporting millions of transactions per second through its Layer 2 ecosystem while maintaining the security guarantees of its base chain.”
Prominent ecosystem figures increasingly echo this framing. Joseph Lubin, co-founder of Ethereum, noted:
“The global economy will be built on Ethereum.”
Lubin highlighted the network’s uninterrupted operation for almost a decade and its role in settling more than $25 trillion last year.
He also noted that Ethereum currently hosts the largest share of stablecoins, tokenized assets, and real-world asset issuance, and that ETH itself has become a productive asset through staking, re-staking, and DeFi infrastructure.
His remarks reflect the broader thesis behind the current roadmap: a settlement platform that can operate continuously, absorb global financial activity, and remain open to any participant wishing to validate or settle transactions.
This future depends on three outcomes, according to CoinGecko. The network must remain scalable, allowing rollups to process large volumes of activity at predictable costs. It must remain secure, relying on thousands of independent validators whose ability to participate is not limited by hardware requirements. And it must remain decentralized, ensuring that anyone can run a node or validator without specialized equipment.


