Ethereum activated the Fusaka upgrade on December 3, 2025, increasing the network’s data availability capacity through blob parameter overrides that gradually expanded blob targets and maximums.
Two subsequent adjustments raised the target from 6 blobs per block to 10, then to 14, with the maximum cap rising to 21. The goal was to cut Layer 2 rollup costs by increasing throughput for blob data, the compressed transaction packets that rollups publish to Ethereum for security and finality.
Three months after data collection began, the results reveal a gap between capacity and utilization. A MigaLabs analysis of more than 750,000 slots since Fusaka's activation shows that the network is not reaching the 14-blob target.
Median blob utilization actually decreased after the first parameter adjustment, and blocks containing 16 or more blobs show elevated failure rates, suggesting reliability degradation at the edges of the new capacity.
The report’s conclusion is straightforward: no further blob parameter increases until the elevated failure rates normalize and demand materializes for the headroom already created.
What Fusaka changed and when it happened
The pre-Fusaka Ethereum baseline, established via EIP-7691, set the target at 6 blobs per block with a maximum of 9. The Fusaka upgrade introduced two sequential blob parameter overrides.
The first activated on December 9, pushing the target to 10 and the maximum to 15. The second activated on January 7, 2026, pushing the target to 14 and the maximum to 21.
These changes did not require full-feature hard forks; the override mechanism lets Ethereum dial in capacity through lightweight client coordination rather than protocol-level upgrades.
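For reference, the schedule reads cleanly as data. A minimal sketch in Python, assuming wall-clock activation times; real clients key these overrides to fork epochs, and names like `BlobParams` and `params_at` are ours, not client code:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class BlobParams:
    target: int   # blob count the fee market steers each block toward
    maximum: int  # hard cap on blobs per block

# Mainnet activation schedule as described above.
SCHEDULE = [
    (datetime(2025, 12, 3, tzinfo=timezone.utc), BlobParams(6, 9)),    # Fusaka activates; EIP-7691 baseline still in force
    (datetime(2025, 12, 9, tzinfo=timezone.utc), BlobParams(10, 15)),  # first override
    (datetime(2026, 1, 7, tzinfo=timezone.utc), BlobParams(14, 21)),   # second override
]

def params_at(ts: datetime) -> BlobParams:
    """Return the blob parameters in force at a given timestamp."""
    current = SCHEDULE[0][1]
    for activation, params in SCHEDULE:
        if ts >= activation:
            current = params
    return current
```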
Analysis from MigaLabs, which released reproducible code and methodology, tracked blob usage and network performance throughout this transition.
The analysis found that the median number of blobs per block fell from 6 before the first override to 4 afterward, despite the expansion of network capacity. Blocks containing 16 or more blobs remain extremely rare, occurring between 165 and 259 times each during the observation window, depending on the specific blob count.
The network has headroom that it does not use.
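The utilization statistic in question is simple to state in code. A sketch, assuming slot-indexed blob counts from a chain indexer (the input layout is hypothetical, not MigaLabs' schema):

```python
from statistics import median

def median_blob_usage(blob_counts, override_slot):
    """blob_counts: iterable of (slot, blobs_in_block) pairs from a chain indexer."""
    before = [n for slot, n in blob_counts if slot < override_slot]
    after = [n for slot, n in blob_counts if slot >= override_slot]
    return median(before), median(after)

# Per the report, the before/after medians come out as 6 and 4,
# even though the target rose from 6 to 10 at the override.
```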
A parameter discrepancy: the report’s timeline text describes the first override as increasing the target from 6 to 12, but the Ethereum Foundation mainnet announcement and client documentation describe the adjustment as 6 to 10.
We use the Ethereum Foundation parameters as the source of record: 6/9 at the baseline, 10/15 after the first override, 14/21 after the second. We nevertheless treat the report’s dataset of observed usage and failure rates as the empirical backbone.

Failure rates climb at high blob counts
Network reliability, measured by missed slots (slots whose blocks fail to propagate or be attested in time), shows a clear trend.
At lower blob counts, the baseline failure rate is around 0.5%. Once blocks reach 16 blobs or more, failure rates climb into a 0.77% to 1.79% range. At 21 blobs, the maximum introduced by the second override, the failure rate reaches 1.79%, more than triple the baseline.
The analysis breaks this down by blob counts from 10 to 21, showing a gradual degradation curve that accelerates beyond the 14-blob target.
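The underlying computation is straightforward to reproduce in outline. A sketch, assuming slot-level records with a blob count and a missed flag; the record layout here is hypothetical, and MigaLabs' published code defines its own:

```python
from collections import defaultdict

def failure_rate_by_blob_count(slots):
    """slots: iterable of (blob_count, missed) pairs, where blob_count is the
    number of blobs carried by the block proposed for that slot and missed is
    True if the slot ended up missed (block failed to propagate or attest in time)."""
    totals, misses = defaultdict(int), defaultdict(int)
    for blob_count, missed in slots:
        totals[blob_count] += 1
        misses[blob_count] += int(missed)
    return {n: misses[n] / totals[n] for n in sorted(totals)}

# Per the report: roughly 0.5% at low blob counts, rising through 0.77%
# at 16 blobs to 1.79% at the 21-blob maximum.
```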
This degradation matters because it suggests that network infrastructure, from validator hardware to bandwidth and attestation timing, struggles to handle blocks at the high end of the new capacity.
If demand eventually rises to meet the 14-blob target or reach the 21-blob maximum, the elevated failure rates could translate into significant finality delays or reorganization risk. The report frames this as a stability limit: the network can technically handle large blocks, but whether it can do so consistently and reliably remains an open question.

Blob economics: why the floor price is important
Fusaka did not only increase capacity. It also changed blob pricing via EIP-7918, which introduced a reserve floor price to prevent blob auctions from collapsing to 1 wei.
Before this change, when execution costs dominated and blob demand was low, blob base fees could drop to the point of disappearing as a price signal. Layer 2 rollups pay blob fees to publish their transaction data to Ethereum, and these fees are meant to reflect the computational and network costs the blobs impose.
When fees fall near zero, the economic feedback loop breaks: rollups consume capacity without paying proportionately, and the network loses visibility into actual demand.
The EIP-7918 reserve floor price ties blob fees to execution costs, ensuring that even when demand is low, price remains a meaningful signal.
This avoids a free-rider problem, in which cheap blobs encourage wasteful usage, and yields clearer data for future capacity decisions: if blob fees remain elevated despite added capacity, demand is real; if they sit at the floor, there is spare capacity.
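In outline, the reserve is a comparison between what a blob pays under the blob fee market and a floor tied to the execution base fee. A simplified paraphrase of that check from the EIP-7918 design; the constants and the surrounding excess-blob-gas update rule are abridged from the spec, so treat this as a sketch rather than consensus code:

```python
GAS_PER_BLOB = 2**17    # blob gas consumed per blob (EIP-4844)
BLOB_BASE_COST = 2**13  # execution-gas-denominated reserve constant (per EIP-7918, simplified here)

def blob_fee_below_reserve(base_fee_per_gas: int, blob_base_fee: int) -> bool:
    """True when the market blob fee sits below the execution-cost-tied floor.
    While this holds, the fee update rule stops excess blob gas from draining,
    so blob fees cannot decay toward 1 wei."""
    return BLOB_BASE_COST * base_fee_per_gas > GAS_PER_BLOB * blob_base_fee
```

When the condition holds, the protocol dampens the downward fee adjustment instead of letting blob fees collapse, which is why post-Fusaka fees bottom out at a meaningful floor.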
Early data from Hildobby’s Dune dashboard, which tracks Ethereum blobs, shows that blob fees have stabilized after Fusaka rather than continuing the downward spiral seen in previous periods.
The average number of blobs per block confirms MigaLabs’ finding that utilization has not risen to fill the new capacity. Blocks generally contain fewer than the 14-blob target, and the distribution remains heavily skewed toward lower counts.

What the effectiveness data shows
Fusaka succeeded in expanding technical capacity and proving that the blob parameter override mechanism works without requiring contentious hard forks.
The price floor appears to be working as intended, preventing blob fees from becoming economically meaningless. But utilization lags capacity, and reliability at the edges of the new capacity shows measurable degradation.
The failure rate curve suggests that Ethereum’s current infrastructure handles the pre-Fusaka baseline and the first override’s 10/15 parameters comfortably, but begins to strain beyond 16 blobs.
This creates a risk profile: if Layer 2 activity increases and steadily pushes blocks toward the 21-blob maximum, the network could face elevated failure rates that compromise finality and reorg resistance.
Demand patterns offer another signal. That median blob usage decreased after the first override, despite increased capacity, suggests that Layer 2 rollups are not currently constrained by blob availability.
Either their transaction volumes have not increased enough to require more blobs per block, or they are optimizing compression and batching to fit existing capacity rather than expanding their usage.
Blobscan, a dedicated blob explorer, shows individual rollups posting relatively consistent blob counts over time rather than scaling up to exploit the new headroom.
The pre-Fusaka concern was that limited blob capacity would hamper Layer 2 scaling and keep rollup fees high as networks competed for scarce data space. Fusaka has resolved the capacity constraint, but the bottleneck appears to have shifted.
Rollups are not filling the available space, which means either demand has not arrived yet or other factors, such as sequencer economics, user activity, and fragmentation across rollups, limit growth more than blob availability did.
What comes next
Ethereum’s roadmap calls for scaling up PeerDAS, the data availability sampling overhaul that shipped with Fusaka, to further increase blob capacity while improving decentralization and security properties.
However, Fusaka’s results suggest that raw capacity is not currently the major constraint.
The network has room to grow into the 14/21 settings before requiring further expansion, and the reliability curve at high blob counts indicates that infrastructure may need to catch up before capacity increases again.
Failure rate data provide a clear boundary condition. If Ethereum raises capacity while blocks of 16 or more blobs still fail at elevated rates, it risks introducing systemic instability that could surface during periods of high demand.
The safest path is to let utilization increase toward the current target, monitor whether failure rates improve as clients optimize for higher blob loads, and adjust settings only once the network demonstrates that it can reliably handle edge cases.
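That policy can be stated as a guard condition. A hypothetical sketch; the thresholds are illustrative, not from the report:

```python
def safe_to_raise_blob_params(failure_rates, median_usage,
                              target=14, baseline_rate=0.005, tolerance=1.5):
    """Illustrative guard, not from the report: raise parameters only once
    (a) high-blob-count blocks fail at near-baseline rates and
    (b) demand has grown toward the current target."""
    high_counts_ok = all(rate <= baseline_rate * tolerance
                         for n, rate in failure_rates.items() if n >= 16)
    demand_ok = median_usage >= 0.8 * target  # illustrative demand threshold
    return high_counts_ok and demand_ok
```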
Fusaka’s effectiveness depends on the metric. It succeeded in increasing capacity and stabilizing blob prices via the reserve floor. It has not yet produced an increase in utilization or resolved reliability issues at maximum capacity.
The upgrade has created room for future growth, but whether that growth will materialize remains an open question that the data has yet to answer.