Essay · No. 001 · April 2026

On Ethereum / Scalability

The DA Ceiling: Why Rollups and DAS Don't Make Ethereum a Literal World Computer

A careful walk-through of the lower bound behind a narrow but serious claim: if trust-minimized activity shares one conserved data-availability channel, aggregate throughput stays bounded under stable decentralization constraints.

Irrefutable Labs

"Ethereum is the world computer" is one of the most successful slogans in crypto. The technical translation most engineers now accept is: L1 settles, rollups execute, and data availability sampling keeps verification cheap, so the ecosystem scales to planetary demand without giving up trust-minimization.

A recent paper by Brandon "Cryptskii" Ramsay - Why Ethereum Cannot Scale to a Literal "World Computer" - argues the translation is wrong, and that the slogan only survives by quietly redefining what "world computer" means. The argument is small, mostly information-theoretic, and worth working through carefully. What follows is a faithful walk-through of the core lower bound, with the formalism preserved where it matters.

"The slogan only survives if \"world computer\" means an ecosystem of labels, not a literal trust-minimized execution substrate with non-vanishing per-user service."

01 The Claim Under The Claim

What "world computer" has to mean for the statement to be testable.

Ramsay draws a line between two definitions that usually get conflated:

  1. A literal world computer: a trust-minimized execution substrate whose per-user service does not vanish as the number of users grows without bound.
  2. An ecosystem: Ethereum as a settlement layer plus whatever collection of off-chain systems carries the label.

The second definition is a branding statement. Any sufficiently labelled collection of off-chain systems with a shared settlement layer satisfies it. Only the first makes a scalability claim that can be proved or refuted. The paper targets the first.

02 The Conserved Quantity

Data availability is the thing the architecture cannot hand-wave away.

To keep a rollup trust-minimized, two properties have to hold at the DA layer:

  1. Verification without a privileged operator.
  2. Unilateral exit or forced inclusion.

Let C be the DA capacity - the maximum raw bytes per protocol step the DA layer can make available while staying within its decentralization constraints. Let s > 0 be the minimum DA-bytes required per L2 transition to preserve those properties.

Lemma (exit-completeness implies published information).

For a verifier to reconstruct a batch's effect and compute exit claims, some exit-complete witness must be DA-available. If the DA transcript omits it, there exist two distinct histories H1 != H2 with identical DA transcripts that produce different exit claims - indistinguishable to anyone not trusting the operator. That breaks trust-minimization.
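
To make the indistinguishability concrete, here is a toy sketch, not the paper's formalism: the header-only publication scheme below is invented for the example. The operator publishes only a batch header, so two batches with different transfers leave identical DA transcripts yet support different exit claims.

```python
# Toy model: the DA transcript records only what the operator chooses to publish.
# Here the operator omits the batch contents and posts just a batch header.

def da_publish(batch: dict) -> tuple:
    # Only the header reaches the DA layer; the transactions do not.
    return (batch["batch_number"], batch["tx_count"])

# Two distinct histories: same header, different transfers inside the batch.
batch_a = {"batch_number": 7, "tx_count": 1, "txs": [("alice", "bob", 10)]}
batch_b = {"batch_number": 7, "tx_count": 1, "txs": [("alice", "carol", 10)]}

assert da_publish(batch_a) == da_publish(batch_b)   # identical DA transcripts
assert batch_a["txs"] != batch_b["txs"]             # different exit claims

# A verifier holding only the DA transcript cannot decide whether bob or carol
# is entitled to exit with 10 - it must trust the operator, which is exactly
# the failure the lemma rules out.
```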

From there the ceiling drops out immediately:

Theorem 1 (DA bottleneck).

T_L2 <= C / s

Aggregate trust-minimized throughput across all rollups settling to a DA layer of capacity C with per-transition footprint s satisfies that bound.

"More rollups" does not create more trustless throughput. It partitions a shared DA budget.

03 Why DAS Does Not Raise The Ceiling

Verification cost can shrink without the underlying information budget growing.

The usual counter is that data availability sampling makes DA scale with validator count. This is the step most engineers get wrong, and the paper's correction is surgical.

DAS uses erasure coding. The raw payload C is expanded by a factor k and split into n = kC / s_sh shares of size s_sh each; any (1-beta)n shares suffice to reconstruct. Each of h honest samplers draws q shares uniformly at random; an adversary withholds a fraction beta of the shares.

Theorem 2 (DAS detection).

Pr[detect] >= 1 - e^(-beta*h*q)

To get detection probability 1-delta, it suffices that q >= ln(1/delta) / (beta*h).

What this says - and only what this says - is that verification cost per validator shrinks as honest participation grows. It does not say that DA capacity C grows with validator count. Those are different quantities. Mixing them up is where the "DAS = unlimited DA" intuition comes from.
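
To see the distinction numerically, here is a short sketch of the sufficiency condition q >= ln(1/delta) / (beta*h); the delta and beta values are illustrative choices, not the paper's parameters:

```python
import math

def per_sampler_q(delta: float, beta: float, h: int) -> float:
    """Theorem 2 sufficiency: q >= ln(1/delta) / (beta * h)."""
    return math.log(1 / delta) / (beta * h)

delta = 1e-9   # target probability of missing a withholding attack
beta = 0.25    # fraction of shares the adversary withholds

print(f"total honest samples that suffice: ~{math.log(1 / delta) / beta:.0f}")
for h in (100, 1_000, 10_000, 100_000):
    print(f"h = {h:>7,}: q >= {per_sampler_q(delta, beta, h):.4f} shares per sampler")

# Values below 1 simply mean a constant amount of total sampling, spread thinner.
# Per-validator verification cost falls as h grows; C - the coded payload that
# must be published and stay retrievable every step - appears nowhere above.
```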

The actual ceiling is information-flow, not verification-cost:

Theorem 3 (information-flow DA bound).

C <= min(U* / k, D* / k)

Here U* and D* are the per-step upload and download budgets that publishers and the retriever set can sustain under the decentralization constraints; the coded payload is k times the raw bytes, so both budgets are divided by k. Combine that with Theorem 1:

Theorem 4 (aggregate L2 ceiling under DAS).

T_L2 <= (1/s) * min(U* / k, D* / k)

DAS lets each validator sample fewer bytes. It does nothing about the fact that a full coded payload still has to move from some publisher to some retriever set every step, over residential-grade links if decentralization is preserved.
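
A sketch of Theorems 3 and 4 with the symbols as free parameters; the U*, D*, k, and s values below are placeholders for illustration, not measurements or figures from the paper:

```python
def da_capacity_bound(u_star: float, d_star: float, k: float) -> float:
    """Theorem 3: C <= min(U* / k, D* / k)."""
    return min(u_star / k, d_star / k)

def l2_ceiling(u_star: float, d_star: float, k: float, s: float) -> float:
    """Theorem 4: T_L2 <= (1/s) * min(U* / k, D* / k)."""
    return da_capacity_bound(u_star, d_star, k) / s

# Placeholder per-step budgets, purely illustrative: bytes publishers can push
# out (U*) and the retriever set can pull in (D*) over home-grade links.
u_star = 30_000_000
d_star = 120_000_000
k = 2       # erasure-coding expansion factor
s = 100     # hypothetical DA bytes per trust-minimized transition

print(f"C    <= {da_capacity_bound(u_star, d_star, k):,.0f} bytes/step")
print(f"T_L2 <= {l2_ceiling(u_star, d_star, k, s):,.0f} transitions/step, summed over every rollup")

# A larger k (more redundancy) lowers both terms; faster links only raise the
# constant. Nothing in the bound grows with the number of users N.
```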

04 The Impossibility, Stated Plainly

The asymptotic fails even if the constants get better.

Call a system world-computer-scalable if there exists a constant epsilon > 0 such that for arbitrarily large N, average per-user service is at least epsilon under the system's decentralization constraints.

Assume:

  1. Every trust-minimized L2 transition consumes at least s > 0 DA bytes (the exit-completeness lemma).
  2. Under stable decentralization constraints, DA capacity stays bounded by a constant: C <= min(U* / k, D* / k), and none of U*, D*, k, or s improves with N.

Then T_L2 <= C / s is bounded by a constant independent of N, so T_L2 / N -> 0 as N -> infinity, contradicting the definition. Rollups move computation, but they cannot move information without spending DA, and DA is conserved.
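
The arithmetic behind the contradiction fits in a few lines; the constants below are the same illustrative placeholders used earlier:

```python
# With C and s pinned by the decentralization constraints, per-user service
# T_L2 / N vanishes regardless of how generous the constants are.
C = 8_388_608   # bytes/step (the blob budget from section 06, as an example)
s = 100         # hypothetical DA bytes per trust-minimized transition
t_l2 = C / s    # aggregate ceiling, independent of N

for n_users in (10**6, 10**8, 10**10):
    print(f"N = {n_users:>14,}: per-user service <= {t_l2 / n_users:.2e} transitions/step")

# No fixed epsilon > 0 survives arbitrarily large N, so the system is not
# world-computer-scalable in the paper's sense.
```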

05 External DA Doesn't Escape The Bound

Changing the venue changes the constant, not the shape of the theorem.

The natural next move is to push DA off Ethereum onto a modular DA chain, a restaked committee, or an app-specific data network. This relocates C to C_ext. It does not change the shape of the bound.

Both Theorem 1 and Theorem 3 only require exit-complete public information and information-flow constraints under decentralization. As long as the external DA layer carries the same security assumptions as the L1 - no privileged server, independent reconstruction - you get T_L2 <= C_ext / s, with C_ext <= min(U_ext* / k, D_ext* / k).

A different constant. Same asymptotic.

06 A Concrete Number

Even optimistic blob budgets only move the ceiling upward by a constant factor.

To anchor the theory: EIP-4844 fixes the blob size at 131,072 bytes; EIP-7691 raised the per-block blob target/max to (6, 9); and Danksharding discussions frequently reference b ~= 64 blobs per step as an upper regime.

At b = 64:

C_blob = 64 * 131,072 = 8,388,608 bytes/step

Divide by any per-transition footprint s you believe in, and you get the aggregate ceiling across every rollup combined. Increasing b raises the constant. It does not change the asymptotic that per-user service vanishes as N grows.
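
The same arithmetic as a short script; the blob figures come from the text above, while the slot length and the per-transition footprints s are assumptions for illustration:

```python
BLOB_BYTES = 131_072      # EIP-4844 blob size
BLOBS_PER_STEP = 64       # the upper regime referenced for Danksharding

c_blob = BLOBS_PER_STEP * BLOB_BYTES
print(f"C_blob = {c_blob:,} bytes/step")   # 8,388,608

SLOT_SECONDS = 12         # assuming one protocol step per 12-second slot

# Hypothetical per-transition DA footprints, purely for illustration.
for s in (16, 100, 300):
    ceiling = c_blob / s
    print(f"s = {s:>3} bytes: <= {ceiling:>9,.0f} transitions/step "
          f"(~{ceiling / SLOT_SECONDS:>6,.0f}/s), every rollup combined")
```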

07 Second-Order Consequences

Shared publication channels bring structural side effects with them.

Once you accept that all trust-minimized activity shares one publication channel, several familiar properties stop being accidents and start being structural: fees spike whenever aggregate demand approaches the fixed budget, rollups bid against one another for the same blobspace, and congestion in one application prices out unrelated ones.

None of these require a protocol bug. They fall out of "everyone writes to one log."

08 What The Argument Actually Forecloses

The paper is narrower than the slogan it is arguing against.

The paper does not claim that rollups are useless, that DAS is insecure, or that the DA budget cannot keep being raised by constant factors.

It claims exactly this: any architecture in which trust-minimized activity funnels through a single conserved DA channel has aggregate throughput bounded by a constant under stable decentralization constraints, and therefore cannot provide non-vanishing per-user service at planetary adoption.

That is an information-flow result, not a cryptography result. No amount of prover optimization, circuit compression, or block-time tuning changes it. You either raise the conserved constant or you change the architecture so unrelated interactions stop sharing one publication channel.

The interesting question the paper leaves for the reader is not whether the bound is correct - the arithmetic is short enough to check in an afternoon. It is which of the two assumptions you are willing to relax: trust-minimization, or a single global ordering and DA domain. Current roadmaps mostly relax the first while continuing to brand the result as the second. Whether that is a fair trade is, at this point, a definitional question rather than a technical one.

Based on Brandon "Cryptskii" Ramsay, Why Ethereum Cannot Scale to a Literal "World Computer" (Dec 2025). All theorems and proofs are the author's; framing and prose here follow the submitted article text.