"Ethereum is the world computer" is one of the most successful slogans in crypto. The technical translation most engineers now accept is: L1 settles, rollups execute, and data availability sampling keeps verification cheap, so the ecosystem scales to planetary demand without giving up trust-minimization.
A recent paper by Brandon "Cryptskii" Ramsay - Why Ethereum Cannot Scale to a Literal "World Computer" - argues the translation is wrong, and that the slogan only survives by quietly redefining what "world computer" means. The argument is small, mostly information-theoretic, and worth working through carefully. What follows is a faithful walk-through of the core lower bound, with the formalism preserved where it matters.
01 The Claim Under The Claim
Ramsay draws a line between two definitions that usually get conflated:
- Literal world computer. A trust-minimized execution substrate where average per-user service rate stays above some constant epsilon > 0 as the user population N grows without bound, under the system's own decentralization constraints.
- Ecosystem world computer. A federation of layers and operators that can be described as "running the world's code," even if end-to-end trust-minimization or per-user throughput depends on shared bottlenecks or privileged actors.
The second definition is a branding statement. Any sufficiently labelled collection of off-chain systems with a shared settlement layer satisfies it. Only the first definition makes a scalability claim that can be proved or refuted. The paper targets the first.
02 The Conserved Quantity
To keep a rollup trust-minimized, two properties have to hold at the DA layer:
- Verification without a privileged operator.
- Unilateral exit or forced inclusion.
Let C be the DA capacity - the maximum raw bytes per protocol step the DA layer can make
available while staying within its decentralization constraints. Let s > 0 be the minimum
DA-bytes required per L2 transition to preserve those properties.
Lemma (exit-completeness implies published information).
For a verifier to reconstruct a batch's effect and compute exit claims, some exit-complete witness must be
DA-available. If the DA transcript omits it, there exist two distinct histories H1 != H2 with
identical DA transcripts that produce different exit claims - indistinguishable to anyone not trusting the
operator. That breaks trust-minimization.
From there the ceiling drops out immediately:
Theorem 1 (DA bottleneck).
T_L2 <= C / s
Aggregate trust-minimized throughput across all rollups settling to a DA layer of capacity C
with per-transition footprint s satisfies that bound.
"More rollups" does not create more trustless throughput. It partitions a shared DA budget.
03 Why DAS Does Not Raise The Ceiling
The usual counter is that data availability sampling makes DA scale with validator count. This is the step most engineers get wrong, and the paper's correction is surgical.
DAS uses erasure coding. The raw payload C is expanded by factor k and split into n = kC / s_sh shares of size s_sh, and any (1-beta)n shares suffice to reconstruct. Honest samplers each draw q shares uniformly at random; an adversary withholds fraction beta.
Theorem 2 (DAS detection).
Pr[detect] >= 1 - e^(-beta*h*q)
where h is the number of honest samplers. To get detection probability 1 - delta, it suffices that
q >= ln(1/delta) / (beta*h).
What this says - and only what this says - is that verification cost per validator shrinks as honest
participation grows. It does not say that DA capacity C grows with validator count. Those
are different quantities. Mixing them up is where the "DAS = unlimited DA" intuition comes from.
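The distinction shows up immediately if you compute the per-sampler requirement from Theorem 2. In the sketch below (parameter choices are illustrative, not from the paper), the required q shrinks as honest participation h grows, while the DA capacity C never appears in the formula at all:

```python
import math

def required_samples(delta: float, beta: float, h: int) -> int:
    """Smallest per-sampler q giving detection probability >= 1 - delta
    under the Theorem 2 bound Pr[detect] >= 1 - e^(-beta*h*q)."""
    return math.ceil(math.log(1 / delta) / (beta * h))

# Cheaper verification per validator as h grows; same channel capacity C.
for h in (1, 10, 100):
    q = required_samples(delta=1e-9, beta=0.25, h=h)
    print(f"h={h:>3}  q={q}")
```

This is exactly the conflation the paper flags: the quantity that improves with validator count is q, the verification cost, not C, the capacity.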
The actual ceiling is information-flow, not verification-cost:
- Some decentralization-eligible publisher must upload the coded payload kC each step. Call their minimum uplink U*.
- Some independent retriever set must reconstruct it each step. Call their aggregate downlink D*.
Theorem 3 (information-flow DA bound).
C <= min(U* / k, D* / k)
Combine that with Theorem 1:
Theorem 4 (aggregate L2 ceiling under DAS).
T_L2 <= (1/s) * min(U* / k, D* / k)
DAS lets each validator sample fewer bytes. It does nothing about the fact that a full coded payload still has to move from some publisher to some retriever set every step, over residential-grade links if decentralization is preserved.
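Plugging residential-grade numbers into Theorem 4 shows how low the link-bound ceiling sits. Every link budget below is an assumption chosen to resemble commodity hardware, not a figure from the paper:

```python
def l2_ceiling(u_star: float, d_star: float, k: float, s: float) -> float:
    """Theorem 4: T_L2 <= (1/s) * min(U*/k, D*/k)."""
    c = min(u_star / k, d_star / k)   # Theorem 3: information-flow DA bound
    return c / s                      # Theorem 1: divide by per-transition footprint

u_star = 25e6 / 8    # assumed publisher uplink: 25 Mbit/s, in bytes/s
d_star = 200e6 / 8   # assumed aggregate retriever downlink: 200 Mbit/s
k = 2                # erasure-coding expansion factor
s = 100              # assumed DA bytes per L2 transition

print(f"ceiling: {l2_ceiling(u_star, d_star, k, s):,.0f} transitions/s")
```

With these assumptions the publisher's uplink, not the retrievers' downlink, is the binding term, which is the usual situation on asymmetric residential connections.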
04 The Impossibility, Stated Plainly
Call a system world-computer-scalable if there exists a constant epsilon > 0
such that for arbitrarily large N, average per-user service is at least epsilon
under the system's decentralization constraints.
Assume:
- Rollups are trust-minimized, so s > 0.
- Publishers and retrievers are commodity-grade, so U*, D*, k, and s are all constants in N.
Then T_L2 is bounded by a constant independent of N. Therefore
T_L2 / N -> 0 as N -> infinity, which contradicts the definition. Rollups move
computation, but they cannot move information without spending DA, and DA is conserved.
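The vanishing-service claim is just division, which a short sketch makes visible (the ceiling and epsilon are illustrative constants):

```python
# T_L2 is a constant in N, so per-user service T_L2 / N drops below
# any fixed epsilon > 0 once N is large enough.
T_L2 = 83_886        # assumed constant aggregate ceiling, transitions/step
epsilon = 1e-3       # required per-user service rate

for N in (10**6, 10**8, 10**10):
    per_user = T_L2 / N
    print(f"N={N:>14,}  per-user={per_user:.2e}  meets epsilon: {per_user >= epsilon}")
```

At a million users the bound still clears epsilon; two orders of magnitude later it does not, and no choice of constants changes which side of the limit you end up on.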
05 External DA Doesn't Escape The Bound
The natural next move is to push DA off Ethereum onto a modular DA chain, a restaked committee, or an
app-specific data network. This relocates C to C_ext. It does not change the shape
of the bound.
Both Theorem 1 and Theorem 3 only require exit-complete public information and information-flow constraints
under decentralization. As long as the external DA layer carries the same security assumptions as the L1 - no
privileged server, independent reconstruction - you get T_L2 <= C_ext / s, with
C_ext <= min(U_ext* / k, D_ext* / k).
A different constant. Same asymptotic.
06 A Concrete Number
To anchor the theory: the EIP-4844 blob size is fixed at 131,072 bytes. EIP-7691 raised the blob target/max per block to (6, 9). Danksharding discussions frequently reference b ~= 64 blobs per step as an upper regime.
At b = 64:
C_blob = 64 * 131,072 = 8,388,608 bytes/step
Divide by any per-transition footprint s you believe in, and you get the aggregate ceiling
across every rollup combined. Increasing b raises the constant. It does not change the asymptotic
that per-user service vanishes as N grows.
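Sweeping blob counts against a few candidate footprints makes the "raises the constant, not the asymptotic" point tangible. The s values are assumptions; pick your own and the shape is the same:

```python
BLOB_SIZE = 131_072                 # EIP-4844 blob size in bytes

# Blob counts: EIP-7691 target (6), max (9), and a danksharding-style 64.
# Footprints s are assumed bytes per trust-minimized L2 transition.
for b in (6, 9, 64):
    c = b * BLOB_SIZE               # DA bytes available per step
    row = "  ".join(f"s={s}: {c // s:>9,}" for s in (16, 100, 1_000))
    print(f"b={b:>2}  {row}")
```

Every cell in that table is a constant in N; growing b moves between rows, never off the table.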
07 Second-Order Consequences
Once you accept that all trust-minimized activity shares one publication channel, several familiar properties stop being accidents and start being structural:
- Congestion is an externality, not a bug. Unrelated applications contend for the same DA budget. In the worst case, the pairwise interference across N active transactions is Theta(N^2): any Omega(N) subset can be forced to serialize against another Omega(N) subset through the shared channel.
- MEV presupposes a privileged ordering domain. Third-party reordering, insertion, and censorship extraction only exist because there is a shared mempool and a sequencer whose choices bind unrelated parties. Remove the shared ordering domain and systemic MEV collapses to local bilateral negotiation.
- Shared execution expands the adversarial surface. A world computer where arbitrary code runs against shared state is by construction a composability-attack surface - hidden behavior, dependency attacks, cross-contract exploitation.
None of these require a protocol bug. They fall out of "everyone writes to one log."
08 What The Argument Actually Forecloses
The paper does not claim:
- Ethereum is useless.
- Rollups do not help compute scale.
- DAS is broken.
- No blockchain can serve many users.
It claims exactly this: any architecture in which trust-minimized activity funnels through a single conserved DA channel has aggregate throughput bounded by a constant under stable decentralization constraints, and therefore cannot provide non-vanishing per-user service at planetary adoption.
That is an information-flow result, not a cryptography result. No amount of prover optimization, circuit compression, or block-time tuning changes it. You either raise the conserved constant or you change the architecture so unrelated interactions stop sharing one publication channel.
The interesting question the paper leaves for the reader is not whether the bound is correct - the arithmetic is short enough to check in an afternoon. It is which of the two assumptions you are willing to relax: trust-minimization, or a single global ordering and DA domain. Current roadmaps mostly relax the first while continuing to brand the result as the second. Whether that is a fair trade is, at this point, a definitional question rather than a technical one.