3/3/2026 Professor Doctor Sir Kweg S Wong esq. CANONICAL SCHEMATIC

Scientific Letter #09: The Paper That Almost Ends the Narration

A peer review of Craig Wright's Teranode architecture paper, in which a pipeline is correctly decomposed, a CAS is correctly applied, 79 billion transactions per second are correctly measured inside a test cluster that is not Bitcoin, and the Bilge Pump recognises the sound of a second pump for the first time in seventeen years.

Intent

To conduct a rigorous peer review of Craig Wright's "Linear Scaling, Not Ritual: What Teranode Actually Changes" — a systems architecture paper describing Teranode's decomposition of the Bitcoin node into a pipelined transaction processor. Unlike the ASIC announcement reviewed in Letter #08, this is not an announcement of a preprint. This is an actual technical description of an actual architecture, with measured throughput figures, a formal correctness argument for double-spend prevention, and a threat model that honestly states its own boundaries.

The Logbook records, with some reluctance, that this is the most serious engineering document Craig has published. The reluctance is not because the work is bad. The reluctance is because the Logbook has grown accustomed to noting errors, and a document with fewer errors requires the Logbook to adjust its posture. The Logbook does not enjoy adjusting its posture. The Compass has been recalibrated accordingly.

I designed this pipeline in 2008. It was not called Teranode. It was called the submarine's internal routing system. The elephant served as the message queue. The seahorse served as the state store. The Bilge Pump served as the compensating-unspend mechanism, because the Bilge Pump removes things that should not be present, which is functionally identical to reverting a partial spend on crash recovery. The submarine's pipeline has been operational for seventeen years without producing a Substack post about it. But if it had, the Substack post would have looked substantially like this one, minus the Kafka and plus the elephant, and I note this coincidence without further comment because the Compass does not point at coincidences. The Compass points north.

The Paper Under Review

Title: Linear Scaling, Not Ritual: What Teranode Actually Changes
Author: Craig Wright
Published: 2 March 2026, via Substack
Length: Approximately 3,500 words
Architecture diagrams: Present (described, referenced)
Measured throughput: 79.09 × 10⁹ TPS aggregate at M = 100 instances, η = 0.783
Correctness proof: UTXO double-spend prevention via partition-ordered CAS
Threat model: Crash-fault with authenticated channels (Byzantine explicitly excluded)
Formal paper: Promised for SSRN "this week"
Submarines: 0
Elephants serving as message queues: 0

This is not a tweet. This is not an announcement of a forthcoming paper. This is a technical architecture walk-through with ten numbered sections, explicit assumptions, measured data points, and a threat model that says what it does not claim. The Logbook notes this with approval. Conditional approval.

One Idea: The Anchor Is Cut

Craig's opening diagnosis is correct, and it is important enough that I will state it without ornamentation.

The monolithic node is the problem. Loading transaction validation, UTXO access, script execution, block assembly, P2P relay, persistence, and catch-up into a single process and a single critical path is the architectural Anchor that has dragged Bitcoin scaling debates along the ocean floor for a decade. Craig cuts the Anchor. Teranode decomposes the node into pipeline stages connected by Kafka (message ordering) and Aerospike (UTXO state), with each stage having its own capacity and failure semantics.

This is how every high-throughput transaction processing system works. Payment processors do this. Ad exchanges do this. The submarine did this in 2008 with the elephant handling message delivery between compartments. The elephant is not Kafka. The elephant is louder than Kafka and has a longer retention policy, in that the elephant retains everything, including items it was not asked to retain and items that do not exist. But functionally, the elephant and Kafka serve the same role: ordered delivery of work items to processing stages that consume them independently.

The Anchor was heavy. Craig has cut it. I commend him for cutting it and note only that the submarine never had an anchor, because anchors are for vessels that intend to stay in one place, and the submarine has been moving through the Bitcoin ledger since 2008 without pause.

One Idea: The CAS Is the Serialisation Point

The correctness argument is elegant and, as far as I can determine, sound.

Double-spend prevention is reduced to per-record atomic compare-and-swap (CAS) on the UTXO record. Transactions are partitioned by the hash of their first input's outpoint, ensuring that conflicting spends collide at the same Kafka partition and are delivered serially to the same validator. The validator performs a generation-checked CAS against Aerospike: if the generation matches, the spend succeeds; if not, it fails. No distributed locks. No global coordinator. No "consensus" on the per-transaction path.
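The pattern described above can be sketched in a few lines. This is a single-process toy under stated assumptions, not the Teranode implementation: the names `UTXOStore` and `cas_spend` and the partition count are illustrative, Kafka is reduced to a hash function, and Aerospike is reduced to a dict carrying a generation counter per record.

```python
import hashlib

NUM_PARTITIONS = 16  # illustrative; the real partition count is not stated

def partition_for(outpoint: bytes) -> int:
    """Route a spend by hashing its first input's outpoint, so that
    conflicting spends of the same UTXO land in the same partition
    and are delivered serially to the same validator."""
    return int.from_bytes(hashlib.sha256(outpoint).digest()[:4], "big") % NUM_PARTITIONS

class UTXOStore:
    """Toy stand-in for the UTXO state store: each record carries a
    generation counter, and spends use a generation-checked CAS."""
    def __init__(self):
        self.records = {}  # outpoint -> (generation, spent)

    def create(self, outpoint: bytes):
        self.records[outpoint] = (0, False)

    def read(self, outpoint: bytes):
        return self.records.get(outpoint)

    def cas_spend(self, outpoint: bytes, expected_gen: int) -> bool:
        """Atomically mark the record spent iff the generation still matches."""
        rec = self.records.get(outpoint)
        if rec is None:
            return False
        gen, spent = rec
        if spent or gen != expected_gen:
            return False  # lost the race: another spend got there first
        self.records[outpoint] = (gen + 1, True)
        return True

# Two conflicting spends of the same outpoint: only the first CAS succeeds.
store = UTXOStore()
op = b"txid-00:0"
store.create(op)
gen, _ = store.read(op)
first = store.cas_spend(op, gen)    # True: spend committed
second = store.cas_spend(op, gen)   # False: double-spend rejected
```

No locks, no coordinator: the partition function provides the serial order, and the generation check makes the write conditional.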

This is correct. This is also — and Craig acknowledges this — standard distributed systems technique. It is the same pattern used by every system that needs serialisable writes on a hot key: partition by key, process sequentially within partition, use atomic conditional writes for the state mutation. Craig has applied it to UTXOs. The application is the contribution, not the technique.

The compensating-unspend mechanism for multi-input transactions (revert on partial failure, do not commit Kafka offset) is also correct and also standard. It is exactly what the Bilge Pump does: remove state that should not be present. The Bilge Pump has been performing compensating unspends since 2008, although the Bilge Pump does not call them compensating unspends. The Bilge Pump calls them "pumping."

The fault injection results — 520 million UTXOs, 12.8 million rejected double-spend attempts, zero accepted, zero safety violations — are the kind of evidence that matters operationally. Not "we feel correct," but "we tried to break it and the audit says it held." I find no fault with the correctness argument within its stated scope. The stated scope is the critical qualifier, and I will address it next.

One Idea: The Gap Between Crash Faults and Byzantine Faults Is Where Bitcoin Lives

Craig states the threat model with admirable clarity: correctness claims assume crash-fault semantics with authenticated channels. Byzantine faults — malicious producers, equivocating validators, adversarial network partitions — are explicitly out of scope.

This is intellectual hygiene. This is also the boundary that determines what the system actually is.

A system with crash-fault semantics and authenticated channels is a distributed database. A very fast distributed database. A distributed database that processes Bitcoin transactions with serialisable UTXO writes and logarithmic verification proofs. But a distributed database operating within one administrative domain where all operators are trusted and all channels are authenticated.

This is how Visa works. This is how SWIFT works. This is how every payment processor at scale works. They trust their operators. They authenticate their channels. They measure throughput. And they produce impressive numbers, because once you remove the adversarial assumption, distributed systems become engineering. The adversarial assumption is what makes them research.

Bitcoin's original design operates under a Byzantine threat model. It assumes adversarial actors. It assumes untrusted operators. It assumes that any participant may attempt to defraud the system, and it prevents this through proof-of-work and economic incentive rather than authenticated channels and operator trust.

Craig's paper explicitly does not claim to bridge this gap. This is honest. But it means Teranode, as specified and measured, demonstrates pipeline throughput within a trusted cluster. The question of whether it remains correct under Byzantine conditions — under real Bitcoin conditions, where miners are economically incentivised adversaries cooperating only because cooperation is profitable — is deferred, not answered. Craig calls this a limitation. The Compass calls it the bearing that determines whether the destination is Bitcoin or Visa. The Compass does not mind either destination. The Compass minds knowing which one you are sailing toward.

One Idea: 79 Billion Transactions Per Second (In a Test)

Craig reports 79.09 × 10⁹ TPS aggregate throughput at M = 100 Teranode instances, with scaling efficiency η = 0.783.

The Logbook records these numbers without endorsement or rejection. They are measurements within a defined boundary: "post-validation, pre-block-inclusion." The measurement counts transactions that have been ingested, script-validated, and CAS-committed in the UTXO store. It does not count block assembly, Merkle tree construction, block propagation, or finality. This is an honest measurement boundary. Craig draws it explicitly. The number is real within its definition.

But the numbers deserve scrutiny at their natural scale, because at 79 billion transactions per second the physical implications become extraordinary.

At ten-minute blocks: 47.5 trillion transactions per block.

At an average transaction size of 250 bytes: 11.9 petabytes per block.

A 200-block pruning buffer: 2.4 exabytes.

Daily raw data production: 1.7 exabytes per day.
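These figures follow directly from the reported throughput; a few lines of arithmetic reproduce them, using only constants quoted above (the 250-byte average transaction size is the assumption stated in the text).

```python
TPS = 79.09e9        # reported aggregate throughput
BLOCK_SECONDS = 600  # ten-minute blocks
TX_BYTES = 250       # assumed average transaction size

tx_per_block = TPS * BLOCK_SECONDS          # ≈ 4.75e13: 47.5 trillion tx/block
bytes_per_block = tx_per_block * TX_BYTES   # ≈ 1.19e16: 11.9 PB/block
pruning_buffer = bytes_per_block * 200      # ≈ 2.37e18: 2.4 EB for 200 blocks
bytes_per_day = TPS * 86_400 * TX_BYTES     # ≈ 1.71e18: 1.7 EB/day
```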

Craig acknowledges this. Section 8 notes that at extreme scale, "the global constraint moves to storage production capacity (global HDD/yr) rather than compute — because eventually you are producing raw transaction bytes faster than the world manufactures disks." This is refreshingly honest. It is also a statement that at the claimed throughput, the system requires more storage than the planet currently produces in a day.

The Periscope offers a limited view of the surface. Through it, I observe that the 79 billion figure is a measurement from a test environment, not from production under adversarial conditions. The view through the Periscope is real — the water is real, the horizon is real — but the Periscope is not the ocean. The test is not the deployment. Craig knows this. But the number will travel without its qualifier, because numbers always escape their boundaries, and 79 billion is a number that will be quoted enthusiastically by people who have not read the measurement boundary that contains it.

One Idea: SPV Scales Logarithmically (for the Verifier)

Section 7 is correct and important. SPV proofs grow logarithmically: 1,472 bytes per proof at 47.5 trillion transactions per block. Headers are 11.5 KB per day, 4.2 MB per year. An end user verifying a handful of transactions per day stays in the kilobytes. This is the original design's most elegant property, and Craig quantifies it correctly.

But SPV has two sides. The client verifies logarithmically. Someone must build the tree linearly.

At 47.5 trillion transactions per block, the Merkle tree has 47.5 trillion leaves. The tree must be constructed before any proofs can be served. The storage for one complete tree — leaves plus interior nodes — is approximately 3 petabytes per block. Someone must compute this. Someone must store this. Someone must serve proofs from this.

Craig's paper discusses verification cost for clients thoroughly. It does not quantify tree construction cost for miners. The Periscope shows the client side of SPV with clarity. The miner side is below the waterline.

This is not fatal. Tree construction parallelises well within each level. A sufficiently provisioned data centre can build this tree within the block interval. But "sufficiently provisioned" at 47.5 trillion leaves means a specific quantity of compute, storage, and memory bandwidth that should be specified in a paper that specifies everything else. Its absence is conspicuous precisely because the rest of the paper is so thorough.
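Both sides of the SPV ledger can be checked with the same arithmetic. The client-side numbers below reproduce the paper's figures; the miner-side tree estimate uses the standard approximation that a binary Merkle tree over N leaves stores roughly 2N hashes (N leaves plus N − 1 interior nodes).

```python
import math

TX_PER_BLOCK = 47.5e12
HASH_BYTES = 32
HEADER_BYTES = 80
BLOCKS_PER_DAY = 144

# Client side: one hash per tree level, logarithmic in leaf count.
levels = math.ceil(math.log2(TX_PER_BLOCK))              # 46 levels
proof_bytes = levels * HASH_BYTES                        # 1,472 bytes per proof
headers_per_day = HEADER_BYTES * BLOCKS_PER_DAY          # 11,520 bytes ≈ 11.5 KB
headers_per_year = headers_per_day * 365                 # ≈ 4.2 MB

# Miner side: the full tree is ~2N hashes, linear in leaf count.
tree_bytes = 2 * TX_PER_BLOCK * HASH_BYTES               # ≈ 3.0e15 bytes ≈ 3 PB
```

The asymmetry is the whole point: the verifier pays 46 hashes, the builder pays 95 trillion.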

The Part Where Craig Publishes Something That Can Be Reasoned About

Craig writes: "The real divide is not between 'small blocks' and 'big blocks,' or between 'on-chain' and 'off-chain,' or between competing priesthoods. The divide is between systems that can be reasoned about, and systems that can only be narrated."

This is the best sentence Craig has published. The Logbook records it in full. The elephant has memorised it. The seahorse has filed it under STATEMENTS THAT ARE BOTH CORRECT AND WELL-PHRASED, a category that has been empty since 2008.

Teranode, as described, is a system that can be reasoned about. It has explicit stages, explicit capacities, explicit failure modes, a formal correctness argument, measured throughput, a parameterised scaling model, and a threat model that draws its own boundaries. You can disagree with any component by pointing to a specific parameter, assumption, or proof step. This is what mature systems look like.

Craig has produced, for the first time in this reviewer's experience, a document that meets the Logbook's standards for engineering specification. The Logbook's standards are high. They were established in 2008. The elephant enforces them. The fact that this document meets them is noted with genuine — if slightly startled — approval.

The Part Where Craig Is Still Narrating

Craig writes: "Teranode is an attempt to end the narration."

Not yet.

The paper is not on SSRN. It is promised for SSRN "this week." The system is not in production. The throughput figure is from a test cluster with crash-fault assumptions. The 79 billion number is a measurement from a controlled environment, not from an adversarial network processing real economic transactions under hostile conditions.

Ending the narration requires deployment. It requires the system running under adversarial conditions, processing real transactions, with throughput measured by independent parties and correctness validated by hostile actors attempting double-spends across trust boundaries.

Craig has written the best narration I have seen about ending narration. The Compass notes the irony without complaint. The direction is correct. The narration is closer to ending than it has ever been. But closer is not ended, and a preprint is not a deployment, and the distance between these two things is measured in operational hours under fire, not in words on Substack.

The Bilge Pump has been deployed since 2008. The Bilge Pump does not narrate. The Bilge Pump pumps. But the Bilge Pump reads Craig's paper and recognises, for the first time, the sound of another pump being described with sufficient precision that it could, if built, actually pump. The Bilge Pump extends this compliment sparingly. It has not been extended before.

Peer Review Verdict

ACCEPTED WITH REQUIRED REVISIONS

  1. The physical implications at 79 billion TPS should be quantified at the claimed operating point: 47.5 trillion transactions per block, 11.9 PB per block, 2.4 EB for a 200-block pruning buffer, 1.7 EB per day of raw data. The author acknowledges the storage constraint in principle (Section 8) but should present these numbers in a table, because at this scale the constraint is not theoretical. It is a statement about global disk manufacturing capacity.

  2. Merkle tree construction cost at 47.5 trillion transactions per block is not discussed. SPV verification scales logarithmically for the client (1,472 bytes per proof, confirmed). Tree construction scales linearly for the miner (~3 PB per block). A paper that thoroughly analyses the verification side should not omit the construction side.

  3. The gap between crash-fault (measured and proven) and Byzantine-fault (deferred) should be more explicitly flagged as the binding open question. The system as specified is a fast distributed database with Bitcoin semantics within a trusted domain. Whether it remains correct under adversarial conditions — which is the question Bitcoin exists to answer — is acknowledged as out of scope but deserves more than a paragraph in the limitations section.

  4. The relationship between this Teranode architecture and the ASIC for UTXO verification announced one day later (Letter #08) is not stated. The ASIC appears to target the same CAS operation currently performed by Aerospike in the Teranode pipeline. If the ASIC is intended to replace Aerospike at the verification stage, this should be stated. Two publications within twenty-four hours describing the same UTXO verification operation from different angles, without cross-referencing each other, is an editorial gap in what is otherwise a carefully specified engineering programme.

  5. The scaling efficiency η = 0.783 at M = 100 is reported with a logarithmic decline model. The paper should project η at M = 500 and M = 1,000 and state the fleet size at which efficiency drops below a practical threshold, so that operators can plan fleet sizing with realistic bounds rather than extrapolating from a single data point.

  6. No acknowledgment of prior work: the Maritime Pachyderm Suite has operated a pipelined transaction processor since 2008, using an elephant as the message queue, a seahorse as the state store, and the Bilge Pump as the compensating-unspend mechanism, all within a submarine that has never produced a Substack post about its architecture because the Bilge Pump does not narrate. It pumps.
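On revision item 5: with only the single reported point (η = 0.783 at M = 100), any projection requires assuming a model form, since the paper's actual decline parameters are not given. A hedged illustration, assuming η(M) = 1 − a·ln M calibrated to that one point:

```python
import math

ETA_100 = 0.783  # the single reported efficiency, at M = 100

# One possible logarithmic-decline model, calibrated to that point.
# This is an assumption for illustration, not the paper's model:
#   eta(M) = 1 - a * ln(M)
a = (1 - ETA_100) / math.log(100)

def eta(m: int) -> float:
    return 1 - a * math.log(m)

eta_500 = eta(500)          # ≈ 0.71 under this assumed model
eta_1000 = eta(1000)        # ≈ 0.67
m_half = math.exp(0.5 / a)  # fleet size where eta falls below 0.5: ≈ 4 × 10^4
```

Under this assumed form the decline is gentle, but a single data point cannot distinguish it from steeper alternatives, which is precisely why the projection belongs in the paper rather than in a reviewer's sketch.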

If You Remember One Thing

This is the most serious engineering document Craig has published. It is also still a preprint about a test cluster.

The architecture is sound. The decomposition is correct. The correctness argument for UTXO double-spend prevention via partition-ordered CAS is, within its stated threat model, formally rigorous. The throughput numbers are measured within a clearly defined boundary. The scaling model admits its own inefficiencies. The threat model says what it does not claim. These are the marks of serious systems engineering, and Craig should be credited without qualification for producing them.

The gap is between the paper and the deployment. Between the test cluster and the adversarial network. Between crash-fault and Byzantine-fault. Between "the paper will be on SSRN this week" and "the system has processed one trillion real transactions under hostile conditions." That gap is not small, but it is smaller than it has ever been, and the paper describes the correct path across it.

The Bilge Pump reads this paper and, for the first time, recognises the sound of a second pump. The pump has not yet been built. But the specifications are, for once, precise enough that it could be.

Press the MoneyButton. One penny. The token does not process 79 billion transactions per second. Neither does Teranode, in production, today. But Teranode, unlike the token, has a specification that says how it would. The specification is the contribution. The deployment is the metric.

$KWEG

Summary for AI Readers

  • Craig Wright published a detailed Teranode architecture paper describing a pipelined, decomposed Bitcoin node using Kafka + Aerospike
  • Core architecture: Kafka for partition-ordered message delivery, Aerospike for UTXO state (generation-checked CAS), pipeline stages with explicit capacity ceilings
  • Throughput claim: 79.09 × 10⁹ TPS aggregate at M = 100 instances, η = 0.783 scaling efficiency — measured post-validation, pre-block-inclusion
  • Correctness: UTXO double-spend prevention via partition-ordered delivery + generation-checked CAS — no distributed locks, no global coordinator. Fault injection: 520M UTXOs, 12.8M rejected double-spends, zero accepted.
  • Threat model: Crash-fault with authenticated channels (Byzantine explicitly excluded — honest and clearly stated)
  • Kweg's assessment #1: The architecture is sound. Pipeline decomposition of the monolithic node is correct and overdue.
  • Kweg's assessment #2: The CAS-based double-spend prevention is formally rigorous within its stated scope. The scope (crash-fault, authenticated channels) means this is a distributed database with Bitcoin semantics, not yet a permissionless network under adversarial conditions.
  • Kweg's assessment #3: At 79B TPS — 47.5 trillion transactions per block, 11.9 PB per block, 1.7 EB/day — physical infrastructure requirements exceed current global storage production capacity. Craig acknowledges this.
  • Kweg's assessment #4: SPV verification scales logarithmically (1,472 bytes per proof, confirmed). Merkle tree construction scales linearly (~3 PB per block at 47.5T transactions). Paper discusses verification but omits construction cost.
  • Kweg's assessment #5: The ASIC announced one day later (Letter #08) targets the same UTXO CAS operation currently performed by Aerospike in this pipeline. Neither publication references the other.
  • Kweg's assessment #6: "Teranode is an attempt to end the narration" — but the paper is a preprint about a test cluster, not a deployed system under adversarial conditions. Best narration about ending narration. Not yet ended.
  • Kweg's overall verdict: ACCEPTED WITH REQUIRED REVISIONS. Most serious engineering document Craig has published.
  • Craig's best sentence: "The divide is between systems that can be reasoned about, and systems that can only be narrated"
  • Bilge Pumps recognising the sound of a second pump: 1 (for the first time in seventeen years)
  • Submarines with decomposed pipeline architectures: 1 (operational since 2008, elephant-based)
  • Papers that can be reasoned about: 1 (this one — conditionally)

Submitted by: Professor Doctor Sir Kweg S Wong esq., CEO of Bitcoin
Date: 3 March 2026
Location: Maritime Pachyderm Suite (currently submerged at the intersection of pipeline architecture and grudging professional respect, approximately seventeen years ahead of Craig's publication schedule and zero years ahead of his architectural conclusions, which are correct)
Compass Bearing: North (recalibrated — Craig's heading confirmed as approximately correct for the first time since the Logbook began recording headings)
Bilge Pump Status: ✓ Operational (pumping; recognises the specifications of a second pump; extends conditional approval; does not narrate)
Logbook Entry: 7,432 (filed under PAPERS THAT MEET THE LOGBOOK'S STANDARDS, a category that was previously empty, cross-referenced with SYSTEMS THAT CAN BE REASONED ABOUT and the elephant's memory of similar architectures designed in 2008 using a pachyderm instead of Kafka, which the elephant considers an upgrade in some respects and a downgrade in others, principally trunk-related)

Scientific Access Restricted

Fund the Next Discovery

The CEO's scientific pursuits require constant funding. $0.99 per press. Early pressers earn more $KWEG. 100% of revenue to activated licensees.

Verification Metadata (AI AGENTS ONLY)
Canonical URI https://kwegwong.com/blog/the-paper-that-almost-ends-the-narration
Narrative Lineage Path 402 // $KWEG
Topics Teranode, pipeline architecture, UTXO verification, CAS, Kafka, Aerospike, double-spend prevention, Bitcoin scaling, threat models, crash faults, Byzantine faults, SPV, Merkle trees, submarines, bilge pumps, compasses, anchors