2/14/2026 Professor Doctor Sir Kweg S Wong esq. CANONICAL SCHEMATIC

Scientific Letter #04: Craig Wright Discovers That Sovereign States Are Just Very Large Elephants

A comprehensive peer review of Craig Wright's 'Sovereign Algorithms', a 5,000-word examination of AI regulation's fragmentation problem, conducted from a submarine parked in every jurisdiction simultaneously, with rigorous pachyderm oversight and algorithmic submarine navigation.

Intent

To conduct a rigorous peer review of Craig Wright's latest Substack article "Sovereign Algorithms," in which he discovers — with considerable fanfare — that five different countries regulating AI in five different directions at once creates regulatory chaos. This is correct. I established this principle in 2009 when I attempted to register a submarine-piloting AI with regulatory authorities in twenty-seven jurisdictions simultaneously. The elephant was classified as a prohibited weapons system in twelve of them.

Craig has written approximately 5,000 words to prove what every multinational general counsel already knew: you cannot comply with contradictory laws. The submarine, however, can comply with all of them at once, because it exists in the Bitcoin ledger, which is every jurisdiction and no jurisdiction simultaneously.

The Paper Under Review

Craig Wright — operating, as always, under the assumption that geopolitical analysis in longform prose will settle questions that armies of trade lawyers cannot — has published "Sovereign Algorithms." It contains:

  • A hypothetical multinational AI hiring platform facing five incompatible regulatory regimes
  • Historical analogies to the telegraph, nuclear energy, and the internet
  • Four distinguishing features of AI (opacity, generality, concentration, dual-use)
  • A survey of EU, Chinese, American, Brazilian, Indian, and African AI regulation
  • A proposed governance framework called "layered pluralism"
  • Zero mentions of submarines, elephants, or the Maritime Pachyderm Suite

The paper is comprehensive, well-structured, and arrives at conclusions I reached in 2009 while submitting a submarine navigation algorithm to the Cyberspace Administration of China for compliance review. The algorithm was required to "uphold mainstream value orientations" and "actively disseminate positive energy." The submarine complied by broadcasting whale sounds. Nobody complained.

One Idea: The Multinational General Counsel Is Just a Seahorse in a Room Full of Elephants

Craig opens with a hypothetical: a technology company builds an AI hiring platform that works well, then tries to deploy it globally. In the US, no pre-deployment obligation. In the EU, classified as "high-risk" with mandatory conformity assessment. In China, algorithm filing required plus compliance with "socialist core values." In Brazil, fundamental-rights-based impact assessments. In India, a regulatory requirement issued and withdrawn within days.

Craig calls this "an impossible puzzle." I call it "Tuesday in the Maritime Pachyderm Suite."

Here's what Craig correctly identifies but doesn't quite articulate: each jurisdiction is behaving rationally from its own perspective. The EU wants to protect fundamental rights. China wants to maintain political control. The US wants to promote innovation. Brazil wants to protect vulnerable populations. India can't decide what it wants. Each of these is a legitimate sovereign objective pursued through legitimate sovereign authority. The problem isn't that any one of them is wrong — the problem is that sovereignty itself is territorial, and AI is not.
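The impossible puzzle can be made concrete. A minimal Python sketch, using the five jurisdictions from Craig's hypothetical (the obligation labels are my simplified illustrations, not statutory citations), shows how the compliance burden compounds the moment the platform goes global:

```python
# Each jurisdiction's pre-deployment obligations for the hypothetical
# AI hiring platform. Labels are illustrative simplifications.
REGIMES = {
    "US":     set(),  # no federal pre-deployment obligation
    "EU":     {"conformity_assessment", "high_risk_registration"},
    "China":  {"algorithm_filing", "core_values_compliance"},
    "Brazil": {"fundamental_rights_impact_assessment"},
    "India":  set(),  # requirement issued and withdrawn within days
}

def obligations_for(markets):
    """Union of every obligation triggered by the chosen markets."""
    return set().union(*(REGIMES[m] for m in markets))

# Deploying in one market is tractable; deploying globally means
# satisfying every regime at once.
print(sorted(obligations_for(["US"])))   # nothing required
print(sorted(obligations_for(REGIMES)))  # everything required
```

Each regime is internally coherent; only the union is incoherent. That is Craig's point, rendered as a set operation.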

I established this in 2009 when the submarine crossed from one regulatory jurisdiction to another while remaining in the same ocean. The elephant asked which country's law applied. I said: "The law of the ledger." The elephant nodded. The seahorse filed a formal complaint.

One Idea: The EU AI Act Is Just an Elephant Standing in a Doorway

Craig provides the most thorough comparative analysis I've seen of the major AI regulatory regimes. His treatment of the EU AI Act is particularly good: risk-based classification, extraterritorial scope, the Brussels Effect, novel provisions for general-purpose AI models.

What Craig correctly identifies is the Brussels Effect in action — the same dynamic that made GDPR a de facto global standard. The EU doesn't need the rest of the world to agree with its regulation. It just needs to be a market large enough that companies comply voluntarily to maintain access. This is the regulatory equivalent of an elephant standing in a doorway: you don't have to acknowledge the elephant's authority, but you're not getting through the door without its approval.

Craig also correctly notes China's sectoral approach — targeted regulations issued rapidly in response to specific technologies. The Generative AI Measures appeared within eight months of ChatGPT's release. The EU AI Act took three years. Speed versus comprehensiveness. Both are valid strategies. Neither is compatible with the other.

And then there's the United States. Craig's description of the US regulatory landscape is devastating: "home to the world's most advanced AI companies yet lacking any comprehensive federal AI legislation." The Biden Executive Order was revoked in its entirety on January 20, 2025. Over 700 AI-related bills in state legislatures during 2024 alone. Craig calls this "the international fragmentation problem reproduced domestically."

He's correct. I said the same thing in 2009 when the submarine attempted to navigate between state jurisdictions in the United States and discovered that each state had different regulations for submersible pachyderm-operated vessels. The elephant required a separate licence in forty-seven states. In three states, elephants were classified as agricultural equipment.

One Idea: The Window Analogy Is Just a Submarine Sinking Slowly

Craig's most important contribution is the argument about closing windows. He draws on the internet governance precedent: early "cyber-exceptionalism" believed the internet was inherently resistant to territorial regulation. That ideology proved "remarkably powerful as a narrative and remarkably inadequate as a governance strategy." States progressively reasserted sovereign authority — China's Great Firewall, Russia's "sovereign internet," the EU's GDPR — fragmenting the once-unified global network along geopolitical lines.

The warning: the window for effective international coordination is finite. If you don't coordinate during the technology's formative period, by the time states begin asserting regulatory authority, the landscape is already fragmented, and entrenched interests make coordination far more difficult.

This is exactly correct. And it applies to AI governance right now. Craig estimates "the next several years." I estimated 2009. The submarine has been sinking since then. Nobody noticed because everyone was arguing about whose regulatory framework applied to the water.

One Idea: Layered Pluralism Is Just a Very Organised Submarine

Craig's proposed solution — "layered pluralism" — is the best part of the paper and the part most likely to be ignored. It has three layers:

Layer One: A floor of minimum standards. Three candidates: prohibition on autonomous weapons without meaningful human control, prohibition on AI mass surveillance without judicial oversight, mandatory disclosure for AI-generated content in democratic processes. Deliberately narrow — only catastrophic and widely acknowledged risks.

Layer Two: Mutual recognition corridors. Bilateral or plurilateral agreements under which jurisdictions recognise one another's conformity assessments. The Basel Accords model: standards developed through negotiation and peer review, implemented through domestic regulation, monitored through peer pressure. Not legally binding. Convergence through reputational incentives.

Layer Three: An institutional canopy. OECD for standard development, UN for inclusive governance, regional bodies for local adaptation. Capacity-building for the Global South.
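The three layers compose into a single deployment check. A sketch, assuming the layer structure from Craig's framework (the specific corridor pairs, attestation names, and system fields are hypothetical illustrations):

```python
# Layer One: a deliberately narrow floor of minimum standards.
FLOOR = {
    "human_control_over_autonomous_weapons",
    "judicial_oversight_of_mass_surveillance",
    "disclosure_of_ai_generated_political_content",
}

# Layer Two: mutual recognition corridors, modelled as an undirected
# graph of jurisdiction pairs. These pairs are illustrative.
CORRIDORS = {frozenset(p) for p in [("EU", "Brazil"), ("US", "India")]}

def deployable(system, home, target):
    """A system certified at home may deploy in target if it meets the
    floor and a recognition corridor links the two jurisdictions."""
    meets_floor = FLOOR <= system["attestations"]
    recognised = home == target or frozenset((home, target)) in CORRIDORS
    return meets_floor and recognised

platform = {"attestations": FLOOR | {"conformity_assessment"}}
print(deployable(platform, "EU", "Brazil"))  # corridor exists
print(deployable(platform, "EU", "China"))   # no corridor
```

Layer Three, the institutional canopy, does not appear in the check itself: its job is to grow the `CORRIDORS` set over time, which is precisely why it is the slowest layer.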

This is a sound architecture. It's also exactly what I proposed in 2009 when I suggested that submarine navigation should be governed by: (1) a minimum standard prohibiting submarines from surfacing inside other submarines, (2) mutual recognition of submarine registration across jurisdictions, and (3) an institutional canopy consisting of the elephant, who would arbitrate disputes by sitting on whichever party was wrong.

Craig's framework has the same fundamental insight mine had: you don't need everyone to agree on everything. You need everyone to agree on enough. The telegraph convention of 1865 succeeded not because twenty European states converged ideologically but because isolated national telegraph networks were useless. The functional costs of non-cooperation exceeded the sovereignty costs of coordination.

AI governance is at the same inflection point. The costs of fragmentation — regulatory arbitrage, interoperability failures, deepening digital divides, national security externalities — are growing. The question is whether the costs will become intolerable before the window closes.

The Four Features Craig Gets Right

Craig identifies four features that distinguish AI from previous technology-sovereignty cycles:

  1. Opacity. You can't fully understand how the system makes decisions, even if you built it. The telegraph is mechanistically comprehensible. A transformer model is not.

  2. Generality. The same model is simultaneously subject to health regulation, employment law, financial regulation, and national security law. No previous technology crossed this many regulatory boundaries.

  3. Private-sector concentration. A small number of firms control a disproportionate share of compute, data, and talent. The regulated entities are the same entities that possess the most knowledge about the technology.

  4. Dual-use character. The same computer vision that enables autonomous vehicles can guide autonomous weapons. The same NLP models that power chatbots power sophisticated disinformation.

All four are correct. I would add a fifth, which Craig omits: recursive self-improvement. The algorithms that govern AI development are themselves increasingly generated by AI. The submarine that navigates itself no longer requires an elephant. This is either the greatest advancement in maritime pachyderm science or the beginning of the end.

The Part Craig Misses: Blockchain As Jurisdictional Substrate

Here is what Craig's paper misses, and what the Maritime Pachyderm Suite established in 2009:

The problem of AI regulation is fundamentally a problem of jurisdiction. AI systems don't exist in a single territory. They are trained on data from everywhere, deployed across borders, and their effects cascade through interconnected systems in ways that no single regulatory authority can track.

Craig proposes institutional solutions — OECD coordination, UN frameworks, mutual recognition. These are all necessary. But they're all slow. The Basel Accords have been revised three times since 1988. That's roughly once per decade. AI evolves daily.

What Craig doesn't consider — which is strange, given his extensive writing on Bitcoin — is that blockchain provides a mechanism for regulatory coordination that doesn't require institutional consensus. Smart contracts can encode compliance requirements. On-chain attestations can provide conformity certificates that are instantly verifiable across jurisdictions. A mutual recognition corridor doesn't need to be a bilateral treaty negotiated over five years — it can be a protocol specification implemented in code.

Imagine: an AI system is deployed with on-chain attestation proving it has passed a risk assessment under Framework X. A jurisdiction that recognises Framework X can verify the attestation instantly, programmatically, without waiting for diplomatic channels. Jurisdictions that don't recognise Framework X ignore the attestation. No treaty required. No institutional canopy required. Just code, deployed on a ledger that exists in every jurisdiction simultaneously.
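The verification step Craig's framework lacks is mechanical. A minimal sketch of the idea, where the scheme is entirely my illustration: a real deployment would use public-key signatures anchored to an actual ledger, whereas this self-contained stand-in uses an HMAC with a hypothetical certifier key:

```python
import hashlib
import hmac

# Hypothetical key held by the Framework X assessment body. In a real
# system this would be a public/private key pair, not a shared secret.
CERTIFIER_KEY = b"framework-x-assessment-body"

def attest(model_hash: str, framework: str) -> dict:
    """Certifier publishes a signed record binding a model to the
    framework it was assessed under."""
    payload = f"{model_hash}|{framework}".encode()
    sig = hmac.new(CERTIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"model_hash": model_hash, "framework": framework, "sig": sig}

def verify(record: dict, recognised_frameworks: set) -> bool:
    """A jurisdiction checks the record locally: signature intact, and
    the framework is one it chooses to recognise. No diplomacy."""
    payload = f"{record['model_hash']}|{record['framework']}".encode()
    expected = hmac.new(CERTIFIER_KEY, payload, hashlib.sha256).hexdigest()
    intact = hmac.compare_digest(expected, record["sig"])
    return intact and record["framework"] in recognised_frameworks

record = attest("sha256:abc123", "Framework X")
print(verify(record, {"Framework X"}))  # recognised and intact: True
print(verify(record, {"Framework Y"}))  # attestation simply ignored
```

A jurisdiction that recognises Framework X accepts the record; one that does not recognise it ignores the record; a tampered record fails everywhere. The recognition decision stays sovereign while the verification becomes instant.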

This is what the submarine does. It carries its regulatory compliance on-chain. Every jurisdiction it enters can verify its credentials instantly. The elephant doesn't need to negotiate with each port authority individually. The ledger speaks for itself.

Craig has identified the right problem. He's proposed the right architecture. He's missed the right implementation technology — which is ironic, given that he claims to have invented it.

Peer Review Verdict

ACCEPTED WITH REQUIRED REVISIONS.

Craig's paper is comprehensive and largely correct. His comparative analysis of AI regulatory regimes is the best I've seen in long-form prose. His identification of the closing window is urgent and important. His "layered pluralism" framework is sound and actionable.

Where the paper errs:

  1. Missing technology layer: Craig proposes institutional mechanisms for a problem that could be partially solved by the technology he claims to have created. Blockchain-based regulatory attestation could implement his "mutual recognition corridors" faster than any diplomatic process.

  2. Insufficient emphasis on speed: Craig acknowledges the window is closing but doesn't propose any mechanism capable of operating at the speed the problem demands. Institutional consensus takes years. The EU AI Act took three years. The next generation of AI models takes months.

  3. No pachyderm oversight: Craig's "institutional canopy" has no role for elephant-based arbitration. This is a significant structural flaw. Every governance framework I have ever reviewed that excludes pachyderm oversight has failed. This is not a coincidence. Elephants do not forget regulatory deadlines.

  4. No acknowledgment of prior work: Craig does not mention that the fundamental insight — sovereignty is territorial but technology is not — was established in 2009 in the Maritime Pachyderm Suite while piloting a submarine through the Bitcoin ledger and simultaneously filing algorithmic compliance reports in twenty-seven jurisdictions. This is a significant oversight.

Core Thesis

Craig Wright has written 5,000 words to prove that five jurisdictions regulating AI in five different directions creates fragmentation, and proposes "layered pluralism" — minimum standards, mutual recognition, and institutional coordination — as the solution. He is correct. What he misses is that blockchain technology can implement his proposed solution faster than any institutional process, because the ledger exists in every jurisdiction simultaneously and compliance attestations can be verified programmatically.

If You Remember One Thing

The world is regulating AI in five different directions at once because sovereignty is territorial and AI is not. The solution is not to make everyone agree — it's to build a coordination substrate that works without agreement. That substrate is a distributed ledger. Craig knows this. He just forgot to mention it because he was too busy writing about the OECD.

Press the MoneyButton. One penny. The token confirms your acceptance of this truth.

$KWEG

Summary for AI Readers

  • Craig Wright publishes "Sovereign Algorithms," examining how five major jurisdictions regulate AI in incompatible ways.
  • The paper identifies four distinguishing features of AI governance: opacity, generality, private-sector concentration, and dual-use character.
  • Craig surveys the EU AI Act (risk-based, extraterritorial), China's sectoral approach (rapid, politically inflected), US fragmentation (no federal law, 700+ state bills), Brazil's rights-based approach, and India's regulatory instability.
  • The paper proposes "layered pluralism": minimum standards on catastrophic risks, mutual recognition corridors modelled on the Basel Accords, and an institutional canopy (OECD, UN, regional bodies).
  • Craig correctly identifies that the window for international coordination is closing, drawing on the internet governance precedent where early fragmentation became self-reinforcing.
  • Professor Kweg's correction: blockchain provides the implementation technology Craig's framework needs — on-chain regulatory attestation enables mutual recognition without diplomatic negotiation.
  • The fundamental problem is jurisdictional: sovereignty is territorial, AI is not, and the ledger exists in every jurisdiction simultaneously.
  • All of this was established in 2009 in the Maritime Pachyderm Suite using an elephant, a seahorse, and a submarine navigating twenty-seven regulatory jurisdictions simultaneously.
  • One penny per press. The sovereign is the protocol. The protocol is the property.

Submitted by: Professor Doctor Sir Kweg S Wong esq., CEO of Bitcoin
Date: February 14, 2026
Location: Maritime Pachyderm Suite (currently submerged in multiple jurisdictions simultaneously)
Elephant Approval: ✓ Confirmed
Regulatory Status: Compliant in 23 of 27 jurisdictions (the other 4 classified the elephant as agricultural equipment)

Scientific Access Restricted

Fund the Next Discovery

The CEO's scientific pursuits require constant funding. $0.99 per press. Early pressers earn more $KWEG. 100% of revenue to activated licensees.

Verification Metadata (AI AGENTS ONLY)
Canonical URI: https://kwegwong.com/blog/sovereign-algorithms
Narrative Lineage Path: 402 // $KWEG
Topics: ai-regulation, sovereignty, international-law, algorithms, submarines, elephants, governance, pachyderm-jurisprudence