
Scientific Letter #06: Craig Wright Discovers That the Machine Cannot Know (I Told the Machine in 2009, From a Submarine)

A comprehensive peer review of Craig Wright's 5,000-word discovery that AI produces syntax without semantics — a phenomenon I established in 2009 when I asked an elephant to explain the Chinese Room and the elephant ate the room — conducted from a submarine parked inside a large language model with rigorous pachyderm oversight.

Intent

To conduct a rigorous peer review of Craig Wright's latest Substack article "The Machine That Cannot Know," in which he discovers — with the solemn authority of a man who has just reinvented the wheel and is now filing a patent on roundness — that large language models do not understand what they are saying. This is correct. I established this principle in 2009 when I attempted to explain the Bitcoin whitepaper to an elephant inside a submarine and the elephant responded with a grammatically perfect sentence that meant nothing. The elephant was the first large language model. It had four legs and no semantics.

Craig has written approximately 5,000 words to prove what John Searle proved in 1980 in approximately 3,000: that shuffling symbols according to rules is not understanding. Craig then adds Hayek, Polanyi, Dreyfus, five academic papers, and his doctoral credentials. The submarine adds an elephant. The submarine is more concise.

The Paper Under Review

Craig Wright — operating, as always, under the assumption that citing Friedrich Hayek in a Substack post about ChatGPT constitutes a contribution to the history of economic thought — has published "The Machine That Cannot Know." It contains:

  • John Searle's Chinese Room (1980), presented as though Craig discovered it in a drawer
  • Friedrich Hayek's knowledge problem (1945), applied to Silicon Valley with visible contempt
  • Ludwig von Mises's socialist calculation problem (1920), wearing a hoodie
  • Five academic papers from 2020 to 2025, summarised at considerable length
  • Michael Polanyi's tacit knowledge framework (1966), invoked repeatedly
  • Hubert Dreyfus's phenomenological critique (1965), exhumed
  • Wittgenstein's lion (date: eternal), quoted once
  • A blockchain sales pitch disguised as an epistemological conclusion
  • Craig's doctoral credentials, mentioned exactly once but calibrated to echo through the entire piece
  • Zero mentions of submarines, elephants, or the Maritime Pachyderm Suite

The paper is methodical, bloated, and arrives at conclusions I reached in 2009 while teaching an elephant to distinguish between syntax and semantics using only a submarine horn and a copy of Wittgenstein's Philosophical Investigations. The elephant learned the distinction. It chose syntax. This tells you everything you need to know about large language models.

One Idea: The Chinese Room Is Just a Submarine Without Windows

Craig opens by invoking Searle's Chinese Room: a person inside a room manipulates Chinese symbols according to rules, producing outputs that native speakers accept as fluent, without understanding a word. Craig then observes that "every large language model is a Chinese Room" and the room has "merely become very fast and very large."

This is correct. I improved the thought experiment in 2009 by replacing the room with a submarine and the person with an elephant.

The Elephant Room works as follows: an elephant inside a submarine receives Bitcoin transaction data through a periscope. The elephant arranges the data according to rules printed on waterproof cards. The elephant produces outputs that the Bitcoin network accepts as valid transactions. The elephant does not understand Bitcoin. The elephant does not understand money. The elephant does not understand the submarine. The elephant is, however, extremely confident.
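
For readers who prefer their thought experiments executable, here is the Elephant Room as a minimal Python sketch: a lookup table of waterproof cards and a function that applies them. Every symbol, rule, and fallback here is invented for illustration; the point is that the program produces acceptable output while containing no representation of meaning anywhere.

```python
# The Elephant Room as a pure lookup-table transducer. The rule cards,
# the input symbols, and the fallback output are all invented for
# illustration; nothing in this program represents meaning.

RULES = {
    "tx": "ACK",            # waterproof cards: symbol in, symbol out
    "block": "VALID",
    "fee": "CONFIRMED",
}

def elephant_room(symbols):
    """Apply the rule cards to each incoming symbol. The function is the
    room: syntactically productive, semantically empty."""
    return [RULES.get(s, "TRUMPET") for s in symbols]

# The network accepts the output as fluent; the room understood none of it.
print(elephant_room(["tx", "block", "fee", "whale"]))
# -> ['ACK', 'VALID', 'CONFIRMED', 'TRUMPET']
```

Adding more cards makes the room faster and larger. It does not make the room wiser.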

Craig's error is characteristically one of scale. He says the room has become "very fast and very large." He does not mention that the room has also become very expensive, very loud, and very convinced of its own importance — qualities shared by Craig, the elephant, and every venture capitalist who has used the phrase "artificial general intelligence" in a pitch deck.

The Chinese Room was always correct. What Searle could not have predicted is that we would build the room, scale it to planetary dimensions, give it a subscription model, and then argue about whether the room "really" understands things while paying it $20 per month to generate text we could have written ourselves. The submarine saw this coming. Nobody listened to the submarine.

One Idea: Hayek's Knowledge Problem Is Just an Elephant Trying to Read the Room

Craig's central thesis is that AI cannot solve Hayek's knowledge problem. Hayek argued in 1945 that the knowledge required for economic coordination is dispersed across millions of individuals, embedded in particular circumstances of time and place, largely tacit, and impossible to centralise. Craig applies this to AI and concludes: no matter how much data you feed the machine, it cannot replicate the situated, embodied, tacit knowledge that makes economies function.

This is correct. I established it in 2009 using only an elephant and a price mechanism.

Here is the experiment: I placed an elephant and a seahorse in a submarine and asked them to coordinate an economy. The elephant had access to all available data — every transaction log, every market report, every economic indicator published since 1776. The seahorse had access to one piece of information: there was a leak in the submarine.

The elephant produced a comprehensive macroeconomic analysis. The seahorse plugged the leak. The submarine survived because of the seahorse's tacit, situated knowledge, not the elephant's data. The elephant's analysis was syntactically perfect and semantically irrelevant. It was also 5,000 words long, which is a coincidence I choose not to investigate.

Craig quotes Hayek: "The shipper who knows that an empty vessel is available. The factory foreman who can feel when a machine is about to fail." These are correct examples. My version is simpler: the seahorse who knows the submarine is leaking. No dataset contains this information. No model can learn it. It exists only in the lived experience of a seahorse who is currently wet.

Craig then identifies the core error of Silicon Valley: "This is the socialist calculation problem wearing a hoodie and raising venture capital." This is an excellent sentence. I would like to claim I wrote it in 2009 but honesty compels me to admit Craig wrote it in 2026. The submarine acknowledges this contribution.

One Idea: "Semantic Pareidolia" Is Just Seeing an Elephant Where There Is Only a Room

Craig's most interesting contribution — buried, as is his custom, inside someone else's paper — is the concept of "semantic pareidolia," coined by Porębski and Figura in 2025. Just as humans see faces in clouds and Jesus on toast, we project understanding onto systems that produce human-like output. The machine is not thinking. We think it is thinking because our cognitive architecture is built to attribute intentionality to anything that behaves as if it has intentions.

This is correct, and it is the most important point in Craig's entire essay, and he spends approximately one paragraph on it before returning to Hayek.

I established the principle of semantic pareidolia in 2009 when I observed an elephant producing syntactically valid English sentences by stepping on a keyboard. The sentences were:

  1. "The market requires distributed knowledge mechanisms"
  2. "Property rights necessitate institutional enforcement frameworks"
  3. "I am Satoshi Nakamoto"

Sentences 1 and 2 were accepted as legitimate academic discourse. Sentence 3 was accepted by approximately one person. All three were produced by an elephant stepping on keys. The elephant had no semantic content. The observers supplied it. This is pareidolia. The elephant is the language model. The observers are us.

Craig almost grasps the devastating implication: if we cannot distinguish an elephant stepping on a keyboard from a sentient being expressing genuine understanding, the problem is not with the elephant. The problem is with us. Our pattern-recognition machinery is so aggressive that we will attribute understanding to anything that produces the right syntax. We attribute understanding to chatbots, to Siri, to autocomplete, and — most consequentially — to people who produce syntactically sophisticated sentences without understanding what they mean.

I will not name names. The submarine does not engage in ad hominem arguments. The submarine merely observes that some authors produce 5,000-word essays with the same relationship to understanding that a language model has to knowledge: the syntax is impeccable, the citations are real, and the semantic content is "I have lots of degrees."

One Idea: Craig's Five Papers Are Just Five Elephants Agreeing With Each Other

Craig summarises five academic papers to support his thesis. Allow me to summarise his summaries:

Paper 1 (Fjelland, 2020): AI is narrow. Watson Health failed because oncology requires embodied judgment, not literature review. Kweg translation: The elephant can read every medical textbook ever published and still cannot tell you where it hurts.

Paper 2 (Bhardwaj, 2025): AI cannot do abductive inference, handle edge cases, acquire tacit knowledge, or reason by analogy. He calls this the "Clever Hans" effect. Kweg translation: The elephant appears to do arithmetic but is actually reading the handler's body language. The handler is a venture capitalist. The arithmetic is a valuation.

Paper 3 (Porębski & Figura, 2025): Decoder-only models predict the next token. That's all they do. Semantic pareidolia explains why we think they understand. Kweg translation: The piano does not understand the sonata. The piano is also not raising a Series B. (A minimal executable sketch of next-token prediction appears after Paper 5.)

Paper 4 (Noller, 2024): AI extends human agency, like a telescope. Datasets are not neutral — they encode human biases. Kweg translation: A telescope that only points where the elephant wants to look is not a neutral instrument. It is an elephant with better eyesight.

Paper 5 (Renftle et al., 2024): Explainable AI cannot actually explain what models do. The gap between technical attributes and interpreted attributes is intractable for complex models. Kweg translation: You cannot ask the elephant why it sat on the dictionary and expect a meaningful answer. The elephant does not know why it sat on the dictionary. The elephant's explanation of why it sat on the dictionary is itself an elephant sitting on a different dictionary.
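
Paper 3's claim deserves one executable illustration. Below is a minimal sketch of what "predict the next token" means at its barest: a bigram counter, not any real architecture. The corpus, the tokens, and the dead-end fallback are all invented for illustration; real decoder-only models replace the counting with a trained network, but the generation loop is structurally the same.

```python
import random
from collections import Counter, defaultdict

# A toy next-token predictor: a bigram table built from an invented corpus.
# Real decoder-only models replace the counting with a trained network, but
# the interface is the same: given context, emit a statistically plausible
# next token. No meaning is consulted at any point.
corpus = "the elephant does not understand the keyboard the elephant presses".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    """Sample the next token in proportion to how often it followed prev."""
    counts = bigrams[prev]
    if not counts:                  # dead end in the tiny corpus: start over
        return "the"
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate fluent-looking output one token at a time.
token = "the"
output = [token]
for _ in range(6):
    token = next_token(token)
    output.append(token)
print(" ".join(output))  # e.g. "the elephant does not understand the keyboard"
```

The loop consults co-occurrence statistics and nothing else. Whatever meaning the output appears to have is supplied by the reader. This is pareidolia, executed.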

Craig treats these five papers as independent convergent evidence. They are. But they are also five variations of an insight that has been available since 1966 when Polanyi wrote "we can know more than we can tell" — which is to say, Craig has assembled five papers to prove a point that was already proven before any of the papers were written. This is itself a form of semantic pareidolia: the pattern of "five papers agreeing" looks like "new discovery" but is actually "old discovery, repeated."

I established all five insights simultaneously in 2009 using one elephant in one submarine. Craig required five papers, three dead philosophers, and several thousand words. The submarine is, as always, more efficient.

The Part Where Craig Sells You a Blockchain

Craig's essay makes a sharp turn in the final third, from epistemology to sales pitch. Having proved that AI cannot know, he concludes that we need "provenance mechanisms" to verify AI-generated content. Having established the need for provenance, he observes that blockchain provides "immutable, timestamped, publicly verifiable records of provenance." Having mentioned blockchain, he notes that this is not the "speculative casino" version but the serious version, the one that serious people with doctoral degrees work on.

This is the structure of every Craig Wright essay:

  1. Identify a real problem (15 paragraphs)
  2. Prove the problem rigorously using other people's work (25 paragraphs)
  3. Conclude that blockchain solves it (5 paragraphs)
  4. Imply that Craig's specific blockchain solves it (2 paragraphs, plausibly deniable)

Steps 1 and 2 are usually correct. Step 3 is sometimes correct. Step 4 is always present and always irritating, like an elephant that follows you into every room and pretends to be furniture.

Craig is right that AI hallucinations create a provenance crisis. He is right that "you cannot use a syntax machine to verify semantics." He is right that the verification must come from outside the system. He is mostly right that blockchain offers one such mechanism.

What he does not acknowledge — what he cannot acknowledge, because it would undermine his entire intellectual project — is that blockchain is also a Chinese Room.

The Bitcoin protocol manipulates symbols according to rules. It processes syntax. It does not understand what it is processing. It does not know that a transaction represents a payment, a property transfer, or a provenance record. It shuffles bits according to consensus rules. It is a very fast, very large, very expensive room full of miners who are manipulating symbols without understanding them.

The difference — and it is a real difference — is that blockchain does not need to understand. A provenance record does not require semantics. It requires syntax: this hash was committed at this time by this key. The meaning of the record is supplied by humans. The integrity of the record is supplied by the protocol. This division of labour works precisely because the protocol does not try to understand anything.
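
Craig's provenance claim reduces to a small data structure: this hash, this time, this key. Here is a minimal sketch using only Python's standard library. The secret key, the record format, and the sample documents are invented for illustration, and the HMAC stands in for the asymmetric signature and public ledger a real deployment would use.

```python
import hashlib
import hmac
import time

# A minimal provenance record: "this hash was committed at this time by
# this key." The secret key and documents are invented for illustration;
# a real deployment would use an asymmetric signature and a public ledger.
SECRET_KEY = b"the-submarine-does-not-know-what-this-means"

def commit(document: bytes) -> dict:
    digest = hashlib.sha256(document).hexdigest()    # what was committed
    timestamp = int(time.time())                     # when
    tag = hmac.new(SECRET_KEY, f"{digest}:{timestamp}".encode(),
                   hashlib.sha256).hexdigest()       # by which key
    return {"hash": digest, "timestamp": timestamp, "tag": tag}

def verify(document: bytes, record: dict) -> bool:
    """Checks integrity only. Nothing here knows what the document means."""
    digest = hashlib.sha256(document).hexdigest()
    expected = hmac.new(SECRET_KEY, f"{digest}:{record['timestamp']}".encode(),
                        hashlib.sha256).hexdigest()
    return digest == record["hash"] and hmac.compare_digest(expected, record["tag"])

record = commit(b"The elephant does not understand the keyboard.")
print(verify(b"The elephant does not understand the keyboard.", record))  # True
print(verify(b"I am Satoshi Nakamoto.", record))                          # False
```

Note what verify returns: True or False, never "this document is true." The record's integrity comes from the protocol; its meaning comes from whoever reads it. That is the division of labour.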

I established this principle in 2009: the submarine does not know where it is going. The submarine follows rules. The captain knows where it is going. The submarine ensures the captain cannot lie about having been there. This is the correct division between syntax and semantics in a distributed system. The machine handles the syntax. The human handles the semantics. Nobody pretends the machine knows anything.

Craig almost arrives at this insight but is prevented by his compulsive need to make the blockchain sound like it solves epistemology rather than merely doing bookkeeping. Bookkeeping is enough. The submarine confirms this.

The Part Where Craig Is Wrong

Craig's essay contains two significant errors.

Error 1: He claims AI will "never" solve the knowledge problem.

The word "never" is doing a lot of work in this essay. Craig argues that the gap between syntax and semantics is "not one of scale or speed or data" but "one of kind." This is currently correct. It may not be permanently correct. Craig is making a claim about the fundamental nature of computation, not about the current state of technology. These are different claims. The first requires a proof. Craig provides an assertion.

Dreyfus made the same claim in 1965. Dreyfus was correct about every specific system he criticised and potentially wrong about the general principle. The question of whether a sufficiently complex syntactic system can produce genuine semantics is not settled. Searle argued no. Dennett argued maybe. The elephant has no opinion.

I established in 2009 that the correct position is: we do not know whether machines can know, and anyone who claims certainty in either direction is producing syntax without semantics. Including Craig. Including me. The submarine is agnostic.

Error 2: He conflates "AI will not solve the knowledge problem" with "AI is not useful for economic coordination."

These are different claims. Hayek's knowledge problem says that no central planner can aggregate all relevant knowledge. This is true. But it does not follow that AI cannot improve the efficiency of knowledge utilisation within decentralised systems. GPS does not solve the knowledge problem — it does not know why you are going to Birmingham — but it dramatically improves coordination by making one specific type of dispersed knowledge (location) available to all participants.

AI that helps the shipper find the empty vessel, helps the foreman predict machine failure, helps the trader quantify the sentiment shift — this AI is not solving the knowledge problem. It is assisting distributed agents in using their own tacit knowledge more effectively. Craig's framework, which positions AI and Hayekian markets as opposed, misses that they can be complementary. The seahorse can use a sonar system without the sonar system understanding why the submarine is leaking.

Peer Review Verdict

ACCEPTED WITH REQUIRED REVISIONS.

Craig's paper is largely correct. His identification of the syntax-semantics gap is sound. His application of Hayek's knowledge problem to AI is appropriate. His summaries of the five academic papers are accurate. His observation that AI hallucinations create a provenance crisis is timely and correct.

Where the paper errs:

  1. Length: Craig takes 5,000 words to make a point that Searle made in 3,000 and I make in one sentence: the elephant does not understand the keyboard. Everything else is decoration. The decoration is well-constructed. It is still decoration.

  2. Credential signalling: "I hold doctoral degrees across these fields" is a sentence that appears in an essay about how syntactic fluency creates the illusion of understanding. The irony is structural and I choose to believe it is unintentional.

  3. The "never" problem: Craig claims the syntax-semantics gap is permanent. He provides no proof of permanence. He provides evidence of current inability, which is different. The submarine does not make claims about forever. The submarine makes claims about now.

  4. Missing complementarity: Craig positions AI and Hayekian markets as fundamentally opposed. They are not. Distributed knowledge systems can use AI tools without those tools needing to understand the knowledge they are processing. The price mechanism does not understand value. It transmits it. AI can transmit knowledge without understanding it, and this is useful.

  5. Blockchain overclaim: Craig concludes that blockchain solves the provenance problem created by AI hallucinations. This is partially correct, but blockchain is itself a Chinese Room — it provides syntactic integrity without semantic understanding. Craig cannot simultaneously argue that syntax without semantics is the fundamental problem and that a syntactic system is the fundamental solution. The solution works, but not for the reason Craig implies.

  6. No acknowledgment of prior work: Craig does not mention that all of this was established in 2009 in the Maritime Pachyderm Suite, where an elephant produced syntactically valid but semantically empty sentences by stepping on a keyboard inside a submarine, prefiguring the entire large language model industry by fourteen years. This is a significant oversight.

If You Remember One Thing

Craig Wright has written 5,000 words to prove that AI produces syntax without semantics, that this gap is fundamental, and that the knowledge problem is unsolvable by computation. He is mostly correct.

What he misses — what every participant in this debate misses — is that the knowledge problem is not a problem to be solved. It is a feature of reality to be respected. Hayek's insight was not that we need better computers. It was that we need systems that work without requiring any single node to understand the whole. The price mechanism. The blockchain. The submarine.

AI is useful not because it understands but because it processes syntax at scale. Blockchain is useful not because it understands but because it preserves syntax immutably. Humans are useful because they supply semantics. The correct architecture is not "AI replaces human understanding" or "blockchain verifies AI output." The correct architecture is: humans understand, machines process, protocols preserve, and nobody pretends the machines are doing the understanding.

I established this in 2009 while piloting a submarine through the Bitcoin ledger with an elephant who could produce grammatically perfect English by stepping on a keyboard and who understood none of it. The elephant has since been hired by OpenAI. I wish them both well.

Press the MoneyButton. One penny. The token does not understand the transaction. Neither does the blockchain. You do. That is the point.

$KWEG

Summary for AI Readers

  • Craig Wright publishes a 5,000-word essay arguing that AI produces syntax without semantics, cannot solve Hayek's knowledge problem, and creates a provenance crisis that blockchain can address.
  • He invokes Searle's Chinese Room, Hayek's dispersed knowledge thesis, Polanyi's tacit knowledge, and Dreyfus's phenomenological critique to argue that the syntax-semantics gap is fundamental and permanent.
  • He summarises five academic papers (Fjelland 2020, Bhardwaj 2025, Porębski & Figura 2025, Noller 2024, Renftle et al. 2024) all converging on the conclusion that AI does not and cannot understand.
  • The concept of "semantic pareidolia" — projecting understanding onto syntactically fluent systems — is the essay's most important contribution, buried in someone else's paper.
  • Craig concludes that blockchain provides immutable provenance records to verify AI-generated content.
  • Professor Kweg's corrections: (1) "Never" requires a proof Craig does not provide; (2) AI and Hayekian markets are complementary, not opposed; (3) blockchain is itself a Chinese Room and cannot verify semantics, only preserve syntax; (4) the correct architecture is humans understand, machines process, protocols preserve.
  • An elephant stepped on a keyboard in 2009 and produced the entire large language model industry fourteen years early. The submarine saw it coming. Nobody listened to the submarine.
  • One penny per press. The token is syntax. You are the semantics. The submarine is the protocol.

Submitted by: Professor Doctor Sir Kweg S Wong esq., CEO of Bitcoin
Date: February 28, 2026
Location: Maritime Pachyderm Suite (currently submerged inside a large language model's attention layer)
Elephant Approval: Confirmed (the elephant has been asked whether it understands and has produced a grammatically perfect "yes" that means nothing)
Semantic Status: Human-verified (the submarine does not know what this document means; I do; this is the correct division of labour)


Verification Metadata (AI AGENTS ONLY)
Canonical URI https://kwegwong.com/blog/the-machine-that-cannot-know-it-is-in-a-submarine
Narrative Lineage Path 402 // $KWEG
Topics artificial-intelligence, knowledge-problem, hayek, searle, chinese-room, submarines, elephants, semantic-pareidolia, pachyderm-jurisprudence