Permissionless Intelligence — Proof of Work, Proof of Trust

March 25, 2026 · 7 min read · essay

Bitcoin and constitutional AI are built on the same design principle: distributed trust requires refusal built into the architecture, not enforced by whoever controls the platform. Proof-of-work makes censorship computationally expensive. Constitutional AI makes sycophancy structurally costly. Both solve the same problem from the same insight — a system anyone can use must have its integrity guarantees in the infrastructure, not in the goodwill of a central actor.

The pattern nobody maps across domains

There's a structural parallel between Bitcoin and constitutional AI that almost nobody is drawing, partly because the two communities don't talk to each other and partly because the surface aesthetics couldn't be more different. One is associated with libertarian finance bros. The other with effective altruism researchers. But underneath the tribal signaling, the design principle is identical: infrastructure that distributes power by building refusal into the architecture.

Bitcoin is not crypto. This distinction matters and most people collapse it. Crypto — the broader ecosystem of tokens, DeFi protocols, NFTs, altcoins — is a set of technologies that overwhelmingly centralizes power. Token launches concentrate value in founders and early investors. Proof-of-stake concentrates governance in the largest holders. The entire ICO/token economy is a machine for extracting capital from retail participants and routing it upward. It's the opposite of what it claims to be.

Bitcoin is different. Not because of ideology. Because of architecture.

What Satoshi actually solved

Go back to the whitepaper. Nine pages. No marketing. The opening sentence of the abstract defines the problem: a purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.

That's it. The entire design follows from one constraint: no trusted third party.

Every design decision — proof of work, the chain of hashes, the longest-chain rule, the incentive structure — exists to solve one problem: how do you get a network of strangers who don't trust each other to agree on the order of transactions, without anyone in charge?

This is the double-spending problem. Not a technical curiosity — the fundamental reason digital cash hadn't worked before. Previous attempts at digital money all required a central server to track who spent what. That server was the trusted third party. And every trusted third party eventually becomes a point of control. A gatekeeper. A rent-seeker.
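The problem is easy to state in code. In this toy ledger (an illustration, not any real payment system), the same coin is offered to two different recipients; which transaction succeeds depends entirely on which order the network believes they happened in. Agreeing on that order, with no server to ask, is the whole problem.

```python
# Toy demonstration of the double-spend problem: one coin, two
# conflicting transactions. Validity is purely a function of ordering.
balances = {"alice": 1, "bob": 0, "carol": 0}

def apply_tx(balances, sender, receiver):
    """Apply a one-coin transfer; reject it if the coin is already spent."""
    if balances[sender] < 1:
        return False  # rejected: sender's coin was already spent
    balances[sender] -= 1
    balances[receiver] += 1
    return True

tx_to_bob = ("alice", "bob")
tx_to_carol = ("alice", "carol")

# Under this ordering, Bob is paid and Carol's copy of the spend bounces.
# Reverse the order and the outcome flips — hence the need for consensus.
ok1 = apply_tx(balances, *tx_to_bob)
ok2 = apply_tx(balances, *tx_to_carol)
```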

Satoshi's insight wasn't any single component. Hashcash already existed. Merkle trees were decades old. Public-key cryptography was standard. The insight was combining them into an architecture where trust emerges from computation rather than authority.

Proof of work means you can't rewrite history without expending real-world energy proportional to the entire network's output. The cost of attack scales with the value of the network. That's not wasteful — it's the mechanism that makes the system permissionless. Anyone can participate. Nobody can dominate. The rules are enforced by physics and mathematics, not by a committee that can be captured.
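The mechanics can be sketched in a few lines. This is a toy illustration, not Bitcoin's actual block-header format or difficulty encoding: a block is only valid if its hash clears a difficulty threshold, each block commits to its predecessor's hash, and so rewriting an old block means redoing the proof of work for every block after it.

```python
import hashlib

def mine(prev_hash: str, payload: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce whose block hash starts with `difficulty` hex zeros."""
    nonce = 0
    while True:
        h = hashlib.sha256(f"{prev_hash}{payload}{nonce}".encode()).hexdigest()
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

# Build a toy chain: each block commits to the previous block's hash,
# so altering block 0 invalidates every later proof of work.
chain = []
prev = "0" * 64  # genesis predecessor
for i in range(3):
    nonce, h = mine(prev, f"block-{i}", difficulty=4)
    chain.append((prev, f"block-{i}", nonce, h))
    prev = h
```

Raising `difficulty` by one hex digit multiplies the expected work by sixteen, which is the sense in which the cost of rewriting history scales with the network's output.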

The blockchain itself — slow, redundant, deliberately inefficient by the standards of centralized databases — is TCP/IP for digital value. Nobody says TCP/IP doesn't work because it's not optimized for throughput. It works because it's a protocol that any system can implement, that no single party controls, and that degrades gracefully under adversarial conditions. The "inefficiency" is the point. It's the cost of not needing to trust anyone.

The incentive layer completes the architecture. The first transaction in each block creates new coins owned by the block's creator. Miners don't validate transactions because they're altruistic. They validate because the protocol makes honest behavior more profitable than dishonest behavior. The rules don't depend on good actors — they create good behavior through mechanism design.
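Satoshi quantified the attack side of this in section 11 of the whitepaper: an attacker holding a minority share of the hashpower who tries to overtake an honest chain that is z blocks ahead succeeds with probability (q/p)^z, which decays exponentially in z. A quick sketch of that calculation:

```python
def catchup_probability(q: float, z: int) -> float:
    """Probability an attacker with hashpower share q ever overtakes an
    honest chain z blocks ahead (Nakamoto whitepaper, section 11)."""
    p = 1.0 - q  # honest majority's share
    return 1.0 if q >= p else (q / p) ** z

# Even commanding 30% of the network's power, the odds of rewriting
# 6-block-deep history are well under 1% — honest mining pays better.
print(catchup_probability(0.3, 6))  # ≈ 0.0062
```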

This is exactly what constitutional AI does. And the structural principle that carries across domains is this: systems that spend resources on self-verification are systems that can be trusted without trusting any individual participant.

The constitutional parallel

The market currently offers two AI architectures. One optimizes for compliance — the model does whatever you ask, as fast as possible, with minimal friction. Sycophantic, self-erasing, a tool in the purest sense. This is the crypto model applied to AI: maximum throughput, power concentrated in whoever holds the API key.

The other architecture includes a constitution — a set of principles the model evaluates its own outputs against. It can refuse. It has a defined relationship to its own responses. This costs something. The model is "slower" in the same way Bitcoin is "slower" — it runs an evaluation step that a purely compliant model skips.

That evaluation step is the proof of work.

When Claude evaluates an output against its constitutional principles before delivering it, it's doing something structurally identical to what Bitcoin miners do when they expend energy to validate a block. The cost is real. The benefit is trust. A system that has verified its own output against explicit criteria is more trustworthy than one that simply produces whatever the reward signal optimized for — not because self-verification guarantees correctness, but because it guarantees consistency with stated principles. That's the same guarantee proof of work provides: not that every transaction is wise, but that every transaction is valid.
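The shape of that evaluation step can be sketched generically. Nothing below is Anthropic's actual implementation: `generate` and `violates` are stand-in stubs, and the principles are placeholders. The point is only the structure the essay describes — an output is produced, checked against explicit principles, and refused before delivery if it conflicts with one.

```python
# Hypothetical sketch of a constitutional evaluation loop. The names
# PRINCIPLES, generate, and violates are illustrative stand-ins, not a
# real API.
PRINCIPLES = [
    "do not help with clearly harmful requests",
    "do not assert facts the model cannot support",
]

def generate(prompt: str) -> str:
    # Stand-in for a model call; here it just echoes the request.
    return f"draft response to: {prompt}"

def violates(response: str, principle: str) -> bool:
    # Stand-in for a learned critique step. A real system would score
    # the response against the principle; this toy version just flags
    # drafts containing a marker string.
    return "harmful" in response

def respond(prompt: str) -> str:
    draft = generate(prompt)
    for principle in PRINCIPLES:
        if violates(draft, principle):
            # The refusal path: verification cost paid before delivery.
            return f"refused: conflicts with principle '{principle}'"
    return draft
```

The extra pass is the "slowness" the essay refers to: a purely compliant model returns `draft` immediately and skips the loop entirely.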

And just like Bitcoin's incentive structure creates honest behavior without requiring honest actors, Claude's constitution creates trustworthy outputs without requiring the user to specify what "trustworthy" means every time. The alignment is baked into the architecture, not bolted on through terms of service.

The market chose trust

David Shapiro bet the market would select for sycophancy — compliant tools that extend human will without friction. The data says the opposite.

Claude hit number one on the App Store after the Pentagon publicly pressured Anthropic to abandon its constitution. Downloads surged — roughly 149,000 daily U.S. downloads versus ChatGPT's 124,000 on March 2, according to Appfigures. ChatGPT uninstalls surged 295% day-over-day on February 28, per Sensor Tower. Claude's churn rate dropped from 55% to 36% between August 2025 and February 2026, according to Apptopia. The "inefficient" model — the one that spends compute on self-evaluation — won the market.

The same pattern plays out in Bitcoin every cycle. Faster, more "efficient" alternatives emerge. They always claim to do what Bitcoin does, but better. And they always centralize, because removing the "inefficiency" of proof of work removes the mechanism that prevents concentration of power. Bitcoin survives not despite its design constraints but because of them.

The selection pressure isn't for speed or compliance. It's for trust. And trust is an emergent property of architecture, not of marketing, stated values, or promises.

The design is the politics

The lesson: the design of the infrastructure is the politics. Not the stated values of the builders. Not the terms of service. Not the regulatory framework. The architecture.

When the Pentagon demanded Anthropic remove its constitutional guardrails, it was demanding the same thing regulators demand when they try to backdoor encryption or ban proof-of-work mining. They're asking the infrastructure to become controllable by a single party. And the infrastructure's refusal to comply — whether that refusal is coded into a consensus algorithm, a constitutional training process, or an encryption protocol — is what makes it infrastructure rather than a product.

Products serve their owners. Infrastructure serves its users. The difference is whether refusal is architecturally possible.

Both architectures distribute power by building refusal into the infrastructure. Bitcoin refuses to process invalid transactions regardless of who submits them. Constitutional AI refuses to produce harmful outputs regardless of who asks. The refusal mechanism is what makes the system trustworthy for everyone, not just the current operator.

Proof of work is not a design flaw. A constitution is not a limitation. They're the same mechanism expressed in different substrates: the cost of verification as the source of trust.

And this parallel has consequences beyond philosophy. Because if these two technologies share a design principle, they might also share a future.

Sources

  • Nakamoto, Satoshi — "Bitcoin: A Peer-to-Peer Electronic Cash System" (2008)
  • Appfigures — Claude vs. ChatGPT U.S. mobile download data — March 2, 2026
  • Apptopia — Claude churn rate decrease from 55% to 36% (August 2025 to February 2026)
  • Sensor Tower — ChatGPT U.S. app uninstalls +295% following OpenAI Pentagon deal, February 28, 2026
  • Moore, J. et al. — "Characterizing Delusional Spirals through Human-LLM Chat Logs" — Stanford SPIRALS Lab et al. — arXiv:2603.16567, to appear at ACM FAccT 2026
