Every agentic organization will need an operating manual. Not a handbook for humans — a machine-readable governance layer that humans and agents both consume at runtime. I call it the agentic operating manual: the document that tells every actor in the system — human or AI — how decisions get made, how context flows, and what the boundaries are.
Every team I've worked with has the same ratio: maybe a quarter of the week goes to work that actually generates value. The rest is coordination. Syncs about syncs. Status updates for people who needed status updates because they weren't in the last sync. Context transfers between team members who each hold a fragment of the full picture, and no one holds all of it.
When teams add AI to the workflow, the execution gets faster. The coordination overhead stays exactly the same. Claude can draft a landing page in minutes. Getting alignment on what the landing page should say still takes three meetings and a Slack thread that devolves into a debate about button colors.
That's the question worth asking. Not "how do we use AI better?" but "why does the entire org model assume humans are the only coordination layer?"
The Coordination Tax Is Structural
A Harvard Business School field experiment studied 776 professionals at Procter & Gamble working on real product innovation challenges. AI-augmented teams were three times more likely to produce ideas ranked in the top 10% by independent experts. The mechanism: AI broke down functional silos. R&D people produced more commercially viable ideas. Marketing people produced more technically grounded ideas. A single person with AI matched the output quality of a two-person team without it.
The researchers found that AI was absorbing the coordination cost that used to require a second human. The synthesis that previously required a meeting between marketing and engineering — the back-and-forth, the translation between domains, the alignment on what "technically feasible" actually means — was happening inside a single person's workflow.
This maps directly to what we're seeing with solo founders. Ben Sira hit $2.5 million ARR with zero employees at Pulsia. Maor Shlomo bootstrapped Base44 to 300,000 users and roughly $3.5 million. These numbers are interesting, but they're not the story. The story is what these builders don't spend time on: no standups, no alignment meetings, no Jira tickets, no context transfers. One person with full context, making decisions at the speed of conviction.
The instinct is to dismiss this as a green-field phenomenon. Solo founders don't deal with legacy systems, integrations, other teams. Fair. But the P&G data says something different. It says the coordination overhead isn't a side effect of organizational complexity — it's a structural property of human-only coordination. AI doesn't just speed up the work. It absorbs the cost of combining perspectives that used to require putting two people in a room.
We've spent the last two years optimizing execution — faster code, faster content, faster analysis. Meanwhile, the coordination layer eats the gains. Cursor ships 20 parallel agents on cloud VMs. Anthropic deploys model updates every few weeks. The execution side is compressing toward zero marginal cost. But the org model is still built for a world where coordination requires humans talking to humans.
That's the bottleneck. And it's not a tooling problem.
What GitLab Actually Did
When people talk about GitLab's remote-first model, they usually focus on the "no office" part. That misses the architectural insight.
GitLab's handbook is roughly 2,000 pages. It documents everything: how decisions get made, how to expense a meal, how to disagree with your manager, how to communicate asynchronously, what "done" means for different types of work. It's publicly accessible. Anyone can read it.
The part most people miss: the handbook wasn't documentation about how GitLab works. It WAS how GitLab works. New employees didn't learn the culture from other people. They learned it from the handbook, because the handbook was the canonical source of truth. If the handbook said one thing and a manager said another, the handbook won.
This is the same pattern as SOUL.md in agent systems. In the agentic ecosystem, a SOUL.md is a governance document that tells an agent who it is, how it operates, and what its boundaries are. It's not a description of the agent's behavior. It's the operational layer the agent reads at runtime. The agent's behavior IS the document.
I've written SOUL.md files for game masters, for editorial systems, for code review agents. The pattern is always the same: you encode identity, values, decision boundaries, and behavioral constraints into a document that the agent consumes as its operating instructions. The agent doesn't interpret the document. It executes it.
GitLab did this for humans. The handbook was, in effect, a SOUL.md for an organization — machine-readable if you squint (structured, searchable, canonical), human-legible by design. It governed behavior by being the single reference point that everyone aligned to.
The gap nobody has filled: GitLab's handbook governs human-to-human coordination. Nobody has written the handbook for human-to-agent coordination. That's the missing document.
What an Agentic Operating Manual Actually Contains
A traditional handbook is read by humans. An agentic operating manual is read by humans AND agents. That distinction changes everything about the document's nature.
When an agent reads the operating manual at runtime, it knows: what decisions it can make autonomously, what requires human approval, how to escalate, what the org's priorities are this week, who owns what domain. The manual governs runtime behavior — the coordination substrate that replaces the meeting where you would have explained all of this to a new team member.
The pattern is already visible in personal knowledge management systems: a coordination model that defines agent roles, exclusive write paths, shared write paths with format contracts, and a stage-based ownership protocol. One agent handles content intelligence. Another handles vault orchestration. A third handles ad-hoc capture. Each agent reads the same governance document. Each knows its boundaries. None of them need to "sync" with each other because the document IS the sync.
Scale that up from a personal vault to an organization, and you have the agentic org operating manual.
The key architectural claim: this is a living document agents consume at runtime — versioned, structured, and machine-readable by design. Update the manual, and every agent in the system updates its behavior on the next run. Documentation describes a process. This one executes it.
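As a concrete sketch: a minimal, hypothetical version of such a manual as structured data, with a loader that hands an agent its runtime context. The keys, role names, and paths here are illustrative assumptions, not a standard; in practice the manual would live in a versioned YAML or TOML file.

```python
# A hypothetical operating manual as structured data an agent loads on
# every run. Keys, roles, and paths are illustrative, not a standard.
OPERATING_MANUAL = {
    "version": "2026-02-14",
    "priorities": ["ship onboarding revamp", "reduce churn"],
    "roles": {
        "content-agent": {"writes": ["drafts/"], "reads": ["vault/"]},
        "review-agent": {"writes": ["reviews/"], "reads": ["drafts/"]},
    },
}

def runtime_context(manual: dict, role: str) -> dict:
    """What one agent knows at the start of a run: its boundaries plus
    the org's current priorities. No sync meeting required."""
    spec = manual["roles"][role]
    return {
        "version": manual["version"],
        "priorities": manual["priorities"],
        "may_write": spec["writes"],
        "may_read": spec["reads"],
    }
```

Because every agent loads the same document at run start, updating the manual is the deployment: the next run picks up the new version automatically.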
The Four Layers
After building coordination systems for agents across different contexts — game engines, editorial pipelines, code review, business operations — the structure converges on four layers.
Layer 1: Decision Architecture. This is the load-bearing wall. Every decision in the org falls into one of four classes. Class A: agent decides and executes autonomously, human is notified after the fact. Class B: agent proposes, human confirms before execution. Class C: human decides, agent provides context and analysis. Class D: human only, no agent involvement.
This isn't theoretical. It's the concrete answer to "humans in the loop" — which has become one of the most hand-waved phrases in the industry. Every company says they have humans in the loop. Almost none of them can tell you which humans, for which decisions, with what governance. The decision architecture makes it specific: this type of decision is Class B, the agent proposes, the human confirms within two hours, and if they don't, it escalates.
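A minimal sketch of how the four classes could be encoded so an agent routes decisions mechanically rather than by vibes. The decision types, the policy table, and the two-hour Class B window are hypothetical examples, not prescribed values.

```python
from enum import Enum

class DecisionClass(Enum):
    A = "agent decides, human notified after"
    B = "agent proposes, human confirms"
    C = "human decides, agent provides context"
    D = "human only"

# Hypothetical policy table: decision type -> class.
POLICY = {
    "publish-changelog": DecisionClass.A,
    "send-customer-email": DecisionClass.B,
    "set-quarterly-pricing": DecisionClass.C,
    "hire-or-fire": DecisionClass.D,
}
CONFIRM_WINDOW_HOURS = 2  # Class B: escalate if no human response in time

def route(decision_type: str, hours_waited: float = 0.0) -> str:
    """Return the action the manual prescribes for this decision."""
    cls = POLICY[decision_type]
    if cls is DecisionClass.A:
        return "execute; notify human"
    if cls is DecisionClass.B:
        if hours_waited > CONFIRM_WINDOW_HOURS:
            return "escalate"
        return "propose; await confirmation"
    if cls is DecisionClass.C:
        return "prepare context brief for human"
    return "no agent involvement"
```

The point of the table is that "humans in the loop" stops being a slogan: every decision type has an owner, a class, and a timeout.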
Layer 2: Coordination Model. How context flows between humans and agents without synchronous overhead. What replaces meetings: agent-generated status summaries, decision logs with full context, and context briefs prepared before any human needs to engage with a topic. The async-first principle is about making the coordination layer persistent and queryable — not ephemeral and verbal.
The operating manual defines a handoff protocol: when work transfers between actors (human or agent), the outgoing actor packages context — what was done, what's remaining, what decisions are pending. The incoming actor confirms understanding. No assumptions, no "I think Sarah mentioned something about this in the last standup."
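The handoff protocol could be made concrete as a small data structure that travels with the work; the field names and actors here are illustrative, not a spec.

```python
from dataclasses import dataclass

@dataclass
class HandoffPacket:
    """Context package for a work transfer between actors (human or
    agent). Nothing is assumed; everything is stated."""
    from_actor: str
    to_actor: str
    done: list            # what was completed
    remaining: list       # what is left
    pending_decisions: list  # decisions awaiting an owner
    confirmed: bool = False

    def confirm(self) -> "HandoffPacket":
        # Incoming actor signals understanding; a stricter protocol
        # might also require restating the pending decisions.
        self.confirmed = True
        return self

packet = HandoffPacket(
    from_actor="content-agent",
    to_actor="editor",
    done=["draft v2"],
    remaining=["fact-check sources"],
    pending_decisions=["publish date"],
)
```

A handoff without a confirmed packet simply does not count as a handoff, which is what replaces "I think Sarah mentioned something about this."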
Layer 3: Conviction as a Workflow Input. This is the layer most companies will resist, and the one that matters most.
There's a pattern emerging from every high-performing AI-native builder I've studied: the distinguishing skill isn't taste alone. It's conviction — the willingness to form a hypothesis, commit to testing it, and act at speed. The solo founders who ship fast aren't operating on better information. They're operating with higher conviction and tighter feedback loops.
In the operating manual, conviction becomes operational: document the hypothesis, classify the decision (what class?), set a test, execute using agents to compress the cycle, evaluate the result. The "disagree and commit" pattern — common at Amazon, rare everywhere else — becomes a protocol, not a cultural value printed on a poster. If your recommendation gets overridden, log the override and commit. If the outcome proves you right, it feeds back into the decision architecture.
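One way the conviction loop could be logged, sketched under the assumption that each hypothesis, its decision class, any override, and the outcome are recorded per cycle; all names here are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HypothesisRecord:
    """One conviction cycle as a loggable record."""
    hypothesis: str
    decision_class: str               # A-D, from the decision architecture
    test: str
    outcome: Optional[str] = None
    overridden_by: Optional[str] = None  # "disagree and commit" log

def evaluate(record: HypothesisRecord, outcome: str) -> str:
    record.outcome = outcome
    # An overridden recommendation that proved right is a signal to
    # revisit how this decision type is classified.
    if record.overridden_by and outcome == "hypothesis confirmed":
        return "feed back: consider reclassifying this decision type"
    return "log and close"

record = HypothesisRecord(
    hypothesis="async decision logs cut meeting load by half",
    decision_class="B",
    test="pilot with one team for two weeks",
    overridden_by="eng-lead",  # recommendation overridden; committed anyway
)
```

The override log is what turns "disagree and commit" from a poster into a feedback channel into the decision architecture.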
Layer 4: The Public Layer. This is the GitLab move. Publish the manual — or the substantial parts of it — as methodology.
This isn't content marketing. It's proof of work. The practitioners who read it either implement it themselves (your brand grows) or they realize it's harder than it looks and hire you to implement it (your consulting pipeline fills). The manual demonstrates the methodology by being the methodology. You don't need a pitch deck when the operating manual is public.
Why the First Manual Wins
The competitive argument is straightforward and it compounds.
An org with a working agentic operating manual doesn't just move faster. It can take on scope that traditional orgs structurally cannot. The $10 million market that wasn't viable because the engineering team would cost $3 million a year? Viable now, because agent-augmented teams operate at a fraction of the coordination cost. The experiment with a 20% success rate that nobody would greenlight? Run five of them. The decision architecture tells you which class each experiment falls into and how fast it ships.
This is the Jevons paradox applied to organizational capability. When execution cost drops by an order of magnitude, consumption goes up, not down. When steel got cheap, we didn't build the same buildings for less money — we built skyscrapers. When computing got cheap, we didn't do the same calculations faster — we built the internet. The companies that understand this aren't optimizing their existing operations with AI. They're expanding into territory that was previously structurally impossible.
The talent argument seals it. The people with conviction, taste, and execution bandwidth — the ones every company is trying to hire — will gravitate toward orgs that give them agent-augmented autonomy. The operating manual is what makes that autonomy structured rather than chaotic. Without it, you get cowboys. With it, you get extraordinary people operating at something closer to their actual capacity, which is the whole point.
The Manual Is the First Artifact
If you're building an agentic org — or trying to transform a traditional one — the manual is the first artifact you need. Not the tools. Not the agents. Not the model. The governance layer that tells everyone, human and machine, how to coordinate.
GitLab wrote the handbook that defined remote-first. That document doesn't exist yet for human-agent organizations. It will. And the companies that write theirs first won't just have a head start — they'll have the coordination advantage that makes everything else possible.
The org model is the product now. Document yours. The infrastructure stack that executes it — the readiness ladder from data to agentic-first principles — is what makes documentation operational rather than decorative.
Sources
- Dell'Acqua, F. et al. — "The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise" — HBS Working Paper 25-043 — 2025 (776 P&G professionals, AI-augmented teams 3x more likely to reach top 10%)
- GitLab — GitLab Handbook — ~2,000 pages, publicly accessible
- Anthropic — "Measuring Agent Autonomy" — February 2026