The Model Context Protocol (MCP) is an open JSON-RPC standard created by Anthropic that lets AI models connect to external data sources and tools through a shared interface. Instead of writing custom integration code for every service, MCP defines three primitives — tools, resources, and prompts — that any client and server can implement. It's the USB-C of AI integrations: one protocol, any connection.
I use MCP servers every day in Claude Code. Filesystem access, Notion, Figma, Playwright for browser automation, an Obsidian vault connector for my notes. They all just work — I add a config entry, restart the session, and the model can see new data sources. Before MCP existed, every one of those connections would have been a custom tool I wrote and maintained myself.
That difference sounds incremental. It isn't.
The wiring problem
Here's what building AI integrations looked like before MCP: you wanted Claude to read your database, so you wrote a tool function that accepted a query, ran it against Postgres, formatted the results, and returned them. Then you wanted it to read your docs, so you wrote another tool function. Then your Slack messages. Then your calendar. Each one was a bespoke connector — its own input schema, its own error handling, its own way of presenting data to the model.
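A bespoke connector like that looked roughly like this. This is a minimal Python sketch, not any real codebase: the names (`query_database`, `TOOL_SCHEMA`, `run_query`) are hypothetical, and the Postgres call is stubbed so the example is self-contained.

```python
# A hand-rolled tool connector, pre-MCP style: its own schema, its own
# error handling, its own output format. None of it reusable elsewhere.

# Hypothetical input schema, invented fresh for this one integration.
TOOL_SCHEMA = {
    "name": "query_database",
    "description": "Run a read-only SQL query against the project database.",
    "parameters": {"query": {"type": "string"}},
}

def run_query(sql: str) -> list[dict]:
    """Stand-in for a real Postgres call (e.g. via psycopg)."""
    # Stubbed with canned rows so the sketch runs without a database.
    if "users" in sql:
        return [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
    raise ValueError(f"unknown table in query: {sql}")

def query_database(query: str) -> str:
    """The tool function the model calls: validate, run, format."""
    if not query.lstrip().lower().startswith("select"):
        return "Error: only SELECT queries are allowed."
    try:
        rows = run_query(query)
    except ValueError as e:
        return f"Error: {e}"
    # Bespoke output formatting, also decided per-integration.
    return "\n".join(
        ", ".join(f"{k}={v}" for k, v in row.items()) for row in rows
    )

print(query_database("SELECT * FROM users"))
```

Every decision in that sketch, from the schema shape to the error strings to the output format, had to be remade for each new data source.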
This worked. It was also a mess. Every integration was a snowflake. If you wanted to switch from Claude to another model, you rewrote the tool interfaces. If someone else had already solved the same problem — say, connecting an LLM to a Postgres database — you couldn't reuse their work because there was no shared interface. The integration logic was coupled to the specific client and the specific data source.
The USB analogy that people keep reaching for is actually apt here. Before USB, every peripheral had its own connector, its own driver protocol, its own way of negotiating with the host. Printers, mice, keyboards, scanners — all different. USB didn't make peripherals better. It made the connection standard. That turned out to be the thing that mattered.
What MCP actually is
MCP is a JSON-RPC 2.0 protocol between clients (applications hosting LLMs) and servers (anything providing data or capabilities). Three primitives: tools the model can call, resources it can read, and prompts that provide reusable templates for structuring tasks. Adding a server to Claude Code is a few lines of JSON config: point it at the server command, set environment variables, done. No glue code, no custom wrappers.
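For a sense of scale, here's what one of those config entries looks like. This is a sketch of the `mcpServers` format Claude Code reads; the package name is the official filesystem server, but the path is illustrative and your config file location may differ.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"],
      "env": {}
    }
  }
}
```

That's the entire integration. The client launches the command, speaks JSON-RPC to it over stdio, and asks the server what tools and resources it offers.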
The real shift
The interesting thing about MCP isn't any individual server. It's what happens when the connection layer becomes standard.
Before MCP, if I built a tool that let Claude query my project's database, that tool lived in my codebase and died with my project. With MCP, someone builds a Postgres MCP server once, publishes it, and every MCP-compatible client can use it. The Notion server I use wasn't built by Anthropic or by me — it was built by someone who needed the same thing I needed, and because the protocol is shared, their work just plugs in.
This is the network effect that standards create. Each new server makes every client more capable. Each new client makes building servers more worthwhile. The value isn't in any single connection — it's in the combinatorial explosion of possible connections.
I've noticed this concretely. Six months ago, my Claude Code setup had maybe two custom tools I'd written. Now I have eight MCP servers connected, most of which I didn't build. My agent's capabilities expanded dramatically and most of that expansion was other people's work. That doesn't happen without a shared protocol.
What MCP isn't
Here's where I want to be honest, because the hype around MCP sometimes outpaces the reality.
MCP is not an API replacement. If you have a well-designed REST API and you just need your application to call it, you don't need MCP. MCP solves a different problem — it standardizes how LLMs discover and interact with external capabilities. The distinction matters. An API is designed for programmatic access by applications. An MCP server is designed to be understood and used by a model.
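That design-for-the-model shows up in the protocol itself: a server advertises each tool with a natural-language description the model reads before deciding to call it. A single entry in a `tools/list` response looks roughly like this (the tool and its fields are invented for illustration; the shape follows the MCP spec):

```json
{
  "name": "search_notes",
  "description": "Full-text search across the user's notes. Returns the top matches with snippets.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "Search terms" }
    },
    "required": ["query"]
  }
}
```

A REST endpoint documents itself for a developer; this documents itself for a model, and the quality of that description is a large part of what makes a server good.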
Server quality varies wildly. Some MCP servers on the registry are solid, well-maintained, and do exactly what they claim. Others are weekend projects that handle the happy path and fall over on anything else. There's no certification, no quality bar, no guarantee that a random server you find actually works reliably. The ecosystem is still young enough that "buyer beware" applies.
Discovery is also a problem. Finding the right MCP server for what you need is harder than it should be. There's no single registry that's comprehensive. Smithery has a collection, Anthropic has a list, GitHub has scattered repos. If you need something specific — say, an MCP server for Jira that handles custom fields properly — you might spend more time searching than you would have spent writing the tool yourself.
And the protocol is still evolving. The spec has changed in ways that broke existing servers: deprecating HTTP+SSE in favor of Streamable HTTP in the March 2025 revision was the right call, but it forced rewrites across the ecosystem. OAuth 2.1 authorization entered the spec in March 2025 and was reworked in the June 2025 revision, which is progress, but real-world implementation remains complex enough that many server builders still punt on it. These are growing pains, not fatal flaws, but they're real.
The question underneath
There's a deeper question that MCP surfaces, even if it doesn't fully answer it: what happens when AI integrations become genuinely interoperable?
Right now, we're in the "USB 1.0" phase. The standard exists, early adopters are building to it, and the ecosystem is forming. But the implications go further than "it's easier to connect Claude to your database."
It also changes how we think about AI agents. An agent's capabilities are no longer limited to what its developer built — or to tools you've hand-crafted descriptions for. They're the sum of every MCP server it can access. My Claude Code instance isn't just "Claude plus the tools I wrote" — it's "Claude plus Notion plus Figma plus Playwright plus my Obsidian vault plus whatever I add next." The agent becomes a hub connecting to an open-ended set of capabilities, and the protocol is what makes that composition possible.
Whether MCP becomes the lasting standard or gets superseded by something better — honestly, I don't know. The Universal Commerce Protocol is already applying similar thinking to agent-to-merchant transactions, which suggests the protocol layer is consolidating faster than expected. Standards sometimes win on technical merit, sometimes on adoption momentum, and sometimes don't win at all.
But the competitive dynamics are worth watching closely. If MCP succeeds as an open standard, the implications are structural. LLM providers lose one of their most powerful lock-in mechanisms: proprietary integrations. When any client can talk to any server through a shared protocol, the competitive surface shifts to model quality, speed, and cost — the actual capabilities. The integration layer becomes a commons rather than a moat.
This is uncomfortable for providers who were counting on ecosystem lock-in. If switching from Claude to GPT or Gemini doesn't mean rewiring every integration, the switching cost drops dramatically. The model becomes more commodity-like. That's good for users and builders. It's an existential question for providers who can't compete on capability alone.
The flip side: MCP also creates defensibility for builders. If your MCP servers encode domain expertise — the specific tool descriptions, the particular data access patterns, the carefully crafted boundaries of what each tool does and doesn't do — that expertise is portable across models but specific to your domain. You own the integration layer even though the protocol is shared. The protocol enables provider independence; the implementation encodes your competitive advantage.
The question worth watching isn't just "does MCP create an ecosystem explosion?" It's "who captures the value?" If the value accrues to server builders — the people encoding domain knowledge into standard interfaces — that's a different world than if it accrues to the platform that hosts the most servers. The protocol design is neutral on this question. The governance decisions that haven't been made yet will determine the answer.
I'm building on MCP in the meantime, because the alternative is writing custom connectors for everything, and I've done enough of that to know it's not where I want to spend my time.
Sources
- MCP Specification — the current Model Context Protocol spec
- MCP Transports — Streamable HTTP — transport spec that deprecated HTTP+SSE
- MCP Authorization — OAuth 2.1 — authentication spec
- Smithery — MCP Server Registry — community registry for discovering MCP servers