I use MCP servers every day in Claude Code. Filesystem access, Notion, Figma, Playwright for browser automation, an Obsidian vault connector for my notes. They all just work — I add a config entry, restart the session, and the model can see new data sources. Before MCP existed, every one of those connections would have been a custom tool I wrote and maintained myself.
That difference sounds incremental. It isn't.
The wiring problem
Here's what building AI integrations looked like before MCP: you wanted Claude to read your database, so you wrote a tool function that accepted a query, ran it against Postgres, formatted the results, and returned them. Then you wanted it to read your docs, so you wrote another tool function. Then your Slack messages. Then your calendar. Each one was a bespoke connector — its own input schema, its own error handling, its own way of presenting data to the model.
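To make that concrete, here's roughly what one of those bespoke connectors looked like. This is a hypothetical sketch rather than code from any real project: the tool name, schema, and handler are invented for illustration, and the database access here happens to use the node-postgres client.

import { Pool } from "pg";

// One hand-rolled "tool": its own schema, its own error handling,
// its own way of formatting results for the model.
const queryDatabaseTool = {
  name: "query_database",
  description: "Run a read-only SQL query against the project database",
  input_schema: {
    type: "object",
    properties: { sql: { type: "string" } },
    required: ["sql"],
  },
};

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function runQueryDatabase(input: { sql: string }): Promise<string> {
  try {
    const result = await pool.query(input.sql);
    // Truncate and stringify so the model gets something readable.
    return JSON.stringify(result.rows.slice(0, 50), null, 2);
  } catch (err) {
    return `Query failed: ${(err as Error).message}`;
  }
}

Every connector repeated this pattern with different details, and none of it was reusable outside the project it was written for.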
This worked. It was also a mess. Every integration was a snowflake. If you wanted to switch from Claude to another model, you rewrote the tool interfaces. If someone else had already solved the same problem — say, connecting an LLM to a Postgres database — you couldn't reuse their work because there was no shared interface. The integration logic was coupled to the specific client and the specific data source.
The USB analogy that people keep reaching for is actually apt here. Before USB, every peripheral had its own connector, its own driver protocol, its own way of negotiating with the host. Printers, mice, keyboards, scanners — all different. USB didn't make peripherals better. It made the connection standard. That turned out to be the thing that mattered.
What MCP actually is
MCP — Model Context Protocol — is a client-server protocol built on JSON-RPC 2.0. An MCP client is any application that hosts an LLM. An MCP server is anything that provides data or capabilities. The protocol defines how they talk to each other: how a client discovers what a server offers, and how it calls what it finds.
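To make "JSON-RPC" concrete, here's roughly what the opening handshake looks like on the wire: the client sends an initialize request, the server replies with its info and capabilities. The message shape follows the MCP spec; the protocol version string and the capability details vary by spec revision, so treat the specific values as placeholders.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "tools": {}, "resources": {} },
    "serverInfo": { "name": "example-server", "version": "0.1.0" }
  }
}

After that exchange, everything else is ordinary request-response traffic over the same connection.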
There are three primitives. Tools are functions the model can call — search a database, create a file, send a message. Resources are data the model can read — a schema, a document index, a configuration file. Prompts are reusable templates that structure how the model approaches a task. Three primitives. That's it.
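Each primitive comes with a discovery method. A client asks a server what it offers with tools/list (resources/list and prompts/list work the same way) and gets back names, descriptions, and JSON Schemas it can hand to the model. The envelope below follows the spec; the search_notes tool itself is invented for illustration.

{ "jsonrpc": "2.0", "id": 2, "method": "tools/list" }

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "search_notes",
        "description": "Full-text search across the vault",
        "inputSchema": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    ]
  }
}

The model never sees this wire format; the client turns it into tool definitions the model can call, which is exactly the glue code you no longer write yourself.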
The connection itself is almost comically simple. In Claude Code, adding an MCP server to your project looks like this:
{ "mcpServers": { "obsidian-vault": { "command": "npx", "args": ["-y", "@smithery/cli@latest", "run", "obsidian-vault-mcp"], "env": { "VAULT_PATH": "/path/to/your/vault" } } } }
That's a real config from my setup. Point it at the server command, give it whatever environment variables it needs, done. The model now has access to every tool and resource that server exposes. No glue code, no custom wrappers, no integration middleware.
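The server side isn't much heavier. Here's a minimal sketch using the official TypeScript SDK, exposing a single tool over stdio. The search_notes tool is invented for illustration, and the SDK's surface has shifted between versions, so check the current @modelcontextprotocol/sdk docs rather than copying this verbatim.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Declare the server and the one tool it exposes.
const server = new McpServer({ name: "vault-search", version: "0.1.0" });

server.tool(
  "search_notes",
  { query: z.string() },
  async ({ query }) => ({
    // A real server would hit the vault index here.
    content: [{ type: "text", text: `No results for "${query}" (stub)` }],
  })
);

// stdio transport: the client launches this process and talks over stdin/stdout.
await server.connect(new StdioServerTransport());

That's the whole program. The input schema is declared once, the client discovers it automatically, and any MCP-compatible client can use it without knowing anything about how the vault is indexed.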
The real shift
The interesting thing about MCP isn't any individual server. It's what happens when the connection layer becomes standard.
Before MCP, if I built a tool that let Claude query my project's database, that tool lived in my codebase and died with my project. With MCP, someone builds a Postgres MCP server once, publishes it, and every MCP-compatible client can use it. The Notion server I use wasn't built by Anthropic or by me — it was built by someone who needed the same thing I needed, and because the protocol is shared, their work just plugs in.
This is the network effect that standards create. Each new server makes every client more capable. Each new client makes building servers more worthwhile. The value isn't in any single connection — it's in the combinatorial explosion of possible connections.
I've noticed this concretely. Six months ago, my Claude Code setup had maybe two custom tools I'd written. Now I have eight MCP servers connected, most of which I didn't build. My agent's capabilities expanded dramatically, and most of that expansion was other people's work. That doesn't happen without a shared protocol.
What MCP isn't
Here's where I want to be honest, because the hype around MCP sometimes outpaces the reality.
MCP is not an API replacement. If you have a well-designed REST API and you just need your application to call it, you don't need MCP. MCP solves a different problem — it standardizes how LLMs discover and interact with external capabilities. The distinction matters. An API is designed for programmatic access by applications. An MCP server is designed to be understood and used by a model.
Server quality varies wildly. Some MCP servers on the registry are solid, well-maintained, and do exactly what they claim. Others are weekend projects that handle the happy path and fall over on anything else. There's no certification, no quality bar, no guarantee that a random server you find actually works reliably. The ecosystem is still young enough that "buyer beware" applies.
Discovery is also a problem. Finding the right MCP server for what you need is harder than it should be. There's no single registry that's comprehensive. Smithery has a collection, Anthropic has a list, GitHub has scattered repos. If you need something specific — say, an MCP server for Jira that handles custom fields properly — you might spend more time searching than you would have spent writing the tool yourself.
And the protocol is still evolving. The spec has changed in ways that broke existing servers. Transport mechanisms are still being debated — stdio works for local servers but doesn't scale to remote ones the way HTTP would. Authentication for remote MCP servers is genuinely unsolved in the current spec. These are growing pains, not fatal flaws, but they're real.
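For what it's worth, some clients can already reach remote servers; the unsettled part is how uniformly. A remote entry looks roughly like the config below, but the type and url keys are an assumption about one client's format rather than anything the protocol itself pins down, and you'd still have to sort out authentication separately.

{
  "mcpServers": {
    "remote-jira": {
      "type": "sse",
      "url": "https://example.com/mcp"
    }
  }
}

Compare that with the stdio entry earlier: same file, different transport, and the differences between clients are exactly the kind of thing the spec is still working out.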
The question underneath
There's a deeper question that MCP surfaces, even if it doesn't fully answer it: what happens when AI integrations become genuinely interoperable?
Right now, we're in the "USB 1.0" phase. The standard exists, early adopters are building to it, and the ecosystem is forming. But the implications go further than "it's easier to connect Claude to your database."
If any MCP client can talk to any MCP server, then the competitive surface shifts. LLM providers stop competing on who has the best built-in integrations and start competing on reasoning quality, speed, cost — the actual model capabilities. The integration layer becomes a shared commons rather than a proprietary moat. That's good for users. It might be uncomfortable for providers who were counting on lock-in through integrations.
It also changes how we think about AI agents. An agent's capabilities are no longer limited to what its developer built. They're the sum of every MCP server it can access. My Claude Code instance isn't just "Claude plus the tools I wrote" — it's "Claude plus Notion plus Figma plus Playwright plus my Obsidian vault plus whatever I add next." The agent becomes a hub connecting to an open-ended set of capabilities, and the protocol is what makes that composition possible.
Whether MCP becomes the lasting standard or gets superseded by something better — honestly, I don't know. Standards sometimes win on technical merit and sometimes win on adoption momentum and sometimes don't win at all. What I do know is that the problem MCP addresses is real. The custom-wiring approach to AI integrations doesn't scale, and whoever solves the interoperability problem — whether it's MCP or something else — changes how the entire ecosystem develops.
The question worth watching: does a shared protocol for AI integrations create the same kind of ecosystem explosion that USB, HTTP, and TCP/IP created in their domains? Or does the pace of change in AI move too fast for any protocol to stabilize?
I don't have the answer. But I'm building on MCP in the meantime, because the alternative is writing custom connectors for everything, and I've done enough of that to know it's not where I want to spend my time.