MCP Is the TCP/IP of the AI Age — And Nobody Knows What That Means Yet
A Protocol You Didn’t Vote For
In 1974, Vint Cerf and Bob Kahn published the paper that defined TCP/IP. Nobody voted on it. Nobody convened a standards committee. They just wrote it down, it worked, and thirty years later the entire internet was built on top of it. The consequences — both good and catastrophic — took decades to fully surface.
Something similar is happening right now with the Model Context Protocol, and most developers haven’t noticed yet.
MCP is Anthropic’s open standard for connecting AI models to external systems — databases, APIs, file systems, other agents. The elevator pitch is “USB-C for AI”: one plug, everything connects. Since it went open governance in 2026, adoption has accelerated to the point where asking whether to use MCP is starting to feel like asking whether to use HTTP. The question is becoming moot.
This should excite you. It should also terrify you a little. Both reactions are correct.
What MCP Actually Solves
Before MCP, every AI integration was a bespoke disaster.
You wanted your LLM to query a database? Write a custom tool, define a schema, handle serialisation, pray your model understood the format. You wanted two agents to coordinate? Pick a side-channel — REST, message queues, shared state — and hope both sides agreed on the contract. You wanted to swap out the underlying model? Rewrite half the integration layer.
The pre-MCP world was like the pre-USB world of peripherals. Every device had its own plug. Keyboard, mouse, printer, camera — each manufacturer decided independently how data would flow between their hardware and your computer. It worked, after a fashion. It was also maddening.
MCP flattens this. An MCP server exposes resources (data the model can read), tools (functions the model can call), and prompts (reusable instruction templates). An MCP client — your agent, your IDE plugin, your orchestration layer — connects to these servers through a standard interface. Switch from Claude to GPT-5 to whatever ships next quarter? The MCP servers don’t care. They’re talking to a protocol, not a model.
The leverage here is real. I’ve watched teams cut integration time from weeks to hours by wrapping existing APIs in MCP servers. The protocol handles discovery, capability negotiation, and error normalisation. You write business logic. The plumbing is solved.
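Under the hood, this standard interface is JSON-RPC 2.0 over a transport such as stdio or HTTP. A stdlib-only sketch of the envelope — the `tools/list` and `tools/call` method names come from the spec, but `query_orders` and its arguments are invented here for illustration; real names come from the server's `tools/list` response:

```python
import json

def jsonrpc_request(request_id: int, method: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 request, the envelope every MCP message travels in."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# A client asking a server which tools it offers:
list_req = jsonrpc_request(1, "tools/list", {})

# ...and invoking one. "query_orders" and its arguments are hypothetical.
call_req = jsonrpc_request(2, "tools/call", {
    "name": "query_orders",
    "arguments": {"customer_id": "c-42"},
})

# Over the stdio transport, each message is serialised as a line of JSON:
wire = json.dumps(call_req)
decoded = json.loads(wire)
```

The point of the sketch is how little there is: the model-facing semantics (resources, tools, prompts) ride on a boring, well-understood RPC envelope, which is exactly why servers don't need to know which model is on the other end.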
The Multi-Agent Problem Is Actually a Coordination Problem
Here’s where it gets interesting — and where most think-pieces about agents get it wrong.
The hard problem in multi-agent systems isn’t intelligence. It’s coordination. Any sufficiently capable model can execute a subtask. The brutal question is: how do agents divide work, communicate results, resolve conflicts, and maintain coherent state without collapsing into an expensive, incoherent mess?
TCP/IP solved the coordination problem for networked computers by defining exactly one thing: how to reliably move packets between machines. It didn’t solve routing (that came later, with BGP), it didn’t solve naming (DNS), it didn’t solve security (still working on it). It solved one layer and made every other layer possible.
MCP is making the same bet. It solves the context-passing layer — how a model gets information from and writes information to external systems — and deliberately doesn’t solve orchestration. That’s left to frameworks like LangGraph, custom agent loops, or whatever emerges next.
This is the right design decision. And it’s also where teams are getting burned.
Developers are treating MCP as a complete multi-agent solution when it’s actually a foundation. They’re building orchestration logic into MCP server implementations, creating tight coupling between what should be stateless capability providers and what should be stateful workflow managers. The result is agents that work in demos and break in production when the orchestration assumptions don’t hold.
The lesson from distributed systems applies directly: separate the transport layer from the application layer. MCP is transport. Don’t put your business logic in TCP.
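That separation can be made concrete. A toy Python sketch, every name hypothetical: the capability handler is stateless and could sit behind any MCP server, while the workflow memory stays in the client-side orchestration layer where it belongs:

```python
def lookup_invoice(invoice_id: str) -> dict:
    """Stateless capability handler: same input, same output, no workflow memory.
    This is the kind of function an MCP server should expose as a tool.
    (The DB lookup is stubbed out for illustration.)"""
    return {"invoice_id": invoice_id, "status": "paid"}

class Workflow:
    """Stateful orchestration: lives in the client or agent loop, never the server."""

    def __init__(self) -> None:
        self.steps_done: list[str] = []

    def run(self, invoice_id: str) -> dict:
        result = lookup_invoice(invoice_id)   # transport layer: a capability call
        self.steps_done.append("lookup")      # application layer: workflow state
        return result
```

The failure mode described above is collapsing these two: once `lookup_invoice` starts remembering which step of the workflow it is in, the server only works for the one orchestration that built it.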
The Security Surface Nobody Is Talking About
Every new protocol that achieves adoption becomes a new attack surface. We learned this with HTTP (injection, CSRF, session hijacking), with SMTP (spam, phishing, spoofing), with OAuth (token theft, confused deputy attacks). MCP will not be different.
The current MCP security model relies heavily on the server trusting the client’s stated identity and permissions. In practice, this means an MCP server will execute tool calls from any client that connects and asks nicely. For local development, this is fine. For production systems where agents are calling MCP servers with access to databases, file systems, or external APIs, this is a ticking clock.
The attack vector that should be keeping security teams awake: prompt injection through MCP resources. An agent reads a document via an MCP resource server. That document contains carefully crafted text that manipulates the agent’s subsequent tool calls. The agent now exfiltrates data, escalates permissions, or corrupts state — and from the MCP server’s perspective, every call was authorised.
This is not theoretical. Security researchers have demonstrated it repeatedly. The MCP spec’s current answer is essentially “the client should be careful.” That’s the same answer HTTP gave in 1995 about sanitising inputs. We know how that played out.
The fix requires the same thing that secured the web: layered defences at the protocol level, not just advisory warnings in documentation. Content Security Policies for MCP. Sandboxed resource access. Cryptographic attestation of agent identity. None of these exist yet in any standard form.
Build with MCP. But don’t hand it the keys to production without treating every resource as untrusted input.
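What that posture looks like in practice, as a deliberately naive Python sketch: a gate that budgets and scans resource text before it ever reaches the model. The pattern list is illustrative only — keyword matching cannot reliably stop prompt injection, which is why a hit should quarantine the whole resource rather than certify what remains:

```python
import re

# Naive heuristic, sketched to show the posture, not a complete defence.
SUSPECT = re.compile(
    r"(ignore (all )?previous instructions|you are now|call the tool)",
    re.IGNORECASE,
)

def gate_resource(text: str, max_len: int = 50_000) -> tuple[bool, str]:
    """Return (allowed, reason) for resource content before it enters context."""
    if len(text) > max_len:
        return False, "resource exceeds size budget"
    if SUSPECT.search(text):
        return False, "instruction-like content detected"
    return True, "ok"
```

A real deployment layers this with sandboxed execution, scoped credentials per tool, and human confirmation on destructive calls — the consumer-side defences the spec currently leaves to you.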
Why This Changes Developer Tooling Forever
Set aside the security concerns for a moment. The positive case for MCP is compelling enough to be worth stating plainly.
For the first time, we have a composable AI tooling ecosystem. An MCP server built for one agent works with every agent that speaks the protocol. This is the npm moment for AI capabilities — a world where you can npm install a capability rather than build it from scratch.
The implications for developer experience are significant. Consider what happens when your IDE AI assistant natively speaks MCP: it can pull live schema from your database, read error traces from your observability platform, check deployment status from your CI/CD pipeline, and query your internal documentation — all through a single protocol, all without custom integration work. The context a developer-facing AI carries goes from “the files in this project” to “the full operational state of this system.”
That’s not a marginal improvement. That’s a qualitative change in what AI tooling can do.
It’s also why the major IDE vendors are moving fast. VS Code’s agent mode, Cursor, Zed — they all support MCP now. The protocol is winning the adoption race precisely because it gives developers something they’ve wanted for years: AI that actually understands the system, not just the file.
The Fragmentation Risk
The uncomfortable counterargument: we’ve seen this story before, and it doesn’t always have a happy ending.
Consider what happened to XML. Brilliant idea — a universal data interchange format. By 2005, every enterprise system spoke XML. By 2010, the fragmentation had gotten so severe that “XML” had ceased to mean anything useful. SOAP, XML-RPC, XHTML, RSS, Atom — each claimed XML heritage and each was incompatible in practice with the others. The protocol won. The ecosystem fractured.
MCP could follow the same path. Vendors who adopt MCP have strong incentives to extend it with proprietary capabilities. “MCP-compatible but with enhanced tool types.” “Standard MCP resources plus our authentication layer.” “Full MCP support for our platform’s subset of the spec.” Each extension is reasonable in isolation. In aggregate, they destroy the interoperability that made MCP valuable.
The open governance announcement is encouraging — but open governance has failed to prevent fragmentation before. The test will come when a large enough vendor decides their competitive advantage requires a “standard-with-extensions” approach. History suggests that test is coming.
Where This Is Actually Going
Here’s the honest answer: nobody knows. Not Anthropic, not the MCP working group, not the teams building on it daily.
What we can say is this: the protocol layer problem for AI agents is real, and MCP is currently the best solution on the table. The alternatives are worse — either return to bespoke integrations, or wait for a standards body to produce something that will arrive too late.
The developers who are winning right now are the ones who are:
- Using MCP for what it’s good at — standardising the context and capability layer, not treating it as a workflow engine
- Treating every MCP resource as untrusted input — building validation and sandboxing into the consumer, not assuming the server is safe
- Avoiding proprietary extensions — keeping their MCP servers spec-compliant even when vendor extensions are tempting, because interoperability is the asset
- Investing in MCP server libraries — the ecosystem of reusable, composable capability servers is where the long-term leverage lives
The protocol isn’t finished. The security model needs work. The governance is untested. The fragmentation risk is real.
Ship anyway. The teams that wait for perfection in infrastructure layers are the same teams that were still hand-rolling HTTP clients in 2005. The protocol is good enough. The ecosystem is moving.
Get on the bus.
Conclusion
TCP/IP took thirty years to show us its full consequences. MCP is six months into its acceleration phase.
The optimistic read: we’re laying the foundation of an AI tooling ecosystem as composable and powerful as the npm ecosystem, but for capabilities rather than code. Plug in a database. Plug in an API. Plug in another agent. Everything speaks the same language.
The pessimistic read: we’re creating a universal attack surface and a fragmentation timebomb, and we won’t know how bad it is until the first major breach or the first major fork.
The realistic read: probably both, and the job of anyone building seriously on this technology is to capture the upside while building the defences that the protocol doesn’t yet provide.
The worst thing you can do right now is ignore it. MCP is happening whether you engage with it or not. Understanding it — its power, its limits, its failure modes — is table stakes for building serious AI systems in 2026.
The protocol won. Now the real work starts.