
MCP Is What REST Was in 2008

In 2008, REST was the obvious answer to the SOAP/WSDL mess. Simple, stateless, resource-oriented. The concept was clean. The execution, across the industry, was a disaster. Teams slapped HTTP verbs on RPC calls, ignored caching semantics, mixed concerns between resources, and called it REST because the URL had nouns in it. Fifteen years later, we’re still cleaning up systems built that way.

MCP — the Model Context Protocol — is at exactly that inflection point right now. The concept is sound. The tooling is maturing fast. And most teams implementing it are making the same category of mistakes that doomed those early REST APIs.

What MCP Actually Is

MCP is a protocol for connecting LLMs to external context: databases, APIs, filesystems, internal tools. It standardizes how a model asks for data and how servers respond, using a typed tool interface. The model declares what tools it wants to call; the MCP server exposes them; the host mediates the exchange.
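
Concretely, an MCP tool declaration is a name, a description the model reads, and a JSON Schema for the arguments. Here is a minimal sketch of that shape as a plain dict; the field names follow the `name` / `description` / `inputSchema` convention MCP uses, and the tool itself is hypothetical:

```python
# A hypothetical MCP-style tool declaration: the model sees the name and
# description, and the inputSchema (JSON Schema) constrains the arguments
# it can pass.
search_orders_tool = {
    "name": "search_orders",
    "description": "Search orders by customer and date range.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "start_date": {"type": "string", "format": "date"},
            "end_date": {"type": "string", "format": "date"},
        },
        "required": ["customer_id"],
    },
}
```

The description is not documentation for humans; it is the primary signal the model uses to decide whether to call the tool at all.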

The insight that makes it useful is the same insight that made REST useful: a shared contract reduces coupling. If your LLM can speak MCP, it can talk to any MCP-compliant server without custom integration code. If your data sources expose MCP servers, any compliant model or agent can query them.

Morgan Stanley reportedly cut API deployment time from two years to two weeks by combining MCP with a structured workflow framework for their internal systems. That is not a marginal improvement. That’s a signal that something structural has shifted.

The Protocol Is Not the Hard Part

Here is what the MCP tutorials don’t tell you: defining a tool is trivial. Writing a schema and returning JSON is not engineering. The hard part is everything the REST evangelists also failed to communicate in 2008.

Granularity. REST resources and MCP tools have the same granularity problem. Too coarse, and the model has to call a tool that returns 10,000 rows when it needs 3. Too fine, and it takes 40 tool calls to accomplish what should be one. The model pays for every round-trip in latency and token cost. Get granularity wrong and your agent loop becomes a crawl.
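
One way to land in the middle is a single query tool with server-side filtering and an explicit limit, so the model neither pulls 10,000 rows nor issues dozens of micro-calls. A sketch, with all names and data illustrative:

```python
# Hypothetical in-memory data source; in production this is a real query.
ORDERS = [{"id": i, "status": "open" if i % 3 else "closed"} for i in range(100)]

def search_orders(status=None, limit=10):
    """Return at most `limit` orders, filtered server-side.

    The filter and the cap both live in the server, so the model gets
    exactly the slice it asked for in one round-trip.
    """
    rows = [o for o in ORDERS if status is None or o["status"] == status]
    return rows[:limit]
```

The parameters are the granularity dial: every filter you push into the server is a round-trip and a pile of tokens the agent loop doesn't pay for.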

Context pollution. Every tool response goes back into the context window. Verbose responses — full objects with every field, deep nested structures, error traces embedded in data — bloat context fast. In a multi-step agent loop, this compounds. Design MCP tool responses the way you’d design an API response for a bandwidth-constrained mobile client: return only what was asked for, nothing more.
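
The mechanical version of "return only what was asked for" is field projection before the response leaves the server. A sketch, with a hypothetical customer record:

```python
def project(record, fields):
    """Keep only the requested fields from a tool response."""
    return {k: v for k, v in record.items() if k in fields}

# Illustrative record -- the full object carries fields the model
# never asked about, and each one costs context-window tokens.
customer = {
    "id": "c1",
    "name": "Ada",
    "email": "ada@example.com",
    "created_at": "2021-04-02",
    "internal_flags": ["vip", "beta"],
}

# The model asked for the customer's email: return that, not the object.
trimmed = project(customer, {"id", "email"})
```

In a ten-step agent loop, the difference between the trimmed and untrimmed response is multiplied ten times over.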

Authorization at the wrong layer. The convenience of MCP makes it easy to expose everything through a single server and let the model figure out what it should access. This is how you end up with an agent that can read your production database because the prompt didn’t tell it not to. Authorization belongs in the server, not the system prompt. Tools should only expose what the authenticated caller is allowed to see, period.
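
"Authorization belongs in the server" means the check runs in the tool handler, keyed to the authenticated caller, before any data moves. A minimal sketch, assuming a token-to-scopes mapping (all names illustrative):

```python
# Hypothetical scope table; in production this comes from your auth system.
SCOPES = {
    "analyst-token": {"orders:read"},
    "admin-token": {"orders:read", "orders:write"},
}

def call_tool(token, tool_name, required_scope, handler, **args):
    """Refuse the call in the server if the caller lacks the scope.

    The model never gets a chance to 'decide' about access -- the tool
    simply does not execute for an unauthorized caller.
    """
    if required_scope not in SCOPES.get(token, set()):
        raise PermissionError(f"{tool_name} requires scope {required_scope}")
    return handler(**args)
```

Note what is absent: no prompt text, no "please don't touch production" instruction. The denial is structural.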

Treating tools as functions, not resources. The same mistake that turned REST into RPC is appearing in MCP: naming tools getCustomerById, searchOrdersByDate, fetchUserPreferences — verbs masquerading as resources. The model has to read tool descriptions to understand capabilities. When your tool catalog looks like a flat list of imperative verbs, the model’s tool selection degrades. Organize tools around domains and capabilities, not around your internal service boundaries.
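
What "organized around domains" looks like in practice is a catalog where names describe capabilities within a domain rather than mirroring backend RPCs. A purely illustrative sketch:

```python
# A resource-oriented catalog sketch: tools grouped by domain, named for
# the capability, not for the internal service call that backs them.
CATALOG = {
    "customers": {
        "customers.lookup": "Find a customer by id, email, or name.",
        "customers.preferences": "Read a customer's stored preferences.",
    },
    "orders": {
        "orders.search": "Search orders by customer, status, or date range.",
    },
}

def tools_for_domain(domain):
    """List the tool names available in one domain."""
    return sorted(CATALOG.get(domain, {}))
```

Compare `customers.lookup` against three separate `getCustomerById` / `getCustomerByEmail` / `getCustomerByName` tools: one capability, one description for the model to read, one selection decision instead of three.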

The Integration Layer Nobody Asked For

Part of why MCP is spreading fast is that it’s filling a gap that enterprises have been papering over for years. Large organizations have dozens of internal APIs, data sources, and tools, all with different auth schemes, response formats, and access patterns. Connecting an LLM to each of them individually is O(n) integration work — one connector per system. MCP makes it O(1) at the model layer: implement the protocol once on each data source, and any compliant model can query it.

This is exactly the promise the API gateway vendors made, and partly delivered, in the 2015-2020 era. MCP extends that promise to the model tier. The difference is that the consumer is now a language model that benefits from structured, typed interfaces in ways that are qualitatively different from a human developer reading docs.

The second-order effect is what matters: once your internal systems expose MCP servers, you can swap the model layer independently. You’re not locked into a particular LLM because all your integration logic lives in the protocol, not in model-specific tooling. That’s genuinely valuable, and it’s the kind of thing that makes CTOs sign off on infra projects.

Security Is the Gap Everyone Will Hit

There’s a failure mode in MCP that’s already showing up in early production deployments and will become a serious incident within the year: prompt injection through tool responses.

The flow looks like this: agent calls an MCP tool that reads from a user-controlled data source (email, documents, web content). The response contains attacker-crafted instructions embedded in the data. The model, unable to distinguish instructions from data, executes them. The next tool call the model makes is the one the attacker wanted.

This is the “lethal trifecta” pattern — untrusted input, sensitive access, and action capability all converging in one agent loop. MCP makes it structurally easier to build systems that have all three properties without realizing it.

Mitigation is not optional and it’s not simple. You need strict output validation on tool responses before they’re returned to the model, sandboxed execution contexts for tools that touch external data, and rate-limiting on action tools (tools that write, send, or modify state). None of this is built into the protocol by default. It’s your problem.
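
Of those layers, rate-limiting action tools is the most mechanical, so here is a sketch of that one: a sliding-window limiter that caps how fast write/send tools can fire, which bounds the blast radius if an injected instruction does slip through. Numbers and names are illustrative, and this complements, not replaces, output validation and sandboxing:

```python
import time

class ActionLimiter:
    """Sliding-window rate limit for a single action tool."""

    def __init__(self, max_calls, per_seconds):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.calls = []  # timestamps of recent allowed calls

    def allow(self):
        """Return True and record the call, or False if over the limit."""
        now = time.monotonic()
        self.calls = [t for t in self.calls if now - t < self.per_seconds]
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True
```

An agent that is supposed to send one email per task and suddenly tries to send fifty is a signal, and the limiter turns that signal into a hard stop instead of an incident report.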

What Good MCP Architecture Looks Like

A few principles that hold up under production load:

Separate read and write tool surfaces. Your query tools (read access, data retrieval) and your action tools (write, send, mutate) should be distinct MCP servers with separate auth. An agent doing research doesn’t need write tools loaded. Scope what’s available to what’s needed for the task.
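
The scoping rule can be stated in a few lines: tool surfaces are assembled per task, and a research task simply never has action tools in its set. A sketch with hypothetical tool names:

```python
# Two distinct surfaces -- in a real deployment these would be separate
# MCP servers with separate auth, not just separate sets.
READ_TOOLS = {"orders.search", "customers.lookup"}
ACTION_TOOLS = {"orders.refund", "email.send"}

def tools_for_task(task_kind):
    """Load only the surface the task actually needs."""
    if task_kind == "research":
        return set(READ_TOOLS)
    return READ_TOOLS | ACTION_TOOLS
```

A tool that is not loaded cannot be called, no matter what ends up in the context window; that is a far stronger guarantee than any prompt-level instruction.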

Keep tool responses flat and typed. Return typed primitives and flat structures where possible. Avoid embedding blobs, raw HTML, or large prose in tool responses. If you need to return prose, summarize it server-side before returning. The model doesn’t need the full document — it needs the answer.
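
Server-side summarization can be as plain as this sketch: a flat response of typed primitives, with the document body condensed before it ever reaches the model. The record shape and the crude word-cap summarizer are both illustrative:

```python
def summarize(text, max_words=25):
    """Crude stand-in for a real server-side summarizer."""
    words = text.split()
    head = " ".join(words[:max_words])
    return head + ("..." if len(words) > max_words else "")

def document_answer(doc):
    """Flat, typed response: strings and an int, no nested blobs."""
    return {
        "doc_id": doc["id"],                      # str
        "title": doc["title"],                    # str
        "summary": summarize(doc["body"]),        # short str, never the full body
        "word_count": len(doc["body"].split()),   # int
    }
```

Every value is a primitive, every key is predictable, and the ten-page body stays on the server.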

Version your tool schemas. REST APIs that didn’t version broke everything downstream when they changed. MCP tool schemas will change as your data models evolve. Version them from day one. The cost of adding a version field is zero. The cost of a breaking schema change in a deployed agent system is not.
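
"Version them from day one" can be as simple as keying the schema registry by (name, version) and stamping the version into every schema, so old agents keep resolving the schema they were built against. A sketch with illustrative schemas:

```python
# Registry keyed by (tool name, version); schemas here are abbreviated.
SCHEMAS = {}

def register(name, version, schema):
    """Register one version of a tool schema, stamped with its version."""
    SCHEMAS[(name, version)] = {"version": version, **schema}

register("orders.search", 1,
         {"properties": {"customer_id": {"type": "string"}}})
register("orders.search", 2,
         {"properties": {"customer_id": {"type": "string"},
                         "status": {"type": "string"}}})
```

Version 2 can add fields freely; agents pinned to version 1 never see a shape they weren't built for.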

Test tool selection, not just tool execution. Your evals need to include cases where the model chooses the wrong tool, calls a tool unnecessarily, or fails to call a tool it needs. The tool execution path is easy to unit test. The selection path is where agent systems actually fail in production.
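
A selection eval is structurally simple: pairs of prompt and expected tool, checked against whatever the agent actually chose. In this sketch `pick_tool` is a trivial keyword stub standing in for the real model call; everything here is illustrative:

```python
# (prompt, tool the agent should choose) -- your real suite has hundreds.
CASES = [
    ("find open orders for customer c1", "orders.search"),
    ("what is this customer's email address?", "customers.lookup"),
]

def pick_tool(prompt):
    """Stub for the model's tool choice; replace with a real agent call."""
    return "orders.search" if "order" in prompt else "customers.lookup"

def run_selection_evals():
    """Return (prompt, expected, actual) for every case."""
    return [(p, expected, pick_tool(p)) for p, expected in CASES]
```

The point is what gets asserted: not "did `orders.search` return rows" but "did the agent reach for `orders.search` at all" — the failure mode unit tests never see.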

The Window Is Narrow

MCP is becoming the default. It’s already in enterprise pilots at financial institutions, built into developer tooling, and showing up in infrastructure platforms. The teams that understand the protocol deeply — its constraints, its failure modes, its architectural implications — will build systems that hold up. The teams that treat it as a configuration step before the “real” AI work will build systems that look fine in demos and fail in production.

REST rewarded the teams that understood HTTP semantics and ignored the ones who just wanted their JSON to work. MCP will do the same. The protocol is a contract. Take it seriously.