Open the modelcontextprotocol.io home page [1] and you will not find a manifesto. You will find an RFC index, an SDK matrix in seven languages, and a list of 10,000+ public servers. Sixteen months after Anthropic announced it on November 25, 2024 [2], MCP has become what its early backers said it would: a JSON-RPC interface for letting models call tools and read data, governed in public, used by everyone with a chat product.
This pillar is the working version of the field map. It covers the four roles in production, the auth spec evolution that finally landed in March 2026 [3], the gateway category that AWS and Microsoft both shipped this year, and the questions a platform team should ask before its third MCP server.
## The three protocol roles, plus the one ops added
The spec defines three participants [1]. A client renders the conversation or workflow to the user: Claude Desktop, Cursor, ChatGPT, Microsoft Copilot, and roughly two dozen others. A server exposes data, prompts, or tools through a JSON-RPC surface. A host runs the model and orchestrates tool calls. Any of the three can be swapped independently. In practice, enterprises rarely want all three swappable; the procurement value is in pinning two and varying one.
The emergent fourth role is the gateway. A gateway is an authorization- and observability-aware proxy that sits between a set of clients and a set of MCP servers. It enforces policy on every tool call, logs the call to a system the audit team can read, and brokers the credentials that the end user should never paste into a prompt. AWS shipped one as AgentCore Gateway [5]. Cloudflare ships one as MCP Server Portals [7]. Most platform teams that buy neither end up building one.
| Role | Spec status | Responsibility | Who owns it |
|---|---|---|---|
| Client | Spec-defined | Renders the conversation, collects input, presents tool affordances | Product team or vendor |
| Host | Spec-defined | Runs the model; orchestrates the tool-call loop | Platform team |
| Server | Spec-defined | Implements the JSON-RPC surface; exposes data, prompts, tools | Domain team (HR, Finance, IT) |
| Gateway | Emergent | Authenticates, authorizes, logs, rate-limits across many servers | Platform + Security, jointly |
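To make the server role concrete, here is a minimal sketch of what a tool invocation looks like on the wire. The `tools/call` method name and the JSON-RPC envelope follow the MCP spec; the tool name `get_employee_record` and its arguments are hypothetical examples, not part of any real server.

```python
import json

# Hypothetical tool name and arguments; only the envelope shape
# (jsonrpc / id / method / params) comes from the MCP spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_employee_record",          # assumed tool name
        "arguments": {"employee_id": "E-1042"}, # assumed schema
    },
}

# A successful response carries content blocks the host feeds back to the model.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Record found: E-1042"}],
        "isError": False,
    },
}

wire = json.dumps(request)
```

Everything a gateway enforces, logs, or rate-limits is ultimately a message of this shape, which is why a single proxy in front of many servers is tractable.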
## What shipped between November 2024 and April 2026
Sixteen months of unusually orderly releases. The original Anthropic announcement landed November 25, 2024, with reference servers for Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer, and SDKs in Python and TypeScript [2]. Block and Apollo were the first two named enterprise adopters in the same announcement.
In March 2025 the spec gained OAuth 2.1 authorization. In June 2025 it formally split MCP servers from authorization servers and required Protected Resource Metadata under RFC 9728 [8]. In November 2025 it adopted Client ID Metadata Documents and made PKCE non-negotiable for every client. Each revision survived a public RFC; none broke the wire format.
On December 9, 2025, Anthropic announced the donation of MCP to a new Agentic AI Foundation under the Linux Foundation, co-founded with Block and OpenAI [4]. Google, Microsoft, AWS, Cloudflare, Bloomberg, and Intuit signed on as supporting members. The neutral-governance question that some CISOs had been quietly raising in security reviews stopped being a question.
"The wire format barely moved in sixteen months. The auth surface changed almost every quarter, and that is the part that determined whether you could deploy this in a regulated environment."
## Authorization, the part security review reads
If your security team has been asking for the auth model in writing, the answer in April 2026 is straightforward enough to pin to a slide. An MCP server acts as an OAuth 2.1 resource server. The MCP client is an OAuth 2.1 client making protected-resource requests on behalf of the user. The host runtime never sees long-lived credentials. [3]
Three concrete things changed in the last six months that matter for procurement. First, the November 2025 spec made PKCE mandatory across all clients (not optional, not configurable). Second, Client ID Metadata Documents replaced ad hoc client registration, which is the change that lets you publish a single client manifest and have every MCP server you talk to validate it identically. Third, the March 15, 2026 revision mandated RFC 8707 resource indicators to prevent token mis-redemption (the attack class where a token issued for tool A is replayed against tool B). [3]
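The PKCE requirement is small enough to show in full. This is a standard-library sketch of the RFC 7636 S256 flow a compliant MCP client performs: generate a random `code_verifier`, derive the `code_challenge` it sends in the authorization request, and hold the verifier back until token redemption.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> a 43-character base64url verifier,
    # within RFC 7636's 43-128 character limit.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # The challenge is the base64url-encoded SHA-256 of the verifier,
    # with padding stripped.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

The authorization server recomputes the challenge from the verifier at redemption time, which is what defeats the authorization-code interception attack PKCE exists for.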
If you are evaluating MCP servers from third parties, the short due-diligence list: do they implement Protected Resource Metadata under RFC 9728, do they enforce PKCE, and do they validate the resource indicator on every token. A vendor that cannot answer these three questions in writing has not read the current spec.
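The first of those three questions can be partly automated. A compliant server publishes Protected Resource Metadata at `/.well-known/oauth-protected-resource`; the sketch below checks a fetched metadata document for the RFC 9728 fields that matter, using the field names from that RFC. The specific findings strings and the strictness of each check are editorial choices, not spec requirements.

```python
def vet_mcp_server(metadata: dict) -> list[str]:
    """Flag gaps in a server's RFC 9728 Protected Resource Metadata.

    `metadata` is the parsed JSON a compliant server publishes at
    /.well-known/oauth-protected-resource. Field names follow RFC 9728;
    the checks themselves are an illustrative due-diligence pass.
    """
    findings = []
    # "resource" is the one member RFC 9728 makes mandatory.
    if "resource" not in metadata:
        findings.append("missing required 'resource' identifier")
    # Without advertised authorization servers, clients cannot discover
    # where to obtain tokens.
    if not metadata.get("authorization_servers"):
        findings.append("no authorization servers advertised")
    # Bearer tokens should travel in the Authorization header, not the query string.
    if "header" not in metadata.get("bearer_methods_supported", ["header"]):
        findings.append("bearer header method not supported")
    return findings
```

PKCE enforcement and resource-indicator validation happen at the authorization server and token endpoint, so those two questions still need a written answer from the vendor rather than a metadata probe.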
## Gateways: where most of the 2026 budget is going
Every platform team with more than three MCP servers ends up building or buying a gateway. The reasons are unexciting: per-tool authorization, a single point at which to log every call, centralized rate limiting, and a hard boundary at which to enforce egress rules. The MCP spec does not require a gateway. Production deployment patterns do.
The vendor list grew quickly. AWS Bedrock AgentCore Gateway became GA on October 13, 2025, in nine regions, and now connects to MCP servers as named targets [5]. Cloudflare's MCP Server Portals consolidate authorized servers behind a single endpoint [7]. The independent landscape includes Kong, Composio, Bifrost, and Zuplo, plus a long tail of in-house gateways at companies that started before the products existed. Most options shipped between September 2025 and Q1 2026. Assume the comparison shifts every quarter.
1. **Tool-level authorization, written as policy.** Which users, under which conditions, can invoke which tools on which servers. Versioned, reviewable, expressed as code (Cedar in AgentCore's case).
2. **Credential brokering.** The gateway holds the Snowflake PAT, the Jira token, the SharePoint secret. The user never sees them; the model never receives them in clear text.
3. **Call observability.** Per-tool latency, per-tool error rate, per-tool outcome flag. Traceable back to a conversation ID and, where possible, to a business event in the calling system.
4. **Egress enforcement.** For regulated workloads, the gateway is the chokepoint that proves PII never left a region and free-form text was never exfiltrated through a tool call.
## If your team is starting MCP work this quarter
Pick one workflow that already has a real metric. Stand up one MCP server for it. Front the server with whichever gateway your procurement team can get through fastest; that almost certainly means whichever cloud you already buy from, which means AgentCore Gateway on AWS or the Foundry MCP Server on Azure. Instrument every tool call with an outcome flag from the first deployment. The programs that scaled in 2025 looked like this; the ones that stalled stood up the registry first and found the use case second.
Two things to skip in the first sixty days. Do not try to publish a server catalog before you have a working green call graph for the first server. Do not write your own authorization layer; use the spec's OAuth 2.1 + RFC 8707 model and the gateway's enforcement of it. Both shortcuts seem like accelerators and almost always cost a quarter.
## Frequently asked
- **What is the Model Context Protocol (MCP)?**
  MCP is an open JSON-RPC specification, originally released by Anthropic on November 25, 2024 [2], that lets a language-model client call tools and read data from an external server in a uniform way. It defines three roles: client (the app), server (the tool/data surface), and host (the model runtime). The protocol was donated to the Agentic AI Foundation under the Linux Foundation on December 9, 2025 [4].
- **Who governs MCP today?**
  Since December 9, 2025, MCP has been governed by the Agentic AI Foundation (AAIF), a directed fund hosted by the Linux Foundation. The AAIF was co-founded by Anthropic, Block, and OpenAI, with supporting members including Google, Microsoft, AWS, Cloudflare, Bloomberg, and Intuit [4]. Day-to-day technical direction stays with the project's maintainers; the foundation handles funding, IP, and trademarks.
- **How does MCP authentication and authorization work?**
  An MCP server acts as an OAuth 2.1 resource server; the MCP client is an OAuth 2.1 client making requests on behalf of the user. As of the March 15, 2026 spec revision, three things are mandatory: PKCE on every client, Protected Resource Metadata under RFC 9728, and resource indicators under RFC 8707 to prevent token mis-redemption between tools [3]. Both AWS AgentCore Gateway and Microsoft Foundry MCP Server align to this spec rather than implementing custom auth.
- **Do I need an MCP gateway?**
  Once you operate more than three MCP servers in the same environment, the answer is almost always yes. Gateways centralize per-tool authorization, log every tool call, broker enterprise credentials so the user never pastes tokens into prompts, and enforce egress rules for regulated data. AWS Bedrock AgentCore Gateway (GA October 13, 2025) [5] and Cloudflare MCP Server Portals [7] are the two cloud-native options. Independent vendors include Kong, Composio, Bifrost, and Zuplo.
- **How is MCP different from RAG?**
  RAG (Retrieval-Augmented Generation) is a pattern for grounding a model's response in a retrieved document set; MCP is a wire protocol for letting a model call tools and fetch context. They solve different problems and are usually used together. A full comparison with implementation guidance is at /comparisons/mcp-vs-rag.
- **Which AI clients support MCP?**
  As of December 2025, MCP has first-class client support across Claude (Anthropic), ChatGPT (OpenAI), Cursor, Gemini, Microsoft Copilot, Visual Studio Code, Visual Studio 2026 Insiders, and approximately twenty other clients [4]. Monthly SDK downloads passed 97 million in the same window. Adoption is broad enough that vendor lock-in concerns no longer block enterprise procurement.