Model Context Protocol (MCP)
Learn what MCP is, how it standardizes tool and context access for AI systems, and when to use it instead of building custom integrations.
Model Context Protocol, or MCP, is an emerging standard for connecting models to external tools, data sources, and execution environments. It gives assistants a common way to discover capabilities instead of relying on one-off custom integrations for every app.
If tool calling answers the question "can the model use a function?", MCP answers a broader one: "how do we expose tools and context to models in a consistent, portable way?"
Overview
Without a standard, every AI integration tends to invent its own tool format, auth flow, transport, and discovery mechanism. That fragments the ecosystem and makes integrations hard to reuse.
MCP creates a shared protocol for:
- listing available tools and resources
- describing what those tools do
- validating inputs and outputs
- connecting over supported transports
- letting clients and assistants interact with those capabilities in a uniform way
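Concretely, MCP frames these operations as JSON-RPC 2.0 messages. The sketch below shows roughly what tool discovery looks like on the wire; `tools/list` is the method name used by the MCP specification, while the `search_products` tool, its schema, and the `toolNames` helper are invented here for illustration.

```typescript
// Hypothetical wire-level sketch of MCP tool discovery (JSON-RPC 2.0).
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

// The client asks the server what it can do:
const listTools: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// The server answers with self-describing tool descriptors:
const listToolsResult = {
  tools: [
    {
      name: "search_products",
      description: "Search the catalog by keyword and max price.",
      inputSchema: {
        type: "object",
        properties: {
          query: { type: "string" },
          maxPrice: { type: "number" },
        },
        required: ["query"],
      },
    },
  ],
};

// Any client can surface the discovered capabilities uniformly:
function toolNames(result: typeof listToolsResult): string[] {
  return result.tools.map((t) => t.name);
}

console.log(toolNames(listToolsResult)); // names: ["search_products"]
```

Because the descriptors carry their own descriptions and schemas, a client never needs prior knowledge of a specific server to list, validate, and invoke its tools.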
Why this matters
MCP is important because it shifts AI integration from bespoke glue code toward reusable interfaces. That makes it easier to plug assistants into IDEs, local tools, databases, internal systems, or SaaS platforms without redesigning the entire integration each time.
Standardized access
MCP gives models and clients a common contract for tools, resources, and structured interactions.
Portable integrations
The same MCP server can potentially be used by multiple clients instead of being tightly coupled to one product.
Where it connects in these docs
MCP fits naturally alongside Tool calling, Generating text, and assistant-style Chat experiences.
MCP vs regular tool calling
These concepts are related, but they are not identical. Tool calling is the model behavior. MCP is one way to provide tools and context in a standardized format.
| Concept | What it focuses on | Typical question |
|---|---|---|
| Tool calling | Letting the model invoke external capabilities | "Can the model call this function?" |
| MCP | Standardizing how tools and context are exposed | "How should these capabilities be described and connected?" |
A simple mental model
You can think of MCP as a protocol layer between AI clients and the systems they want to use. Instead of every client speaking a different dialect, MCP gives them a shared language.
That usually means three actors:

- An MCP server exposes tools, resources, and context in a standard format.
- An MCP client connects to that server and discovers what is available.
- A model-enabled app uses those capabilities through the client during generation.
AI SDK example
The AI SDK has support for working with MCP clients and feeding discovered tools into generation. This example shows the general shape without tying it to any specific internal product logic.
```typescript
import { createMCPClient } from "@ai-sdk/mcp";
import { Experimental_StdioMCPTransport } from "@ai-sdk/mcp/mcp-stdio";
import { generateText, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";

// Launch a local MCP server as a subprocess and connect over stdio.
const transport = new Experimental_StdioMCPTransport({
  command: "node",
  args: ["./server.js"],
});
const client = await createMCPClient({ transport });

// Discover the server's tools and hand them to the model.
const tools = await client.tools();

const result = await generateText({
  model: openai("gpt-4o"),
  tools,
  prompt: "Find products under $100 and summarize the best options.",
  stopWhen: stepCountIs(5),
});

await client.close();
```

The important idea is that the model does not need hardcoded knowledge of each capability. It can discover and use tools through a consistent protocol.
When MCP is a good fit
MCP is not mandatory for every project. It shines when you want interoperability, reuse, and a cleaner separation between AI clients and backend capabilities.
Use MCP when
You want multiple AI clients to share the same tool surface, or you want to expose capabilities in a more standardized way.
Maybe skip MCP when
You only need one or two internal tools in a single app and a direct tool-calling setup is simpler.
Especially useful for
IDE assistants, internal copilots, local tooling, multi-client ecosystems, and platforms that want plug-in style extensibility.
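For contrast with the "maybe skip MCP" case, a direct tool-calling setup can be as small as a hardcoded registry plus a dispatcher. This dependency-free sketch is illustrative only; the tool name and dispatch shape are invented, and a real app would wire this into its model SDK's tool-calling loop.

```typescript
// Direct tool calling with no protocol layer: the app owns the tool
// surface, so there is nothing to discover at runtime.
type ToolHandler = (args: Record<string, string>) => string;

// A hardcoded registry of the app's one internal tool:
const tools: Record<string, ToolHandler> = {
  getOrderStatus: (args) => `Order ${args.orderId} is shipped.`,
};

// Dispatch a tool call the model requested (name + JSON args):
function dispatch(name: string, args: Record<string, string>): string {
  const handler = tools[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}

console.log(dispatch("getOrderStatus", { orderId: "A-123" }));
// prints: Order A-123 is shipped.
```

If this registry later needs to serve multiple clients, the handlers can move behind an MCP server without rewriting their logic.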
Design considerations
Even with a protocol, good interface design still matters. MCP does not remove the need for careful tool and resource design.
Whether exposed through MCP or not, tools still need clear descriptions, good schemas, and predictable outputs.
Standardized access does not mean unrestricted access. Different tools may require different auth, scoping, or approval flows.
MCP works best when clients can rely on consistent tool names, schemas, and behavior over time.
Let MCP handle the interface layer, while your actual domain logic stays behind well-defined services.
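The considerations above can be made concrete with a small descriptor. This sketch uses plain JSON-Schema-style objects rather than any particular SDK; the tool, its fields, and the `missingRequired` helper are invented for illustration.

```typescript
// A tool descriptor with a clear description, explicit schema, and
// documented defaults and units, so clients can rely on its behavior.
const searchProducts = {
  name: "search_products",
  description:
    "Search the product catalog. Returns at most `limit` matches, " +
    "sorted by relevance. Prices are in USD.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Free-text search terms." },
      maxPrice: { type: "number", description: "Upper price bound in USD." },
      limit: { type: "number", description: "Max results (default 10)." },
    },
    required: ["query"],
  },
} as const;

// A check a client might run before invoking the tool:
function missingRequired(
  schema: { required: readonly string[] },
  args: Record<string, unknown>
): string[] {
  return schema.required.filter((k) => !(k in args));
}

console.log(missingRequired(searchProducts.inputSchema, { maxPrice: 50 }));
// missing fields: ["query"]
```

Descriptions like these double as prompts: they are often the only information the model has when deciding whether and how to call the tool.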
Where MCP fits in a modern AI stack
MCP is easiest to understand when you place it in the bigger picture. It is not a replacement for models, retrieval, or prompting. It is a way to connect them to external capability surfaces.
With tool calling
MCP can supply the tools that the model chooses to call during generation.
With retrieval
An MCP server can expose resources or search interfaces that help the model get better context.
With assistants
IDE copilots, chat assistants, and agent-like systems can all benefit from a standardized integration layer.
Common misconceptions
MCP is powerful, but it helps to be clear about what it does and does not solve. That keeps teams from overcomplicating their architecture too early.
| Misconception | Better framing |
|---|---|
| "MCP replaces tool calling." | MCP is one standardized way to provide tools and context to a model. |
| "MCP automatically makes tools safe." | Safety still depends on auth, validation, permissions, and execution policy. |
| "Every AI app needs MCP." | Many apps can start with direct tools and adopt MCP later if interoperability becomes important. |
Related documentation
If you are learning this capability for the first time, the most useful follow-up is to pair it with tool calling. MCP becomes much easier to reason about when you already understand how models use tools in practice.
Tool calling
Start here if you want the practical foundation for model-driven use of external tools.
Chat
Assistant-style chat experiences are one of the most natural places to expose tool and MCP-powered capabilities.
Generating text
MCP complements generation by supplying external capabilities and context.
Learn more
These references are the best next stop if you want to understand both the protocol and how it plugs into modern AI tooling.