What Is MCP? Model Context Protocol Explained

Last updated March 2026 · 8 min read

The Model Context Protocol (MCP) is an open standard that defines how AI models connect to external data sources and tools. Created by Anthropic in November 2024, MCP has since been adopted by OpenAI, Google, Microsoft, and the Linux Foundation, making it a common interface between AI assistants and the outside world.

Think of MCP as USB-C for AI — before USB-C, every device had its own proprietary connector. Before MCP, every AI integration required custom code. MCP standardizes the connection so any AI model can work with any tool through a single protocol.

The Problem MCP Solves

AI models are powerful at reasoning and generating text, but they are isolated from the real world. They cannot access your files, query your database, call your APIs, or interact with your development tools — unless someone builds a custom integration for each combination of AI model and external service.

Before MCP, if you wanted Claude to access your PostgreSQL database, you wrote a custom tool. If you also wanted GPT-4 to access the same database, you wrote another custom tool with a different API format. Each AI provider had its own tool-calling convention, and integrations were not portable between models.

MCP eliminates this fragmentation. You build one MCP server for your PostgreSQL database, and every MCP-compatible AI client — Claude, ChatGPT, Copilot, Cursor, Windsurf — can use it without modification.

How MCP Works

MCP uses a client-server architecture with three core concepts:

  • Tools — Functions the AI can call. Example: query_database(sql) executes a SQL query and returns results.
  • Resources — Data the AI can read. Example: file:///project/README.md exposes a file's contents.
  • Prompts — Reusable prompt templates. Example: code_review(diff) provides a structured code review prompt.
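Concretely, a server advertises each tool with a name, a human-readable description, and a JSON Schema for its input. A minimal sketch of what the `query_database` example above might look like in a server's reply to a `tools/list` request (field names follow the MCP specification; the tool itself and its schema are illustrative):

```python
import json

# Hypothetical tool declaration, as a server might advertise it
# in response to a "tools/list" request. The field names
# (name, description, inputSchema) follow the MCP specification;
# the query_database tool is the illustrative example from the text.
tool = {
    "name": "query_database",
    "description": "Execute a read-only SQL query and return the rows.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "The SQL query to run."}
        },
        "required": ["sql"],
    },
}

# A tools/list result wraps the declarations in a "tools" array.
print(json.dumps({"tools": [tool]}, indent=2))
```

The JSON Schema is what lets the client (and the model) know which arguments the tool expects before calling it.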

An MCP server exposes tools, resources, and prompts. An MCP client (like Claude Desktop or Cursor) connects to servers and presents their capabilities to the AI model. The model decides when and how to use them.

Transport Options

MCP supports multiple transport mechanisms for client-server communication:

  • stdio — The client launches the server as a local subprocess and communicates via stdin/stdout. Best for local tools (filesystem, databases, CLI commands).
  • Streamable HTTP — The server runs as an HTTP endpoint. The client sends requests and receives streaming responses. Recommended for cloud-hosted servers.
  • SSE (Server-Sent Events) — Legacy transport using HTTP + server push. Being replaced by Streamable HTTP but still widely supported.
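In practice, most clients are pointed at servers through a small JSON config file that names each server and tells the client how to launch or reach it. A sketch in the style of Claude Desktop's `claude_desktop_config.json` for a stdio server (the directory path is a placeholder; `@modelcontextprotocol/server-filesystem` is the official filesystem server package; clients that support remote servers typically take an endpoint URL instead of a launch command, with the exact key varying by client):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```

With this entry in place, the client spawns the server as a subprocess on startup and speaks MCP to it over stdin/stdout.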

Who Uses MCP?

As of March 2026, MCP is supported by:

  • Anthropic — Creator of MCP. Claude Desktop and the Claude API fully support MCP.
  • OpenAI — Added MCP support to ChatGPT, Agents SDK, and Responses API in March 2025.
  • Google DeepMind — Confirmed MCP support in Gemini models in April 2025.
  • Microsoft — Windows 11 has native MCP support. Copilot Studio uses MCP for tool integration.
  • IDE tools — Cursor, VS Code Copilot, Windsurf, and Continue all support MCP servers.
  • Linux Foundation — MCP was donated to the Agentic AI Foundation in December 2025, with Anthropic, OpenAI, AWS, Google, and Microsoft as platinum members.

The MCP Ecosystem

The MCP ecosystem has grown rapidly. As of early 2026, there are over 10,000 MCP servers available — from official servers for common services (GitHub, PostgreSQL, Brave Search, filesystem) to community-built servers for specialized domains (Figma, Notion, Jira, Slack, and thousands more).

The official MCP SDK is available in TypeScript and Python, with community SDKs for Rust, Go, Java, and other languages.

Getting Started

Ready to build your first MCP server? Check out our MCP Quickstart Guide. Need to configure an existing server? Use our MCP Config Generator to create the right configuration for your AI client.
