MCP Servers: What They Are (in One Sentence) and Which to Install
An MCP server is a process that exposes tools, resources, and prompts to an LLM client over a defined wire protocol. Here's what to install — and what to skip.
# MCP Servers: What They Are and Which to Install

An MCP server is a process that exposes a typed set of tools, resources, and prompts to an LLM client over a defined wire protocol. That's the whole concept. The rest is "which transport," "which client," and "which server is worth your context budget." If you came here from Google searching "what is an mcp server," you've found the one-sentence answer. The rest of this page is the practical version: how to install one, which to install first, and what the official docs gloss over.
## The wire protocol, the trio, and the transports
Three concepts to internalize.

**The trio.** Every MCP server can expose three kinds of capabilities: tools (functions the LLM can call), resources (read-only data the LLM can reference), and prompts (templates the user can invoke). Most servers ship tools only. Notion's server ships all three.

**The transports.** MCP supports three: stdio (local subprocess, JSON-RPC over stdin/stdout), Streamable HTTP (remote, single endpoint, streams responses), and SSE (Server-Sent Events; deprecated as of 2025). For a new remote server, you write Streamable HTTP. Old tutorials that tell you to write SSE are out of date.

**The clients.** As of 2026 the major MCP clients are Claude Desktop, Claude Code, Cursor, VS Code (Copilot Chat), Windsurf, and ChatGPT (Developer Mode). Each implements the protocol slightly differently in terms of permission UX, but the wire is the same.

## Installing your first MCP server
Pick one server. Don't pick eight — see /topic/mcp-tool-overload for why. For most readers the right first server is microsoft/playwright-mcp (browser automation) or github/github-mcp-server (repo work). Both are vendor-maintained, both have OAuth or PAT-based auth that doesn't require chmod tricks.

### In Claude Code
```bash
claude mcp add playwright -- npx @playwright/mcp@latest
```
Confirm with `claude mcp list`. The full reference is at code.claude.com/docs/en/mcp.
### In Claude Desktop
Edit claude_desktop_config.json (location varies by OS). Add:
```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```
Restart Claude Desktop. The "MCP Servers" indicator in the chat UI should show 1 connected.
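If you later add a second server, each one is another key under `mcpServers`, and credentials go in an `env` block. A hedged sketch — the `github` entry, its package name, and the `GITHUB_PERSONAL_ACCESS_TOKEN` variable are illustrative placeholders, not copied from any server's docs; check the server's README for the exact command:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "github": {
      "command": "npx",
      "args": ["@example/github-mcp@latest"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here" }
    }
  }
}
```

Claude Desktop passes the `env` values to the spawned subprocess, so tokens stay out of the command line.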
### In Cursor
Cursor's MCP support went GA in 2025. Open Cursor → Settings → MCP and add a new server with the same npx @playwright/mcp@latest command. Cursor's permission UX is per-tool, not per-server — expect more prompts than Claude Code and more granularity than Claude Desktop.
## Verifying it works
Use the MCP Inspector — Anthropic's official debugger. It speaks the protocol directly, so you can probe a server without an LLM. If a server works in Inspector but fails in your client, the bug is in the client integration, not the server. This single trick saves hours.
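For the Playwright server from this page, one invocation is enough to get a probe UI — assuming `npx` is on your PATH; the Inspector ships as the `@modelcontextprotocol/inspector` package:

```shell
# Launch the Inspector wired to a local stdio server.
# Everything after the inspector package name is the server command it spawns.
npx @modelcontextprotocol/inspector npx @playwright/mcp@latest
```

The Inspector lists the server's tools, resources, and prompts and lets you call each tool by hand, which is exactly the "does the server itself work" check described above.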
## What to install, in order
The honest order if you're starting from zero.
- One vendor server you use daily. GitHub if you commit code, Notion if you write docs, Linear if you ship tickets.
- One filesystem or browser server. Filesystem (Anthropic reference, post-EscapeRoute fix) for "let the agent read my repo." Playwright for "let the agent test my UI."
- One search server. Brave Search for general web, Exa if you're doing research, Tavily if you're building a RAG app.
That's three. Stop there. If you need to add a fourth, audit the cost first — the `claude mcp` tooling exposes token usage per server. A server that costs 8,000 tokens at session start but that you used twice last week is a deny rule waiting to happen.
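The budget math is worth making concrete. A sketch with illustrative numbers — the per-server costs below are invented within the 5,000–15,000 range this article cites, and the 200,000-token window is Sonnet 4.5's documented size:

```python
# Rough context-budget audit: what a server stack costs before you type anything.
# Per-server token costs are illustrative, not measured.
SONNET_CONTEXT = 200_000

stack = {
    "github": 13_000,
    "playwright": 15_000,
    "notion": 14_000,
    "linear": 12_000,
    "brave-search": 12_000,
}

total = sum(stack.values())
share = total / SONNET_CONTEXT
print(f"{total} tokens at session start = {share:.0%} of the context window")
```

Five mid-weight servers and a third of the window is gone before the first message — which is the arithmetic behind "three is the answer, five is the ceiling."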
The full curated list is at /topic/best-mcp-servers-2026 — 18 servers we'd trust in 2026, plus the ones we wouldn't.
## A note on the four major clients
Each is worth a paragraph, because the integration UX varies meaningfully and determines which servers feel "free" vs. "expensive" on a given workflow.
Claude Code has the most mature CLI surface. `claude mcp add`, `claude mcp list`, `claude mcp remove`, and the in-session `/context` command make audits trivial. Server-level approval is the default; per-tool approval is opt-in. This is the client we recommend for anyone treating MCP setup as something to maintain over time.
Claude Desktop is the easiest first-install client. Config is JSON in claude_desktop_config.json; the UI confirms connections; OAuth servers work out of the box. The downside is that audit and pruning are manual — there's no per-server token cost in the UI, so heavy stacks get expensive without warning.
Cursor treats MCP the way it treats Cursor Rules: a separate primitive, configured in Settings, with its own permission UX. Per-tool prompts are the default, which is more secure and more verbose. Cursor's MCP support went GA in 2025 and is feature-equivalent to Claude Code's, with different defaults around prompting.
ChatGPT Developer Mode is the latecomer. Setup at developers.openai.com/api/docs/mcp. The integration is fully functional but the catalog of pre-configured servers is smaller than Claude's, and the permission UX is the least granular of the four. Reasonable choice if you're already in ChatGPT for other reasons.
## What MCP servers don't do
Three persistent misconceptions worth correcting.
They are not a security boundary. An MCP server runs with whatever permissions you gave it (file access, network access, OAuth scopes). The client prompts you before running tools, but if you click through, the server can do whatever its code does. Trust the maintainer, not the protocol.
They don't replace function calling. Function calling is still how the LLM talks to the server underneath. MCP standardizes the discovery layer (how the LLM finds out what tools exist) and the transport layer (how the calls travel). The tool execution itself is the same. We cover the comparison at /vs/mcp-vs-function-calling.
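That discovery layer is one JSON-RPC call. A `tools/list` response looks roughly like this — the shape follows the MCP spec, but the tool shown is invented for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "browser_navigate",
        "description": "Navigate the browser to a URL.",
        "inputSchema": {
          "type": "object",
          "properties": { "url": { "type": "string" } },
          "required": ["url"]
        }
      }
    ]
  }
}
```

Every one of these name and description strings ends up in the model's context, which is where the token cost discussed below comes from.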
They are not free in tokens. A single MCP server adds 5,000–15,000 tokens to your context window before you say anything. Five servers commonly push past 66,000 — a third of a Sonnet 4.5 context window. This is the tool-overload problem, and it's the #1 reason MCP setups get slower over time.
## Where this fails
Three places we've seen MCP setups break.
1. Schema drift. A server updates its tool descriptions, the LLM's mental model of what's available goes stale, you get hallucinated tool calls. Fix: restart the client after server updates. The protocol has a tools/list_changed notification but not every client implements it.
2. OAuth fatigue. A remote server's OAuth token expires mid-session, the client doesn't surface the error well, you spend 20 minutes debugging "why is the agent ignoring me." Check the client logs first. The Stack Overflow blog on MCP auth confusion is honest about how leaky-by-default these flows are.
3. The "I installed 8 servers" tax. Covered above. Three is the answer. Five is the ceiling.
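The schema-drift fix in item 1 can be made concrete. The spec's notification is `notifications/tools/list_changed`; a client that implements it just invalidates its cached tool list when the notification arrives. A minimal sketch — the `ToolCache` class is illustrative, not a real MCP client:

```python
import json

class ToolCache:
    """Caches a server's tool list; marks it stale on list_changed."""

    def __init__(self):
        self.tools = []
        self.stale = True  # force a tools/list fetch on first use

    def handle_message(self, raw: str):
        msg = json.loads(raw)
        # Notifications are JSON-RPC messages with a method and no id.
        if msg.get("method") == "notifications/tools/list_changed" and "id" not in msg:
            self.stale = True  # re-run tools/list before the next tool call

cache = ToolCache()
cache.stale = False  # pretend we already fetched the tool list
cache.handle_message('{"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}')
print(cache.stale)  # True — the cached schema is now untrusted
```

Clients that skip this handler are the ones where "restart after server updates" is the only fix.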
## What to read next
- /topic/best-mcp-servers-2026 — the 18 we'd trust.
- /topic/mcp-security — what the 66% security-findings figure means in practice.
- /topic/mcp-tool-overload — why three.
- /topic/paid-mcp-servers — the monetized servers that are emerging.
- /for/claude-code — install Claude Code, the most MCP-mature client.
- /for/cursor — install Cursor, the second most MCP-mature client.
## Sources
- Anthropic / modelcontextprotocol.io. Spec 2025-03-26: Transports.
- `modelcontextprotocol/servers` reference repo — 85.5k stars, only 7 reference servers remain.
- Anthropic. Claude Code MCP docs.
- MCP Inspector. Official debugging tool docs.
- Stack Overflow blog. "Is that allowed? Authentication and authorization in MCP", January 2026.
- EclipseSource. Context overload findings.
- OpenAI. MCP developer docs for ChatGPT.
- k2view. Awesome MCP servers summary — 97M monthly SDK downloads (March 2026).
## Frequently asked
- **What is an MCP server, in one sentence?** An MCP server is a process that exposes a typed set of tools, resources, and prompts to an LLM client (Claude Code, Cursor, ChatGPT, VS Code) over a defined wire protocol — stdio for local, Streamable HTTP for remote. Anthropic calls it "USB-C for AI" on modelcontextprotocol.io; the analogy is fair if you accept that USB-C also has security caveats.
- **Do I need to be a developer to use MCP servers?** To install and use, no — Claude Desktop ships first-class MCP support with a UI for adding servers. To run your own server, yes, you'll write code (Python via FastMCP or TypeScript via the official SDK). The most-used MCP servers in 2026 are vendor-provided remotes (Stripe, Linear, Notion, Sentry) that require nothing past OAuth click-through.
- **How is MCP different from a REST API?** Three differences. (1) MCP is discoverable — the client asks the server what tools exist at connect time, so you don't have to hand-code OpenAPI clients. (2) MCP is LLM-native — tool descriptions are written for a model to read, not a developer. (3) MCP has a permission model — clients prompt users before running individual tools, which REST doesn't define. The flip side: MCP carries the schema cost in every session, which is why tool overload is a real pain (see /topic/mcp-tool-overload).
- **Does ChatGPT support MCP?** Yes, as of late 2025 via Developer Mode on ChatGPT Plus and Enterprise. Setup docs are at developers.openai.com/api/docs/mcp. Cursor and VS Code Copilot Chat support it natively. Claude Desktop and Claude Code support it as a first-class primitive.
- **Can I run MCP servers on Windows?** Yes. The reference SDKs (Python, TypeScript) run on Windows directly. stdio-transport servers run as subprocesses Claude Desktop spawns. The one wrinkle: some servers shipped by macOS-first authors hard-code Unix paths in their install instructions — substitute the Windows equivalent or run inside WSL.
- **What's stdio vs SSE vs Streamable HTTP?** Three transport options from the MCP spec. stdio = the client spawns the server as a subprocess and reads/writes JSON-RPC over stdin/stdout (local-only, simplest, default). Streamable HTTP = a single HTTP endpoint that streams responses (the current spec for remote servers; replaces SSE). SSE = Server-Sent Events, the legacy remote transport, deprecated as of mid-2025 per fka.dev. If you're starting a new remote server in 2026, write Streamable HTTP.
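The stdio framing described above is concrete enough to sketch: newline-delimited JSON-RPC 2.0 messages written to the server's stdin. A minimal request builder — the handshake fields follow the 2025-03-26 spec revision cited in the sources, while the `clientInfo` values are placeholders:

```python
import json

def request(req_id, method, params=None):
    """Build one JSON-RPC 2.0 request; over stdio it is written as one line."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# The two messages every MCP session starts with: handshake, then discovery.
initialize = request(1, "initialize", {
    "protocolVersion": "2025-03-26",  # spec revision
    "capabilities": {},
    "clientInfo": {"name": "demo-client", "version": "0.1"},  # placeholders
})
list_tools = request(2, "tools/list")

print(initialize)
print(list_tools)
```

A real client would write each string plus a newline to the subprocess's stdin and read newline-delimited responses back; that loop is all "stdio transport" means.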