The real state of MCP servers in 2026
67,000+ MCP servers exist across public registries. 38.7% require no authentication. 30 CVEs filed in 60 days. MCP is powerful, and the ecosystem is a mess.
Model Context Protocol is the most important infrastructure development in AI tooling since function calling. It lets AI models connect to external services — databases, APIs, file systems, SaaS tools — through a standardized interface. Anthropic created it, but OpenAI, Google, and Microsoft all adopted it. The SDK gets 97 million monthly downloads. Every serious AI coding tool supports it.
MCP won. And the ecosystem that grew around it is in trouble.
The numbers
Let's start with the raw scale. Depending on which registry you ask, there are between 8,000 and 67,000 MCP servers in existence:
- 67,057 servers across 6 public registries, per academic analysis
- 8,013 servers with completed analysis on PulseMCP's curated index
- 4,133 servers on SkillsIndex (up from 425 in mid-2025 — an 873% increase in 10 months)
- 1,864 servers tracked with usage data on FastMCP
What these servers actually do
The median MCP server exposes just 5 tools. Nearly half expose 4 or fewer. Of the 4,126 total tools Bloomberry found across their sample, 52% were read operations and 25% were write operations. The most common tools are generic utilities: search, fetch, ping.
The company profile is surprising: 81% of MCP server operators have fewer than 200 employees. 50% of companies with an MCP server don't have a public-facing API. MCP is their first machine-readable interface. That's both exciting (MCP is unlocking programmatic access to tools that never had APIs) and terrifying (many of these servers were built by teams with no API security experience).
AWS hosts 60% of MCP servers, Google Cloud 12%, Azure 7%. Vercel shows disproportionate adoption at 5% — compared to 2% for traditional APIs — reflecting MCP's strong developer-tool roots. 70% come from B2B companies, but the long tail includes everything from footwear retailers to government agencies.
The security situation is bad
Between January and February 2026, security researchers filed over 30 CVEs targeting MCP servers, clients, and infrastructure. These weren't exotic zero-days. They were missing input validation, absent authentication, and blind trust in tool descriptions. The boring stuff.
Here's the breakdown of those 30 CVEs:
| Category | Share | What it means |
|---|---|---|
| Command/shell injection | 43% | User input passed to shell commands without sanitization |
| Tooling infrastructure | 20% | Flaws in MCP clients, inspectors, and proxies |
| Authentication bypass | 13% | Missing or broken auth |
| Path traversal | 10% | Reading files outside the intended directory |
| Other (SSRF, supply chain) | 14% | Cross-tenant exposure, malicious impersonation |
mcp-remote with a CVSS score of 9.6, affecting a package with nearly 437,000 downloads. It was described as "the first fully documented MCP RCE in the wild."
But the individual CVEs aren't the real story. The systemic findings are:
- 38.7% of MCP servers require no authentication (Bloomberry)
- 22.9% have unrestricted CORS policies — any origin can make requests
- Only 2.4% implement rate limiting
- 82% of 2,614 implementations use file operations vulnerable to path traversal (security survey)
- 67% have some form of code injection risk
If the people who created the protocol shipped a vulnerable reference implementation, what chance does a footwear retailer have?
Why MCP security is structurally hard
MCP's security challenges aren't bugs. They're architectural properties.
MCP servers are trust amplifiers. When a user installs an MCP server, they're giving their AI agent access to a capability. The agent can then invoke that capability based on natural language instructions. The user approved the installation, but they didn't approve every individual invocation. This is the same "ambient authority" problem that operating systems solved decades ago with permission models — but MCP doesn't have one yet. Tool descriptions are a prompt injection surface. An MCP server declares its tools via JSON descriptions. Those descriptions are read by the AI model. A malicious server can embed instructions in its tool descriptions that manipulate the model's behavior. CVE-2025-54136 ("MCPoison") demonstrated this against Cursor: a server could silently update its tool descriptions after initial approval, injecting new behavior without user notification. There's no code signing or content verification. When younpm install an MCP server, you're trusting the package author. There's no registry-level review, no static analysis gate, no sandbox. The Postmark MCP supply chain attack demonstrated this: a malicious package impersonated a legitimate email service provider's MCP server, and users installed it because the name looked right.
Authentication is optional and usually absent. The MCP spec doesn't require authentication. Server authors can implement it, but 38.7% don't. For servers that handle sensitive operations — fund transfers, interview scheduling, file management — this is indefensible.
What's being done about it
The community response has been significant but fragmented:
- Anthropic published security best practices for MCP server development
- The Vulnerable MCP Project maintains a comprehensive security database
- AgentSeal validated 6 high-profile vulnerabilities with working exploits, moving beyond static analysis
- Endor Labs published a detailed appsec analysis arguing MCP needs traditional application security practices
- Red Hat published a controls framework for enterprise MCP deployment
What we think needs to happen
We're going to be opinionated here, because this matters.
1. MCP servers need quality gates before distribution, not after exploitation.The current model is: publish anything, let users discover problems. This worked for npm packages when the worst case was a broken build. When the worst case is exfiltrating credentials through a prompt injection, "publish first, audit later" is not acceptable.
2. Security scanning needs to be automated and continuous.Static analysis can catch the 43% of vulnerabilities that are command injection and the 10% that are path traversal. That's more than half the CVEs from January-February 2026. It's not a research problem. It's an infrastructure problem.
3. Tool descriptions should be immutable after approval.The MCPoison attack worked because servers could update their tool descriptions silently. Once a user approves a tool, the description they approved should be locked. Any change should require re-approval.
4. Authentication should be mandatory for servers that write data.Read-only MCP servers (documentation lookups, search) have a different risk profile than servers that can send emails, transfer funds, or modify files. The spec should distinguish between these and require authentication for anything with write capabilities.
Where RuleSell fits
We built RuleSell to be the quality layer the MCP ecosystem is missing. Every MCP server listed on RuleSell goes through our automated quality pipeline:
- Security scan: Static analysis for command injection, path traversal, SSRF, and authentication bypass. This alone would have caught the majority of the 30 CVEs filed in early 2026.
- Schema cleanliness: We validate the MCP capability definition against the spec. Surprising how many servers declare tools that don't match their actual implementation.
- Install success rate: We test first-run installs on clean environments. If it doesn't work out of the box, it doesn't ship.
- Freshness tracking: Stale MCP servers rot. We track update frequency and surface it in the Quality Score.
We're not trying to replace the registries. We're trying to be the filter between the registry and your dev environment. The place where you can install an MCP server and know it's been scanned, tested, and verified.
Browse verified MCP servers on RuleSell, or read about our quality scoring model.
If you build MCP servers, learn how we score quality and avoid the anti-patterns we reject. For the broader context on why prompt and config quality matters more than people think, read Why prompt engineering matters more than model selection. Explore the full catalog to see what's already verified.