The universal MCP gateway - pay as you go

Turn any API into an MCP server. Instantly.

100,000+ MCPs ready to use. Or connect your own API in seconds. Fully hosted on a global edge network.

claude_desktop_config.json
{
  "mcpServers": {
    "any-api": {
      "url": "https://flashmcp.dev/api.example.com"
    }
  }
}

That's it. No SDK. No server. No build step.

Works with every MCP client

Claude Desktop Claude Code Cursor Windsurf VS Code Any MCP Client

Three steps. Zero complexity.

FlashMCP handles the hard parts - spec discovery, schema parsing, hosting, caching, routing - so you don't have to.

1

Point to any API

Add a single URL to your MCP client config. Just prepend flashmcp.dev/ to any API hostname.
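For example, to expose a hypothetical API at api.petstore.example (any host that serves an OpenAPI spec), the config entry is just that hostname with the gateway prefix; the server name "petstore" is arbitrary:

```json
{
  "mcpServers": {
    "petstore": {
      "url": "https://flashmcp.dev/api.petstore.example"
    }
  }
}
```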

2

We handle the magic

FlashMCP automatically discovers the API spec, parses every endpoint, resolves complex schemas, and builds optimized tool definitions for your LLM.

3

Your LLM has superpowers

Every API endpoint becomes a callable tool. Your LLM can read, create, update, and delete resources - with perfectly typed parameters.

Everything you need. Nothing you don't.

A fully-managed MCP gateway that works with any API, any LLM client, and any header-based authentication scheme.

🔎

Automatic spec discovery

FlashMCP intelligently discovers your API's OpenAPI specification. No manual configuration needed for thousands of popular APIs.

🌐

100,000+ MCPs. Ready to use.

The world's largest MCP catalog. Over 100,000 public APIs, each instantly available as an MCP server. Pick one, connect it, done. Or bring your own API.

Browse the catalog →

Fully hosted

No servers to deploy. No Docker. No Node.js. No Python. FlashMCP runs on a global edge network - always on, always fast.

🔒

Auth passthrough

Your API keys and tokens are forwarded securely to the upstream API. Authorization, X-API-Key, and custom headers - all supported.
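As a sketch of what that can look like in practice: some MCP clients let you set per-server headers in their config (check your client's docs - the field name varies, and this example assumes a "headers" field is supported). Credentials set there ride along to the upstream API:

```json
{
  "mcpServers": {
    "any-api": {
      "url": "https://flashmcp.dev/api.example.com",
      "headers": {
        "Authorization": "Bearer YOUR_API_TOKEN"
      }
    }
  }
}
```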

🚀

All HTTP methods

Full CRUD support. GET, POST, PUT, PATCH, DELETE - every operation in the spec becomes a callable tool with typed parameters.

🎨

Rich responses

JSON, markdown, images, audio - responses are automatically formatted into native MCP content blocks your LLM understands.

📈

Edge caching

API specs are cached at the edge for blazing-fast repeated requests. Sub-millisecond spec resolution on cache hits.

📑

Smart pagination

Large APIs with hundreds of endpoints are automatically paginated. MCP clients fetch pages seamlessly - no tool overload.

🧰

LLM-optimized schemas

Parameters are flattened into simple, top-level schemas. Your LLM calls create({name, status}) - no nesting.
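As a hypothetical sketch: given an endpoint whose OpenAPI schema nests its fields under a wrapper object like body.item, the generated tool accepts them at the top level, so a tool call looks like this (tool and field names are illustrative):

```json
// Hypothetical flattened tool call - no "body" or "item" wrapper:
{
  "tool": "create",
  "arguments": { "name": "Widget", "status": "active" }
}
```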

Stop deploying MCP servers.

Traditional MCP servers require you to run a local process, manage dependencies, handle updates, and debug connectivity issues. FlashMCP eliminates all of that.

No infrastructure

No Docker containers, no process managers, no port conflicts. Just a URL.

300+ edge locations

Deployed globally. Requests are routed to the nearest edge node for minimal latency.

Always current

Specs are re-fetched automatically. When an API adds endpoints, your tools update too.

Works everywhere

Any MCP client that supports Streamable HTTP can connect. No local setup per machine.

✗ Without FlashMCP
# Install dependencies
npm install express @modelcontextprotocol/sdk
# Write the MCP server (200+ lines)
vim server.ts
# Parse OpenAPI spec manually
# Handle $ref resolution
# Map endpoints to tools
# Build input schemas
# Handle auth forwarding
# Handle pagination
# Handle errors
# Deploy somewhere
docker build -t my-mcp-server .
docker run -p 3000:3000 my-mcp-server
✓ With FlashMCP
{
  "mcpServers": {
    "my-api": {
      "url": "https://flashmcp.dev/api.example.com"
    }
  }
}
// That's it. Done. 🙌

Connect your LLM to anything

Any REST API with an OpenAPI spec. Any MCP-compatible client. Any workflow.

💻 SaaS platforms

Stripe, Twilio, SendGrid, Slack - give your LLM access to your entire stack.

🏗️ Internal APIs

Connect to your company's internal services. If it has an OpenAPI spec, it works.

📊 Data platforms

Query analytics APIs, fetch dashboards, pull reports - all through natural language.

🛠️ DevOps tools

GitHub, Jira, PagerDuty, Datadog - let your LLM manage your dev workflow.

The world's largest MCP catalog

100,000+ MCPs ready to use. Every one searchable, testable, and connected in one click. Or bring your own API.

📚

API Gallery

100,000+ MCP servers at your fingertips. Search by name or description, test any one in the Playground, and connect it to your LLM in seconds.

100,000+ MCP servers indexed and searchable
One-click "Try in Playground" for every API
Continuously updated catalog
Browse the catalog

MCP Playground

Connect to any API, explore every tool it exposes, fill in parameters, and call endpoints live - all from your browser. See exactly what your LLM will get.

Test any API as an MCP server in real time
Copy-paste config snippets for your client
Bring your own spec with ?spec=
Open Playground
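For example, assuming ?spec= takes the URL of your OpenAPI document (the exact value format isn't shown here, so treat this as a sketch), a bring-your-own-spec config could look like:

```json
{
  "mcpServers": {
    "my-api": {
      "url": "https://flashmcp.dev/api.example.com?spec=https://api.example.com/openapi.json"
    }
  }
}
```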

Simple, usage-based pricing

Pay for what you use. No per-seat charges. No hidden fees. One price for every API.

Prepaid credits

$1

per 1,000 requests


Top up as needed - credits never expire
All APIs. Same price.
No subscriptions, no hidden fees
Full API dashboard + analytics
Starts at $5 for 5,000 requests
Get started

Top up from $5

View full pricing details →

100,000+ MCPs. Ready to use.

Browse 100,000+ ready-to-use MCPs. Or connect your own API in seconds. Your LLM gets tools instantly.

Browse 100,000+ MCPs Sign up free