Now in beta — 1,000 free requests/month

Turn any API into an MCP server. Instantly.

One URL. Zero configuration. Your LLM gets tools for every endpoint — fully hosted on a global edge network.

claude_desktop_config.json
{
  "mcpServers": {
    "any-api": {
      "url": "https://flashmcp.dev/api.example.com"
    }
  }
}

That's it. No SDK. No server. No build step.

Works with every MCP client

Claude Desktop · Claude Code · Cursor · Windsurf · VS Code · Any MCP Client

Three steps. Zero complexity.

FlashMCP handles the hard parts — spec discovery, schema parsing, hosting, caching, routing — so you don't have to.

1. Point to any API

Add a single URL to your MCP client config. Just prepend flashmcp.dev/ to any API hostname.
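
For example, Stripe (one of the providers mentioned below) serves its public API from api.stripe.com, so the gateway URL is just that hostname with the prefix added. The server name here is arbitrary and purely illustrative:

{
  "mcpServers": {
    "stripe": {
      "url": "https://flashmcp.dev/api.stripe.com"
    }
  }
}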

2. We handle the magic

FlashMCP automatically discovers the API spec, parses every endpoint, resolves complex schemas, and builds optimized tool definitions for your LLM.
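
For illustration, a generated tool for a hypothetical GET /pets/{petId} endpoint might look roughly like this. The name, description, and fields are a sketch of the general MCP tool shape, not FlashMCP's exact output:

{
  "name": "get_pet",
  "description": "GET /pets/{petId}: retrieve a single pet by ID",
  "inputSchema": {
    "type": "object",
    "properties": {
      "petId": { "type": "string", "description": "ID of the pet to fetch" }
    },
    "required": ["petId"]
  }
}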

3. Your LLM has superpowers

Every API endpoint becomes a callable tool. Your LLM can read, create, update, and delete resources — with perfectly typed parameters.
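
Under the hood this is a standard MCP tools/call request. The sketch below assumes a hypothetical update_pet tool generated from a PATCH endpoint, with illustrative argument values:

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "update_pet",
    "arguments": { "petId": "42", "status": "adopted" }
  }
}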

Everything you need. Nothing you don't.

A fully-managed MCP gateway that works with any API, any LLM client, and any authentication method.

🔎 Automatic spec discovery

FlashMCP intelligently discovers your API's OpenAPI specification. No manual configuration needed for thousands of popular APIs.

🌐 2,500+ APIs ready

Pre-indexed directory of over 2,500 APIs across 677 providers. Point and connect — specs are resolved instantly from our global catalog.

Fully hosted

No servers to deploy. No Docker. No Node.js. No Python. FlashMCP runs on a global edge network — always on, always fast.

🔒 Auth passthrough

Your API keys and tokens are forwarded securely to the upstream API. Authorization, X-API-Key, and custom headers — all supported.
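
How you attach those headers depends on your MCP client. Many clients accept a headers field on an HTTP server entry; the sketch below assumes such a field and uses a placeholder token, so check your client's documentation for the exact syntax:

{
  "mcpServers": {
    "my-api": {
      "url": "https://flashmcp.dev/api.example.com",
      "headers": {
        "Authorization": "Bearer YOUR_API_TOKEN"
      }
    }
  }
}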

🚀 All HTTP methods

Full CRUD support. GET, POST, PUT, PATCH, DELETE — every operation in the spec becomes a callable tool with typed parameters.

🎨 Rich responses

JSON, markdown, images, audio — responses are automatically formatted into native MCP content blocks your LLM understands.
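
For reference, an MCP tool result carries a list of content blocks along these lines (the values are illustrative, not actual FlashMCP output):

{
  "content": [
    { "type": "text", "text": "{ \"id\": 42, \"status\": \"available\" }" },
    { "type": "image", "data": "<base64-encoded PNG>", "mimeType": "image/png" }
  ]
}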

📈 Edge caching

API specs are cached at the edge for blazing-fast repeated requests. Sub-millisecond spec resolution on cache hits.

📑 Smart pagination

Large APIs with hundreds of endpoints are automatically paginated. MCP clients fetch pages seamlessly — no tool overload.
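
Pagination follows MCP's standard cursor mechanism: a tools/list response can include a nextCursor, which the client sends back to request the next page. An illustrative sketch, with placeholder values:

{
  "tools": [ ...first page of tool definitions... ],
  "nextCursor": "tools-page-2"
}

The client then repeats tools/list with { "cursor": "tools-page-2" } to fetch the next page.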

🧰 LLM-optimized schemas

Parameters are flattened into simple, top-level schemas. Your LLM calls create({name, status}) — no nesting.
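
As an illustration with made-up field names, a request body nested like { "data": { "attributes": { "name": ..., "status": ... } } } would surface to the model as a flat input schema:

{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "status": { "type": "string", "enum": ["active", "archived"] }
  },
  "required": ["name"]
}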

Stop deploying MCP servers.

Traditional MCP servers require you to run a local process, manage dependencies, handle updates, and debug connectivity issues. FlashMCP eliminates all of that.

No infrastructure

No Docker containers, no process managers, no port conflicts. Just a URL.

300+ edge locations

Deployed globally. Requests are routed to the nearest edge node for minimal latency.

Always current

Specs are re-fetched automatically. When an API adds endpoints, your tools update too.

Works everywhere

Any MCP client that supports Streamable HTTP can connect. No local setup per machine.

✗ Without FlashMCP
# Install dependencies
npm install express @modelcontextprotocol/sdk
# Write the MCP server (200+ lines)
vim server.ts
# Parse OpenAPI spec manually
# Handle $ref resolution
# Map endpoints to tools
# Build input schemas
# Handle auth forwarding
# Handle pagination
# Handle errors
# Deploy somewhere
docker build -t my-mcp-server .
docker run -p 3000:3000 my-mcp-server
✓ With FlashMCP
{
  "mcpServers": {
    "my-api": {
      "url": "https://flashmcp.dev/api.example.com"
    }
  }
}
// That's it. Done. 🙌

Connect your LLM to anything

Any REST API with an OpenAPI spec. Any MCP-compatible client. Any workflow.

💻 SaaS platforms

Stripe, Twilio, SendGrid, Slack — give your LLM access to your entire stack.

🏗️ Internal APIs

Connect to your company's internal services. If it has an OpenAPI spec, it works.

📊 Data platforms

Query analytics APIs, fetch dashboards, pull reports — all through natural language.

🛠️ DevOps tools

GitHub, Jira, PagerDuty, Datadog — let your LLM manage your dev workflow.

Simple, usage-based pricing

Pay for what you use. No per-seat charges. No hidden fees. One price for every API.

Pay as you go

$1 per 1,000 requests


1,000 free requests/month
All APIs. Same price.
No per-seat charges
Full API dashboard
Usage analytics
Start for free

No credit card required

View full pricing details →

Ready to connect any API?

Start with 1,000 free requests. No credit card. No setup. Your LLM gets tools in under 60 seconds.

Get started free · Read the docs