The open-source way to build MCP servers people actually adopt
Design tools visually, deploy hosted runtime endpoints, and prove real workflows with target-aware Try Chat. Make MCP helps teams go from API spec to a production-ready MCP server in minutes.
Why should you use Make MCP?
Optimized for faster prototyping, safer shipping, and easier demos to internal teams or OSS users.
Ship Faster
Replace boilerplate scaffolding with visual tool configuration and generated TypeScript runtime.
Compose Systems
Merge multiple servers into opinionated compositions for real agent use cases.
Observe Runtime
Track hosted sessions, health, and tool execution traces with built-in observability hooks.
Grow Adoption
Try Chat makes demos concrete by showing tool calls, arguments, and outcomes live.
The Governance Model
Production AI tool access needs the same rigor as cloud infrastructure: identity, portability, and an audit trail—not just prompts and completions.
Identity-aware context
Hosted MCP endpoints support Bearer token access and optional Caller API keys so requests are tied to a principal—not an anonymous LLM session. That combination limits who can invoke tools against sensitive backends and supports per-caller attribution in policy and observability.
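A minimal sketch of how a client might attach both credentials to a hosted-endpoint request. The caller-key header name `X-Caller-Key` is an assumption for illustration; check your endpoint's configuration for the actual name.

```typescript
// Build request headers carrying a platform Bearer token plus an optional
// per-caller API key, so the request maps to a principal rather than an
// anonymous session. "X-Caller-Key" is a hypothetical header name.
function buildAuthHeaders(bearerToken: string, callerKey?: string): Record<string, string> {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${bearerToken}`,
    "Content-Type": "application/json",
  };
  if (callerKey) {
    headers["X-Caller-Key"] = callerKey; // enables per-caller attribution in policy/observability
  }
  return headers;
}
```

With both values set, downstream policy and observability can attribute every tool invocation to that caller.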
Infrastructure portability
Docker is the unit of work for hosted runtimes: build once, ship a constrained container, and run it anywhere your platform allows. The same IaaS mindset you expect from cloud services—reproducible artifacts and predictable boundaries—applies to how MCP servers execute.
Audit logging for AI tools
Observability, reframed for security leaders
Observability captures who called which tool, with what outcome and latency—so you can answer compliance and incident questions: “What did the agent do?” and “Was it allowed?” That turns operational telemetry into the audit trail teams need when AI touches production systems.
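To make this concrete, here is a sketch of the kind of record an audit trail needs in order to answer "what did the agent do?" and "was it allowed?". The field names are illustrative, not the platform's actual schema.

```typescript
// One tool-call trace: who, what, the policy verdict, the outcome, and latency.
interface ToolCallTrace {
  caller: string;        // principal resolved from Bearer token / caller API key
  tool: string;
  allowed: boolean;      // policy decision at call time
  outcome: "success" | "failure";
  latencyMs: number;
  at: string;            // ISO-8601 timestamp
}

// Render a single human-readable audit line from a trace record.
function auditLine(t: ToolCallTrace): string {
  const verdict = t.allowed ? "allowed" : "denied";
  return `${t.at} ${t.caller} -> ${t.tool} [${verdict}] ${t.outcome} (${t.latencyMs}ms)`;
}
```

Because each record carries both the identity and the policy verdict, the same telemetry serves operations (latency, failures) and compliance (who was allowed to do what).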
Everything needed to launch and prove MCP value
Build tools, secure them, deploy hosted endpoints, and validate agent behavior against real runtimes.
Low-code / No-code
Get a working MCP server in under 5 minutes. Configure tools, resources, and prompts in the UI—no YAML, no server code to write. The platform generates a runnable server you can plug into Cursor or any MCP client.
- Tool editor: schema, execution config (REST, CLI, DB, etc.), and auth—all in the UI
- Resources and prompts live next to your tools; context (user_id, org, roles) wired per tool
- Export to Node or Docker when you’re ready; no boilerplate to maintain
Execution types
REST, GraphQL, Webhooks, CLI, Database, JavaScript, Python—and visual flows that chain them.
- REST / GraphQL / Webhook — URL, method, headers; plug in your API
- CLI — kubectl, docker, terraform with allowlists
- Database — connection string + query templates
- Visual Flow — pipeline → tool for multi-step workflows
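As a sketch of what a REST execution config might carry, and how a runtime could expand its URL template at call time. The shape and field names here are assumptions for illustration, not the builder's exported format.

```typescript
// Hypothetical REST execution config: method, URL template, optional headers.
interface RestToolConfig {
  type: "rest";
  method: "GET" | "POST" | "PUT" | "DELETE";
  url: string;                       // may contain {placeholders}
  headers?: Record<string, string>;
}

// Substitute tool arguments into {placeholders}, URL-encoding each value.
function expandUrl(template: string, args: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_, key) => encodeURIComponent(args[key] ?? ""));
}

const getInvoice: RestToolConfig = {
  type: "rest",
  method: "GET",
  url: "https://api.example.com/invoices/{invoice_id}",
};
```

The other execution types follow the same pattern: a declarative config plus a small runtime that binds tool arguments into the target command, query, or request.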
Auth
Configure per-tool auth in the UI. App sign-in is passkey-only (WebAuthn).
- API Key, Bearer token, Basic Auth, OAuth 2.0
- Credentials stored server-side; merged into tool requests at runtime
- No passwords for the builder—passkeys only
Testing
Run tools from the UI against Dev, Staging, or Prod. No redeploy to switch environments.
- Environment dropdown: pick base URL and DB URL per profile
- Mock input + simulated context; see JSON, table, or card output
- Per-tool presets (save/load input + context), dry-run for destructive tools
Export & deploy
Generate a runnable server. Choose target env when you build—URLs are baked in or left in .env.
- Node.js — ZIP with TypeScript, run-with-log script, Cursor-ready
- Docker — Dockerfile + compose, non-root
- GitHub — Push to a repo in one click
- Target environment (Dev / Staging / Prod) at generate time
Hosted runtime
Publish directly to managed containers and use a stable hosted endpoint per user + server.
- Container-per-user/server model with CPU, memory, and PID limits
- Reverse proxy with SSE-safe streaming and no buffering
- Session controls: list, health, restart, stop from Observability
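To illustrate why the proxy must avoid buffering: each Server-Sent Events `data:` line is a complete, immediately usable event. A real client would use an SSE library; this is just a minimal sketch of the framing.

```typescript
// Extract the payloads from SSE-formatted text: every "data:" line is one
// event. If a proxy buffered the stream, these events would arrive in bursts
// instead of as they happen.
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice(5).trim());
}
```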
Try Chat
Run a target-bound LLM chat session against a deployed server, Marketplace item, or Composition.
- Provider switch via config (OpenAI-compatible providers, including Groq)
- Shows tool calls, arguments, latency, success/failure
- Great for demos, QA, and onboarding other developers
Compositions
Combine multiple MCP servers into one. One config, one export.
- Select servers from your dashboard; prefix tool names or merge as-is
- Merge resources and prompts; export a single Node/Docker package
- Use case: Stripe + Salesforce + Slack → one “Sales Agent” MCP
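The prefix-merge step above can be sketched as follows; the data structure is illustrative, not the exported composition format.

```typescript
// Merge tool names from several source servers into one composition.
// With prefixing on, each tool is namespaced by its server so names
// like "stripe_refund" and "slack_post_message" cannot collide.
function mergeTools(
  servers: Record<string, string[]>, // server name -> its tool names
  prefix: boolean,
): string[] {
  const merged: string[] = [];
  for (const [server, tools] of Object.entries(servers)) {
    for (const tool of tools) {
      merged.push(prefix ? `${server}_${tool}` : tool);
    }
  }
  return merged;
}
```

Merging as-is keeps the original names, which is fine when the source servers are known not to overlap.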
Governance
Policy engine so tools aren’t a free-for-all. Rate limits, roles, time windows, approvals.
- Policies per tool: allowed roles, rate limits, time windows
- Approval flows for sensitive actions
- Test tab shows policy decision (allowed / denied / approval required)
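A minimal sketch of a per-tool policy check producing the three decisions the Test tab surfaces. The policy fields are assumptions for illustration; the real engine also covers rate limits.

```typescript
// Hypothetical policy shape: allowed roles, optional approval requirement,
// optional time window (UTC hours, start inclusive, end exclusive).
interface ToolPolicy {
  allowedRoles: string[];
  requiresApproval?: boolean;
  window?: { startHour: number; endHour: number };
}

type Decision = "allowed" | "denied" | "approval_required";

// Evaluate in order: role check, then time window, then approval gate.
function evaluate(policy: ToolPolicy, role: string, hourUtc: number): Decision {
  if (!policy.allowedRoles.includes(role)) return "denied";
  if (policy.window && (hourUtc < policy.window.startHour || hourUtc >= policy.window.endHour)) {
    return "denied";
  }
  return policy.requiresApproval ? "approval_required" : "allowed";
}
```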
Security score
SlowMist MCP Security Checklist baked in. See your grade before you publish.
- 0–100% score and A–F grade; checklist in the Security tab
- Input validation, tool hints, RBAC, rate limiting—all visible in the UI
- Score shown in the marketplace so consumers know what they’re getting
OpenAPI import
Paste a spec or upload a file. Every path becomes a tool. Done.
- Stripe, Slack, GitHub, or your own API—one import, full server
- Auth and schemas inferred; tweak in the builder if you want
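The import step boils down to "each (method, path) pair becomes one tool." A sketch of that mapping, with a naming convention that is an assumption, not necessarily what the importer generates:

```typescript
// Turn an OpenAPI "paths" object into tool names, one per operation.
// e.g. GET /invoices/{id} -> "get_invoices_id" (hypothetical convention).
function toolNamesFromSpec(paths: Record<string, Record<string, unknown>>): string[] {
  const names: string[] = [];
  for (const [path, ops] of Object.entries(paths)) {
    for (const method of Object.keys(ops)) {
      const slug = path.replace(/[{}]/g, "").split("/").filter(Boolean).join("_");
      names.push(`${method}_${slug}`);
    }
  }
  return names;
}
```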
Product Tour in Screenshots
A quick visual walkthrough. Click any card to zoom.
Works with the tools you use
Generated MCP servers run anywhere the Model Context Protocol is supported.
Get started in minutes
Run with Docker. Sign up with email and a passkey — no password.
git clone https://github.com/vdparikh/make-mcp.git
cd make-mcp
docker-compose up --build
Open http://localhost:3000, register with your email, create a passkey, then create your first server or add the demo server to explore.