How to build and deliver an MCP server for production

In December 2024, we published a blog post with Anthropic about their then-new specification for running tools with AI agents: the Model Context Protocol, or MCP. Since then, we've seen an explosion in developer appetite to build, share, and run their tools with agentic AI, all using MCP. New MCP clients have popped up, and big players like Google and OpenAI have committed to the standard. Almost immediately, though, early growing pains have created friction when it comes to actually building and using MCP tools. At the moment, we've hit a major bump in the road.
MCP Pain Points
- Runtime:
- Getting up and running with MCP servers is a headache for devs. The standard runtimes for MCP servers rely on specific versions of Python or Node.js, and combining tools means managing those versions on top of whatever extra dependencies each MCP server requires.
- Security:
- Giving an LLM direct access to run software on the host system is unacceptable to devs outside of hobbyist environments. In the event of hallucinations or incorrect output, significant damage could be done.
- Users are asked to configure sensitive data in plaintext JSON files. An MCP config file contains everything your agent needs to act on your behalf, but it likewise centralizes everything a bad actor needs to exploit your accounts (see the example after this list).
- Discoverability:
- The tools are out there, but there isn't a single good place to find the best MCP servers. Marketplaces are beginning to crop up, but developers are still left to hunt down good sources of tools for themselves.
- Later in the MCP user experience, it's easy to accumulate enough servers and tools to overwhelm your LLM, leading to the wrong tools being used and worse outcomes. When an LLM has the right tools for the job, it can execute more efficiently. When it gets the wrong tools, or too many tools to decide between, hallucinations spike while evals plummet.
- Trust:
- When tools are run by LLMs on behalf of the developer, it's critical to be able to trust the publisher of each MCP server. The current MCP publisher landscape looks like a gold rush, and it is therefore vulnerable to supply-chain attacks from untrusted authors.
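To make the config-file pain concrete, here's an illustrative client config in today's model. The shape (an `mcpServers` map of `command`, `args`, and `env`) follows common MCP clients such as Claude Desktop; the two entries shown are reference servers, and the token value is obviously fake:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_plaintext_token_here"
      }
    },
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "/Users/me/data.db"]
    }
  }
}
```

Every entry pins its own runtime (Node.js here, Python there), and the credential sits in plaintext right next to everything else the agent can do on your behalf.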
Docker as an MCP Runtime
Docker is a tried-and-true runtime for stabilizing the environment in which tools run. Instead of managing multiple Node.js or Python installations, anyone with the Docker Engine can run Dockerized MCP servers.
Docker also provides sandboxed isolation for tools, so undesirable LLM behavior can't damage the host configuration. The LLM has no access to the host filesystem, for example, unless a directory is explicitly bound into the MCP server's container.
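As a minimal sketch of what that looks like, assuming the `mcp/time` and `mcp/filesystem` images from the mcp/ namespace on Docker Hub (the reference filesystem server takes its allowed directories as arguments):

```bash
# An MCP server speaking stdio, with no Node.js or Python on the host
# and no access to the host filesystem by default.
docker run -i --rm mcp/time

# Access is opt-in: bind exactly one directory, read-only, and the
# server can reach nothing else on the host.
docker run -i --rm -v "$PWD/docs:/projects:ro" mcp/filesystem /projects
```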
The MCP Gateway
For LLMs to work autonomously, they need to be able to discover and run tools for themselves. That is nearly impossible with this many separate MCP servers: every time a new tool is added, a config file has to be edited and the MCP client reloaded. The current workaround is to build MCP servers that configure other MCP servers, but even that requires reloading. A much better approach is to use just one MCP server: Docker. This MCP server acts as a gateway into a dynamic set of containerized tools. But how can tools be dynamic?
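In practice, the client config collapses to a single entry. A sketch, assuming the `docker mcp gateway run` command from Docker's MCP Toolkit (the exact invocation may differ by version):

```json
{
  "mcpServers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"]
    }
  }
}
```

Tools added or removed behind the gateway appear and disappear dynamically; this one entry never has to change again.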
The MCP Catalog
A dynamic set of tools in one MCP server means that users can go to one place to add or remove MCP tools without modifying any config. This is achieved through a simple UI in Docker Desktop that maintains the list of tools the MCP gateway can serve. Users can configure their MCP clients to use hundreds of Dockerized servers simply by “connecting” to the gateway MCP server.
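The same list can also be managed from a terminal. A hedged sketch, assuming the MCP Toolkit's `docker mcp` CLI plugin (the subcommand names here are our assumption and may vary by release):

```bash
# Inspect what's available, then toggle a server behind the gateway;
# no client config edits and no client restarts.
docker mcp catalog show
docker mcp server enable duckduckgo
docker mcp server disable duckduckgo
```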
Much like Docker Hub, the Docker MCP Catalog delivers a trusted, centralized hub for developers to discover tools. And for tool authors, that same hub becomes a critical distribution channel: a way to reach new users and ensure compatibility with platforms like Claude, Cursor, OpenAI, and VS Code.
Docker Secrets
Finally, to pass access tokens and other secrets to containers securely, we've developed a secrets-management feature as part of Docker Desktop. When configured, a secret is exposed only to the MCP server's container process; it won't appear even when inspecting the running container. Keeping secrets scoped tightly to the tools that need them means you no longer risk a data breach from MCP config files left lying around.
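A sketch of that flow, again assuming the MCP Toolkit's `docker mcp secret` subcommands (command and key names here are our assumption):

```bash
# Store the token once, in Docker Desktop's secret store rather than
# in a JSON config file. The key name is illustrative.
docker mcp secret set github.personal_access_token=ghp_xxxxxxxxxxxx

# At runtime the value is injected only into the matching server's
# container process; inspecting the container won't reveal it.
docker mcp secret ls
```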