
MCP Server Implementation: Connecting AI Agents to Enterprise Systems


I've spent the last few months building MCP servers that actually connect to real enterprise systems. Not demos. Not proof-of-concepts. Systems that handle sensitive data, enforce access controls, and need to stay up 24/7.

Here's what I've learned about making it work.

Why MCP Matters for Enterprise Integration

Before MCP, developers often had to build custom connectors for each data source or tool, resulting in what Anthropic described as an "N×M" data integration problem. You'd build an integration for Salesforce here, a database connector there, and suddenly you're maintaining dozens of fragile point-to-point connections.

An MCP server acts as a translator that lets an AI agent query and act on external systems without exposing raw data. More importantly, it gives you a single, standardized way to expose your enterprise capabilities to any AI client—Claude, ChatGPT, or your own internal agents.

Increasingly, businesses want a single, consistent way to use their custom capabilities—RAG tools, web search utilities, API connectors, internal automations—across platforms like GPT, n8n, LangChain-based apps, and internal enterprise systems. MCP is the standard that makes that possible.

The Three Critical Patterns

Pattern 1: Transport Layer Decisions

When you build an MCP server, your first decision is the transport layer. I've learned this shapes everything downstream.

Stdio (Local) works great for development and single-user scenarios. The AI agent runs on the same machine, talks to your server over stdin/stdout. Simple, isolated, minimal attack surface.

Streamable HTTP is what you want for enterprise. It's the right fit for remote, shared services and scale-out deployments, but it requires standard web hardening and production SRE practices.

The choice matters. I built a Supabase integration using stdio first—worked fine locally. But when we needed multiple agents accessing it simultaneously from different machines, we had to rebuild it as HTTP. That's a lesson I'd rather you learn from my mistakes.

Pattern 2: Security by Default

Access control is the foundation for any production-ready MCP deployment. MCP servers should enforce strict authentication and authorization to ensure AI tools only access data they are explicitly permitted to use.

Modern MCP implementations now standardize on OAuth 2.1 for HTTP-based transports, replacing custom authentication methods and basic API keys as of 2025. OAuth 2.1 improves token handling, scope enforcement, and session security, making it far better suited for enterprise MCP environments.

But here's the thing: just adding OAuth isn't enough. MCP servers MUST NOT accept any tokens that were not explicitly issued for the MCP server. This sounds obvious until you're debugging why an agent can access your database through the MCP server but not directly—because the token was meant for something else.
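One way to enforce that rule is to check the token's audience claim against the server's own identifier before doing anything else. Here's a minimal sketch, assuming the token has already been signature-verified and decoded; the server identifier is a hypothetical placeholder:

```typescript
// Sketch: reject tokens that were not explicitly issued for this MCP server.
// Assumes the token is already signature-verified and decoded into claims.
interface TokenClaims {
  aud: string | string[]; // audience claim
  exp: number;            // expiry, seconds since epoch
}

const SERVER_AUDIENCE = "https://mcp.example.com"; // hypothetical identifier

function isTokenForThisServer(
  claims: TokenClaims,
  now: number = Date.now() / 1000
): boolean {
  if (claims.exp <= now) return false; // expired tokens are never accepted
  const audiences = Array.isArray(claims.aud) ? claims.aud : [claims.aud];
  // The token must name *this server* as its audience, not merely be valid.
  return audiences.includes(SERVER_AUDIENCE);
}
```

A token minted for some other API can be perfectly valid and still fail this check—which is exactly the behavior you want.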

I've also learned to implement what I call "tool-level access control." Your MCP server might expose 20 different tools. Not every agent should be able to call every tool. Build a permission matrix early. It's easier to add capabilities than to lock things down later.
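The permission matrix can be as simple as a deny-by-default lookup from agent to allowed tools. A minimal sketch, with illustrative agent and tool names:

```typescript
// Sketch: tool-level access control. Agents may only call tools they are
// explicitly granted; everything else is denied by default.
type PermissionMatrix = Record<string, Set<string>>; // agentId -> allowed tools

const permissions: PermissionMatrix = {
  "reporting-agent": new Set(["query"]),
  "ops-agent": new Set(["query", "insert", "update"]),
};

function canCall(agentId: string, tool: string): boolean {
  // Unknown agents and ungranted tools both fall through to false.
  return permissions[agentId]?.has(tool) ?? false;
}
```

The important property is the default: an agent you haven't thought about gets nothing, not everything.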

Pattern 3: Handling State and Idempotency

MCP servers should be stateless by default. Keep execution stateless for scale and resiliency; put state in managed stores (cache, database) with clear TTLs and PII handling, and avoid hidden server-side memory.

This matters because your server might get restarted, scaled horizontally, or hit by traffic spikes. If state lives in memory, you lose it. If an agent retries a request and your server isn't idempotent, you might create duplicate records.

I learned this the hard way with a Vercel integration. An agent would create a deployment, the response would timeout, the agent would retry, and suddenly we'd have two deployments. Now every create/update operation requires an idempotency key. The agent provides it, the server checks it, and the same request called twice produces the same result.
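The check-before-execute pattern looks roughly like this. This is a sketch only—an in-process Map stands in for what should be a shared store (Redis, a database table) with a TTL in production:

```typescript
// Sketch: idempotency-key handling for create/update operations.
// In production the cache lives in a shared store with a TTL,
// not in-process memory.
const results = new Map<string, unknown>();

function executeIdempotent<T>(key: string, operation: () => T): T {
  const cached = results.get(key);
  if (cached !== undefined) return cached as T; // retry: replay first result
  const result = operation();
  results.set(key, result);
  return result;
}
```

The agent supplies the key; a retried request finds the cached result and never runs the operation a second time.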

Real-World Integration Patterns

Database Integration

I built an MCP server for a Supabase database that serves as the data layer for multiple agents. The pattern is straightforward:

  1. Define your tools (query, insert, update, delete) with strict schemas
  2. Validate every input—don't trust the agent's SQL or parameters
  3. Implement row-level security (RLS) at the database level, not the MCP server
  4. Return only the columns the agent needs

The RLS part is critical. Your MCP server is a policy enforcement layer, but the real security lives in your database. If someone compromises the MCP server, they still can't read data they shouldn't have access to.
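Step 2 above—validating every input—can be sketched in plain TypeScript (in practice I'd reach for Zod or JSON Schema; the table allow-list and field names here are illustrative):

```typescript
// Sketch: validate a "query" tool's input before it reaches the database.
interface QueryInput {
  table: string;
  columns: string[];
  limit: number;
}

const ALLOWED_TABLES = new Set(["orders", "customers"]); // hypothetical allow-list

function validateQueryInput(input: unknown): QueryInput {
  const o = input as Partial<QueryInput>;
  if (typeof o.table !== "string" || !ALLOWED_TABLES.has(o.table)) {
    throw new Error("table not allowed");
  }
  if (!Array.isArray(o.columns) || o.columns.some((c) => typeof c !== "string")) {
    throw new Error("columns must be an array of strings");
  }
  if (typeof o.limit !== "number" || o.limit < 1 || o.limit > 1000) {
    throw new Error("limit out of range");
  }
  return { table: o.table, columns: o.columns, limit: o.limit };
}
```

Never trust the agent's parameters: validate, allow-list, and bound everything before it touches your data layer.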

API Gateway Pattern

For enterprises with multiple systems, I've found the gateway pattern works well. Instead of agents connecting directly to MCP servers, they connect to a central gateway that:

  • Routes requests to the right server
  • Enforces organization-wide policies
  • Logs all interactions for compliance
  • Manages credentials and secrets

Direct connections are simple, but a gateway centralizes authN/authZ, routing, catalogs, and policy enforcement across many servers. Use an enterprise MCP gateway when you need centralized security, control, and scale across many servers and tenants—it becomes the single, policy-enforced ingress for agent access to organizational capabilities.
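The routing core of a gateway can be sketched as a namespace lookup—tools are prefixed with the system they belong to, and the gateway resolves the prefix to a downstream server. All names and URLs here are illustrative:

```typescript
// Sketch: gateway routing table mapping tool namespaces to downstream
// MCP servers. Policy checks and logging would sit around this lookup.
const routes: Record<string, string> = {
  crm: "https://mcp-crm.internal",
  warehouse: "https://mcp-warehouse.internal",
};

function routeRequest(tool: string): string {
  // Tools are namespaced, e.g. "crm.lookup_account".
  const namespace = tool.split(".")[0];
  const target = routes[namespace];
  if (!target) throw new Error(`no route for tool: ${tool}`);
  return target;
}
```

In a real gateway this lookup is wrapped with the policy, credential, and audit-logging layers listed above—but the routing itself stays this simple.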

Legacy System Integration

Legacy systems are where MCP really shines. I've wrapped COBOL APIs, old Oracle databases, and mainframe systems with MCP servers. The pattern:

  1. Build a thin adapter layer that translates MCP requests to your legacy system's API
  2. Handle timeouts—legacy systems are slow, and agents will retry
  3. Implement circuit breakers so a failing legacy system doesn't crash your MCP server
  4. Cache aggressively where possible

The key insight: you're not rewriting your legacy system. You're giving it a modern interface.
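The circuit breaker from step 3 can be sketched in a few lines. Thresholds here are illustrative, and a production breaker would also implement a half-open state that periodically probes the legacy system:

```typescript
// Sketch: minimal circuit breaker in front of a slow legacy system.
// After `threshold` consecutive failures, calls fail fast instead of
// piling up against a system that is already down.
class CircuitBreaker {
  private failures = 0;
  constructor(private readonly threshold: number) {}

  call<T>(operation: () => T): T {
    if (this.failures >= this.threshold) {
      throw new Error("circuit open: legacy system unavailable");
    }
    try {
      const result = operation();
      this.failures = 0; // any success resets the counter
      return result;
    } catch (err) {
      this.failures += 1;
      throw err;
    }
  }
}
```

Failing fast matters doubly with agents: a retrying agent against a dead mainframe will otherwise tie up every worker your MCP server has.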

Security Considerations You Can't Ignore

Security is the defining requirement in MCP's enterprise adoption curve. I've identified three specific risks that keep me up at night:

Confused Deputy Problem: Your MCP server has access to sensitive data. An agent makes a request. How do you know the request is legitimate? When an MCP server performs an action triggered by a user's request, there is a risk of a "confused deputy" problem. Ideally, the MCP server should execute this action on behalf of the user and with the user's permission.

Solution: Always propagate user identity through your system. Don't let the MCP server make decisions based on the agent's identity—use the end user's identity.

Data Exfiltration at Scale: The protocol simplifies connectivity to multiple data sources, enabling users to grant agents broad access to sensitive systems without necessarily understanding the security implications. A product manager might connect an agent to their data warehouse, Linear, Jira, email, and customer support system for daily productivity tasks—inadvertently creating an attack surface where malicious instructions injected into any one system can be leveraged to exfiltrate data from all connected systems.

Solution: Implement data loss prevention (DLP) at the gateway level. Monitor what data leaves your systems. Set rate limits per agent and per tool.
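Per-agent, per-tool rate limits can be sketched with a fixed-window counter keyed on both identities. Window size and limits here are illustrative; a shared store would back this in a multi-instance deployment:

```typescript
// Sketch: fixed-window rate limiter keyed by (agent, tool), so one
// compromised or runaway agent can't drain a connected system.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();
  constructor(private readonly limit: number, private readonly windowMs: number) {}

  allow(agentId: string, tool: string, now: number = Date.now()): boolean {
    const key = `${agentId}:${tool}`;
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(key, { windowStart: now, count: 1 }); // new window
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

A sudden burst of reads from one agent against one tool is exactly the exfiltration signature you're trying to catch—this makes it a hard stop, not just a log line.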

Prompt Injection Through Tools: An agent calls an MCP tool, gets data back, uses that data in a prompt. If that data contains malicious instructions, it can manipulate the agent.

Solution: Treat tool responses like untrusted input. Sanitize them. Don't let the agent see raw database content if you can help it—return structured, validated data instead.
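"Structured, validated data instead" can be as blunt as dropping free-text fields and allow-listing enumerated values before the response leaves the server. A sketch with illustrative field names:

```typescript
// Sketch: return structured, validated fields instead of raw rows, so a
// poisoned free-text column can't smuggle instructions into the agent's
// context.
interface RawTicket { id: number; status: string; body: string }
interface SafeTicket { id: number; status: string }

const KNOWN_STATUSES = new Set(["open", "closed", "pending"]);

function toSafeTicket(row: RawTicket): SafeTicket {
  // Drop free-text fields entirely and allow-list enumerated values.
  if (!KNOWN_STATUSES.has(row.status)) throw new Error("unexpected status value");
  return { id: row.id, status: row.status };
}
```

When the agent genuinely needs the free text, pass it through a sanitization step and clearly mark it as untrusted content rather than returning it verbatim.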

Implementation Checklist

When I build an MCP server for production, I follow this checklist:

  1. Define your schema first - Use Zod or JSON Schema. Validate everything.
  2. Implement authentication - OAuth 2.1 for HTTP servers. Use short-lived tokens.
  3. Add rate limiting - Per-agent, per-tool. Protect your upstream systems.
  4. Make it stateless - Use external stores for state. Support idempotency keys.
  5. Log everything - Every tool call, every error, every auth failure. You'll need it for debugging and compliance.
  6. Test with real agents - Use Claude or your actual AI client. Demos are deceptive.
  7. Monitor in production - Track latency, error rates, and unusual access patterns.
  8. Plan for failure - What happens when your upstream system is down? Your MCP server should fail gracefully.

Where to Go From Here

If you're building MCP servers for enterprise, start with a single, well-scoped integration. Get the security patterns right. Then scale horizontally—add more servers, not more complexity to existing ones.

MCP servers succeed in the enterprise when they are treated as durable products: narrowly scoped, strongly governed, observable, and easy to evolve. Favor clarity, safety, and operability over breadth—then scale capabilities through catalogs and consistent patterns rather than bespoke implementations.

I've found that the teams who succeed with MCP are the ones who treat their servers like products, not scripts. They own them. They monitor them. They iterate on them based on how agents actually use them.

For deeper dives into specific patterns, check out my posts on building production-ready AI agents with Claude's MCP and MCP architecture patterns that scale. If you're comparing MCP with traditional integration approaches, I've also written about MCP vs traditional integration architecture decisions.

The difference between a working MCP server and a production one isn't complexity—it's security, governance, and how you handle the edge cases. Get those right, and everything else follows.

Have questions about implementing MCP for your enterprise systems? Get in touch—I'd love to help you think through the architecture.