MCP Protocol Deep Dive: Connecting AI Agents to External Systems


I've spent the last few months building production AI agents, and I keep coming back to the same insight: the hardest part isn't the model. It's connecting it to everything it needs to actually work.

That's where MCP comes in.

The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 to standardize the way AI systems integrate and share data with external tools, systems, and data sources.

But here's what matters: it's not just another protocol. It's the difference between building a demo and building something that scales.

Let me walk you through what I've learned building with MCP, and show you how to actually connect your AI agents to the systems that matter.

What MCP Actually Solves

Before MCP, connecting an AI agent to an external system meant writing custom integration code. Every time you wanted Claude to talk to a new database, API, or service, you had to build a fresh connector. Test it. Maintain it. Hope nothing broke when things updated.

Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture.

Think of MCP like a universal translator for your tech stack.

It doesn't matter whether you're using Google Drive, Salesforce, or a custom legacy system — MCP creates a standardized language that AI agents understand.

The real win?

Organizations adopting MCP have reported 40-60% faster agent deployment times.

How MCP Works: The Architecture

MCP operates on a simple client-server model.

MCP is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. Developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.

Here's what that looks like in practice:

  1. Claude acts as the client - It sends requests asking "what tools do you have?" and "execute this tool"
  2. Your MCP server responds - It exposes the tools (functions) Claude can call
  3. Tools execute against your systems - Database queries, API calls, file operations, whatever you need

The protocol uses JSON-RPC 2.0 messages for all communication between clients and servers, which standardizes how additional context and tools get plugged into AI applications.

The beauty is that this works the same way whether you're connecting to Postgres, Salesforce, or a custom internal system. The protocol doesn't care. It just moves data and executes functions.
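Concretely, here's what the first half of that handshake looks like as JSON-RPC 2.0 objects. The message shapes follow the MCP spec; the tool shown is illustrative:

```typescript
// The client asks the server what tools it exposes
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// The server answers with every tool, including its input schema
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "query_customers",
        description: "Query customer data from the database",
        inputSchema: { type: "object", properties: {} },
      },
    ],
  },
};
```

A follow-up tools/call request then names one of those tools and passes arguments matching its schema.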

Building Your First MCP Server

Let me show you a practical example. Here's a minimal MCP server that exposes a database query tool:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  {
    name: "database-server",
    version: "1.0.0",
  },
  {
    // Declare that this server exposes tools
    capabilities: { tools: {} },
  }
);

// Define what tools Claude can use
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "query_customers",
        description: "Query customer data from the database",
        inputSchema: {
          type: "object",
          properties: {
            limit: {
              type: "number",
              description: "Number of results to return",
            },
            status: {
              type: "string",
              enum: ["active", "inactive", "pending"],
              description: "Filter by customer status",
            },
          },
          required: ["limit"],
        },
      },
    ],
  };
});

// Handle tool execution
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "query_customers") {
    const { limit, status } = request.params.arguments ?? {};

    // Your actual database query here
    const results = await queryDatabase(
      `SELECT * FROM customers WHERE status = ? LIMIT ?`,
      [status || "active", limit]
    );

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(results),
        },
      ],
    };
  }

  return {
    content: [
      {
        type: "text",
        text: "Tool not found",
      },
    ],
    isError: true,
  };
});

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

This is the foundation. You define what tools Claude can use, then handle the execution.

The protocol ships with official SDKs for Python, TypeScript, C#, Java, and other languages.

Authentication: The Critical Part

Here's where most MCP implementations fail. You can't just expose your database to the internet and hope for the best.

Many cloud-based MCP servers require authentication. Claude Code supports OAuth 2.0 for secure connections.

I've found the most practical pattern is OAuth 2.0 with dynamic client registration. Here's why: it lets Claude authenticate itself without you managing static credentials.

When Claude CLI connects to your MCP server, it first receives a 401 response with a WWW-Authenticate header pointing to your OAuth metadata. The CLI then discovers your authorization server, registers itself as a client using Dynamic Client Registration, opens a browser for user authorization, and exchanges the authorization code for access tokens.

The flow looks like this:

  1. Claude requests your MCP server
  2. Server returns 401 with OAuth metadata endpoint
  3. Claude discovers your OAuth provider (Auth0, PropelAuth, etc.)
  4. Claude dynamically registers as an OAuth client
  5. User authorizes in their browser
  6. Claude gets an access token
  7. Claude includes token in subsequent requests
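Step 2 is the part you implement server-side. Here's a minimal sketch of that decision; the metadata URL is a hypothetical example, and real token validation is elided:

```typescript
// Decide how to answer an incoming MCP request based on its bearer token
type AuthCheck =
  | { status: 401; wwwAuthenticate: string }
  | { status: 200; token: string };

// Hypothetical protected-resource metadata endpoint for this server
const METADATA_URL =
  "https://mcp.example.com/.well-known/oauth-protected-resource";

function checkAuth(authorization: string | undefined): AuthCheck {
  if (!authorization?.startsWith("Bearer ")) {
    // No token: challenge the client and point it at our OAuth metadata
    return {
      status: 401,
      wwwAuthenticate: `Bearer resource_metadata="${METADATA_URL}"`,
    };
  }
  // Token present: verify signature, expiry, audience, and scopes before serving
  return { status: 200, token: authorization.slice("Bearer ".length) };
}
```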

For a production setup, I recommend:

  • Use OAuth 2.1 with PKCE - Handles the auth code flow securely
  • Implement dynamic client registration - Claude registers itself automatically
  • Scope your permissions - "read:customers", "write:orders", etc.
  • Validate tokens on every request - Don't trust the client
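For the scoping bullet, I map each tool to a required scope and deny anything unlisted. A sketch; the scope names and the space-separated "scope" claim format are assumptions about your OAuth provider:

```typescript
// Map each MCP tool to the OAuth scope it requires (illustrative names)
const REQUIRED_SCOPES: Record<string, string> = {
  query_customers: "read:customers",
  create_order: "write:orders",
};

// tokenScopes is the token's "scope" claim, e.g. "read:customers write:orders"
function hasScope(tokenScopes: string, toolName: string): boolean {
  const required = REQUIRED_SCOPES[toolName];
  if (!required) return false; // unknown tool: deny by default
  return tokenScopes.split(" ").includes(required);
}
```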

In short: implement proper authentication (OAuth or API keys), use HTTPS for all connections, restrict file system access to specific directories, and add permission checks for sensitive operations.

Connecting to Real Systems

Now let's talk about what you actually connect MCP to. I've seen three patterns that work:

1. Databases

The most common use case.

Pre-built MCP servers are available for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.

For Postgres, your MCP server becomes a query interface. Claude can ask natural language questions, and your server translates them to SQL. You control what tables Claude sees, what operations are allowed, all through tool definitions.
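One way to enforce that control is to never pass raw SQL through at all. A sketch; the table names are illustrative, and I'd still run it against a read-only database role:

```typescript
// Only these tables are visible to the agent
const ALLOWED_TABLES = new Set(["customers", "orders"]);

function buildQuery(table: string, limit: number): string {
  // Identifiers can't be parameterized in SQL, so allowlist them instead
  if (!ALLOWED_TABLES.has(table)) {
    throw new Error(`table not exposed to the agent: ${table}`);
  }
  // Clamp the limit so one tool call can't dump an entire table
  const safeLimit = Math.min(Math.max(1, Math.floor(limit)), 100);
  return `SELECT * FROM ${table} LIMIT ${safeLimit}`;
}
```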

2. APIs

MCP lets your AI talk to APIs the same way a developer might. Want it to fetch a GitHub issue? Done. Create a pull request? Easy. Update a Notion doc or send a Slack message? No problem.

The pattern: map API endpoints to MCP tools. Each endpoint becomes a callable function with proper input validation.
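Here's that pattern for fetching a GitHub issue. The REST endpoint is GitHub's public API; the return shape mirrors MCP tool responses:

```typescript
// Build the GitHub REST URL for an issue (kept pure so it's easy to test)
function issueUrl(owner: string, repo: string, issueNumber: number): string {
  return `https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}`;
}

// Tool handler: call the endpoint, return only what the agent needs
async function getIssue(owner: string, repo: string, issueNumber: number) {
  const res = await fetch(issueUrl(owner, repo, issueNumber), {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!res.ok) {
    return {
      content: [{ type: "text", text: `GitHub returned ${res.status}` }],
      isError: true,
    };
  }
  const issue = await res.json();
  // Trim the payload: the agent rarely needs every field GitHub returns
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({ title: issue.title, state: issue.state }),
      },
    ],
  };
}
```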

3. Custom Tools

This is where I've seen the most interesting applications. You can expose any code as a tool:

  • Document processing pipelines
  • Image generation
  • Complex business logic
  • Real-time data aggregation

Here's a practical example with a search tool:

{
  name: "search_documents",
  description: "Search your internal documentation",
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "Search query",
      },
      limit: {
        type: "number",
        description: "Max results (1-50)",
      },
    },
    required: ["query"],
  },
}

Claude calls this, your server runs the search (using embeddings, full-text search, whatever), and returns results. Claude then uses those results to answer the user's question.

Integration Patterns That Actually Work

I've found a few patterns that consistently work in production:

Pattern 1: Read-Only Access

Start here. Your MCP server only reads data. No writes, no side effects. This is safe to test with.

RAG, by comparison, is one-way: it retrieves relevant information and feeds it into the model before generating a response. An MCP server in read-only mode covers that same ground, and you can expand from there.

Pattern 2: Controlled Writes

Once you're comfortable, add write operations. But be strict about it:

  • Only expose specific operations (create order, send email, update status)
  • Require explicit confirmation for sensitive actions
  • Log everything
  • Implement rate limits
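Those four rules fit in one guard wrapped around every write tool. A sketch; the tool names are illustrative, and the in-memory counter would be Redis or similar in production:

```typescript
const auditLog: string[] = [];
const SENSITIVE_TOOLS = new Set(["send_email", "delete_record"]);
const MAX_CALLS = 30; // per window; window-reset logic omitted
let calls = 0;

function guardWrite(
  tool: string,
  args: unknown,
  confirmed: boolean
): { allowed: boolean; reason?: string } {
  if (++calls > MAX_CALLS) {
    return { allowed: false, reason: "rate limit exceeded" };
  }
  if (SENSITIVE_TOOLS.has(tool) && !confirmed) {
    // Sensitive actions need an explicit user confirmation first
    return { allowed: false, reason: "confirmation required" };
  }
  // Log every write that goes through
  auditLog.push(`${new Date().toISOString()} ${tool} ${JSON.stringify(args)}`);
  return { allowed: true };
}
```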

Pattern 3: Multi-Step Workflows

This is where MCP shines.

Using MCP, you can build AI agents that automate entire workflows. For example, an agent could sync customer data from your platform into your CRM, draft and send an email, read the reply, and send the follow-up, all without you lifting a finger.

Your agent chains multiple tool calls together, maintaining context across steps.
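The chaining loop itself is simple. Here's its shape, with callModel and executeTool as placeholders for your model client and your MCP tool dispatch:

```typescript
type Step =
  | { kind: "tool_call"; tool: string; args: unknown }
  | { kind: "final"; text: string };

async function runWorkflow(
  goal: string,
  callModel: (history: string[]) => Promise<Step>,
  executeTool: (tool: string, args: unknown) => Promise<string>
): Promise<string> {
  const history = [goal];
  // Hard cap so a confused agent can't loop forever
  for (let i = 0; i < 10; i++) {
    const step = await callModel(history);
    if (step.kind === "final") return step.text;
    // Feed each tool result back into the history so context carries across steps
    const result = await executeTool(step.tool, step.args);
    history.push(`${step.tool} -> ${result}`);
  }
  throw new Error("workflow exceeded step budget");
}
```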

Security Considerations

In April 2025, security researchers released an analysis that concluded there are multiple outstanding security issues with MCP, including prompt injection, tool permissions that allow for combining tools to exfiltrate data, and lookalike tools that can silently replace trusted ones.

This isn't a deal-breaker. It means you need to be thoughtful:

  1. Validate all inputs - Treat Claude's requests like user input
  2. Scope permissions tightly - Each tool should do one thing
  3. Audit tool combinations - Think about what happens if Claude chains tools together
  4. Use sandboxing - Run sensitive operations in isolated environments
  5. Monitor execution - Log all tool calls and results
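Item 1 in practice: check the arguments Claude sends against your declared schema before executing anything. Hand-rolled here for the query_customers tool from earlier; a schema library like zod does the same job with less code:

```typescript
// Validate tool arguments before they reach the database
function validateQueryArgs(args: unknown): { limit: number; status?: string } {
  if (typeof args !== "object" || args === null) {
    throw new Error("arguments must be an object");
  }
  const { limit, status } = args as Record<string, unknown>;
  if (
    typeof limit !== "number" ||
    !Number.isInteger(limit) ||
    limit < 1 ||
    limit > 100
  ) {
    throw new Error("limit must be an integer between 1 and 100");
  }
  if (
    status !== undefined &&
    !["active", "inactive", "pending"].includes(status as string)
  ) {
    throw new Error("status must be active, inactive, or pending");
  }
  return { limit, status: status as string | undefined };
}
```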

Deploying to Production

Here's what I've learned about getting MCP to production:

  1. Start with local testing - Use Claude Desktop with a local MCP server
  2. Add authentication early - Don't wait until the last minute
  3. Use a gateway - Azure API Management (APIM) as an OAuth 2.0 gateway, backed by Microsoft Entra ID, is one approach; Cloudflare, Kong, or custom middleware also work
  4. Monitor and log - You need visibility into what Claude is doing
  5. Version your tools - As your integrations evolve, maintain backwards compatibility

For more on production patterns, check out Building Production-Ready AI Agent Swarms: From MCP to Multi-Agent Orchestration.

MCP vs Traditional APIs

People often ask: why not just use REST APIs directly?

MCP provides a unified, plug-and-play interface that significantly reduces integration complexity. It cuts down on custom integration code while giving you better hooks for governance, observability, and security than ad-hoc API glue.

The key difference: MCP is designed for AI. It handles tool discovery, authentication, context passing, and error handling in ways REST APIs don't.

For a deeper comparison, see MCP vs Traditional Integration Patterns: The Architecture Decision Guide.

What's Actually Happening in 2026

MCP has become the de facto protocol for connecting AI systems to real-world data and tools. OpenAI, Google DeepMind, Microsoft, and thousands of developers building production agents have all adopted it.

The ecosystem is moving fast.

November 2025 brought major updates: asynchronous operations, statelessness, server identity, and an official community-driven registry for discovering MCP servers.

The numbers tell the story: 97 million monthly SDK downloads across Python and TypeScript, over 10,000 active servers, and first-class client support in Claude, ChatGPT, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code.

This isn't a niche thing anymore. It's infrastructure.

The Implementation Checklist

Before you build your MCP server, make sure you have:

  1. Clear tool definitions - What can Claude actually do?
  2. Authentication strategy - OAuth 2.0 with dynamic client registration
  3. Input validation - Validate everything Claude sends
  4. Error handling - What happens when things fail?
  5. Logging and monitoring - You need visibility
  6. Rate limiting - Prevent abuse
  7. Documentation - Document your tools clearly

Start with one tool. Get it working. Then add more.

Moving Forward

In 2026, the teams that succeed will not just have better models. They will have better context contracts, better workflows, and clearer boundaries.

MCP is the foundation for that. It's not magic. It's plumbing. But great plumbing lets you build great buildings.

The agents that matter won't be the ones with the best prompts. They'll be the ones with the best integrations. MCP makes building those integrations practical.

For more on building agents that work, check out MCP Protocol: The Game-Changer Making Claude AI Agents Actually Useful and MCP vs LangChain vs Semantic Kernel: AI Agent Framework Decision Guide.

Ready to build? Start with Building Production-Ready MCP Servers: From Architecture to Deployment.

Or get in touch if you want to discuss implementation patterns for your specific use case.