1. What is MCP and Why It Matters
The Model Context Protocol (MCP) is an open protocol developed by Anthropic that standardizes how AI assistants connect to external data sources and tools. Released in late 2024, MCP addresses a fundamental limitation that has plagued LLM applications: the gap between what models know and what they can actually do with real-world data.
Think of MCP as "USB-C for AI applications." Just as USB-C provides a universal way to connect devices to peripherals, MCP provides a universal way to connect AI models to data sources, APIs, and tools. Before MCP, each AI integration typically required custom code, a proprietary SDK, or fragile prompt engineering. MCP changes this by establishing a standard protocol that any model or application can implement.
MCP separates the protocol from the implementation. This means you can build an MCP server once and use it unchanged with any MCP-compatible client, whether that client is backed by Claude, GPT-4, Gemini, or another model.
The Problem MCP Solves
Modern LLMs are knowledge-rich but context-poor. They have vast training data but no access to:
- Your company's internal databases and APIs
- Real-time information (stock prices, weather, news)
- Private documents and knowledge bases
- Development environments (Git repos, CI/CD pipelines)
- External tools (calculators, code interpreters, search engines)
Traditional solutions like Retrieval-Augmented Generation (RAG) and function calling have limitations:
| Approach | Limitations | MCP Advantage |
|---|---|---|
| Prompt stuffing (RAG) | Context window limits, no real-time updates | Dynamic context, live data |
| Function Calling | Vendor-specific, requires client-side implementation | Standardized, server-side logic |
| Plugins | Platform-specific (ChatGPT plugins, etc.) | Open protocol, any platform |
| Custom APIs | High development overhead, maintenance burden | Reusable components, ecosystem |
Core Concepts
MCP is built around three fundamental primitives:
Resources are data sources that the model can read. These might be files, database queries, API responses, or computed values. Resources are identified by URIs and can be read, subscribed to, or listed.
Tools are functions that the model can call to perform actions. Unlike resources (which are passive), tools are active operations that can modify state, trigger workflows, or interact with external systems.
Prompts are pre-defined templates that help users accomplish specific tasks. They can include dynamic variables and can reference resources and tools.
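On the wire, each primitive surfaces through its own JSON-RPC methods (resources/list, tools/list, prompts/list). A rough sketch of the shapes a server might return for each; all names and URIs below are hypothetical examples, not part of any real server:

```python
# Illustrative shapes for the three MCP primitives, as a server
# might return them from its list methods.
resource = {
    "uri": "postgres:///users",  # resources are addressed by URI
    "name": "users",
    "mimeType": "application/json",
}
tool = {
    "name": "query",  # tools are invoked by name
    "description": "Run a read-only SQL query",
    "inputSchema": {  # JSON Schema describing the arguments
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}
prompt = {
    "name": "summarize-table",  # prompts are reusable templates
    "arguments": [{"name": "table", "required": True}],
}
```

Note the asymmetry: a resource is identified by a URI and read passively, while a tool is identified by name and carries a schema because calling it takes arguments.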
2. How MCP Works: Architecture Deep Dive
The Protocol Stack
MCP uses JSON-RPC 2.0 as its message format, carried over stdio (for local processes) or HTTP with Server-Sent Events (for remote servers). This choice provides several advantages:
- Simplicity: JSON-RPC is widely understood and easy to debug
- Bidirectional: Supports both requests and server-pushed notifications
- Transport agnostic: Works over pipes, sockets, or HTTP
The protocol has three layers:
- Transport Layer: Handles connection establishment, message framing, and error handling
- Protocol Layer: Defines the JSON-RPC methods for initialization, capability negotiation, and lifecycle management
- Application Layer: Implements resources, tools, and prompts specific to each server
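To make the protocol layer concrete, here is a sketch of a JSON-RPC 2.0 request and its matching response for a resources/read call. The URI is illustrative; a real client also tracks request IDs to pair responses with requests:

```python
import json

def make_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request envelope as MCP uses it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

def make_response(req_id, result):
    """Build the matching success response; the id must echo the request."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id, "result": result})

req = make_request(1, "resources/read", {"uri": "postgres:///users"})
resp = make_response(1, {"contents": [{"uri": "postgres:///users",
                                       "mimeType": "application/json",
                                       "text": "[]"}]})
```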
Connection Lifecycle
Every MCP connection follows a strict lifecycle:
┌─────────────┐ Initialize ┌─────────────┐
│ Client │ ──────────────────> │ Server │
│ (Host) │ │ (MCP App) │
└─────────────┘ └─────────────┘
│ │
│ <─────────────────────────────────│
│ Initialize Result │
│ (capabilities, protocol) │
│ │
│ ─────────────────────────────────>│
│ Initialized Notification │
│ │
│======== OPERATIONAL PHASE ========│
│ │
│ resources/read, tools/call, etc. │
│ │
│ <─────────────────────────────────│
│ notifications (optional) │
│ │
│ ─────────────────────────────────>│
│ Shutdown │
│ │
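The lifecycle above can be modeled as a small state machine. A minimal sketch of the legal transitions; real clients and servers additionally validate protocol versions and capabilities before entering the operational phase:

```python
# Minimal sketch of the MCP connection lifecycle as a state machine.
# State and message names here summarize the diagram; they are not
# literal protocol identifiers.
TRANSITIONS = {
    ("uninitialized", "initialize"): "initializing",
    ("initializing", "initialized"): "operational",
    ("operational", "shutdown"): "closed",
}

class Connection:
    def __init__(self):
        self.state = "uninitialized"

    def handle(self, message):
        key = (self.state, message)
        if key not in TRANSITIONS:
            raise RuntimeError(f"{message!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state
```

The strictness matters: operational messages like resources/read are only valid once the initialized notification has been sent, which is why the handshake is a three-step exchange rather than a single request/response.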
Capability Negotiation
During initialization, the client and server exchange capability declarations:
// Client capabilities
{
"protocolVersion": "2024-11-05",
"capabilities": {
"roots": { "listChanged": true },
"sampling": {}
},
"clientInfo": {
"name": "claude-desktop",
"version": "1.0.0"
}
}
// Server capabilities
{
"protocolVersion": "2024-11-05",
"capabilities": {
"resources": {
"subscribe": true,
"listChanged": true
},
"tools": { "listChanged": true },
"prompts": { "listChanged": true }
},
"serverInfo": {
"name": "postgres-mcp-server",
"version": "1.2.0"
}
}
This negotiation ensures both parties understand what features are available before any operational messages are exchanged.
Message Types
MCP defines several message categories:
Lifecycle Messages: initialize, initialized, shutdown
Resource Messages: resources/list, resources/read, resources/subscribe, resources/unsubscribe
Tool Messages: tools/list, tools/call
Prompt Messages: prompts/list, prompts/get
Notification Messages: notifications/resources/updated, notifications/tools/list_changed
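A server's request handler is essentially a dispatch table keyed by these method names. A minimal sketch, with stub handlers standing in for real logic:

```python
# Sketch of method-name dispatch over the MCP message categories.
# The handlers are stubs; a real server wires them to actual resources
# and tools.
def list_resources(params):
    return {"resources": []}

def call_tool(params):
    return {"content": [{"type": "text", "text": f"called {params['name']}"}]}

HANDLERS = {
    "resources/list": list_resources,
    "tools/call": call_tool,
}

def dispatch(method, params):
    handler = HANDLERS.get(method)
    if handler is None:
        # JSON-RPC 2.0 reserves -32601 for "method not found"
        return {"error": {"code": -32601, "message": f"Unknown method: {method}"}}
    return {"result": handler(params)}
```

The SDKs shown in the next section hide this table behind typed request handlers, but the underlying mechanics are the same.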
3. Building MCP Servers: Step-by-Step
Getting Started with the TypeScript SDK
Anthropic provides official SDKs for TypeScript and Python. Let's build a PostgreSQL MCP server that exposes database tables as resources and provides query tools.
First, initialize your project:
mkdir mcp-postgres-server
cd mcp-postgres-server
npm init -y
npm install @modelcontextprotocol/sdk pg zod
npm install -D @types/node typescript
Create your server implementation:
// src/index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
CallToolRequestSchema,
ListResourcesRequestSchema,
ListToolsRequestSchema,
ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { Pool } from "pg";
import { z } from "zod";
// Database connection
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
});
// Create MCP server
const server = new Server(
{
name: "postgres-mcp-server",
version: "1.0.0",
},
{
capabilities: {
resources: {},
tools: {},
},
}
);
// List available resources (database tables)
server.setRequestHandler(ListResourcesRequestSchema, async () => {
const result = await pool.query(`
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
`);
return {
resources: result.rows.map(row => ({
uri: `postgres:///${row.table_name}`,
name: row.table_name,
mimeType: "application/json",
description: `PostgreSQL table: ${row.table_name}`
}))
};
});
// Read a resource (table data)
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
const tableName = request.params.uri.replace("postgres:///", "");
// Validate table name to prevent injection
const validTable = /^[a-zA-Z_][a-zA-Z0-9_]*$/.test(tableName);
if (!validTable) {
throw new Error("Invalid table name");
}
const result = await pool.query(
`SELECT * FROM "${tableName}" LIMIT 100`
);
return {
contents: [{
uri: request.params.uri,
mimeType: "application/json",
text: JSON.stringify(result.rows, null, 2)
}]
};
});
// Define available tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
{
name: "query",
description: "Execute a read-only SQL query",
inputSchema: {
type: "object",
properties: {
sql: {
type: "string",
description: "The SQL query to execute (SELECT only)"
}
},
required: ["sql"]
}
},
{
name: "explain",
description: "Get query execution plan",
inputSchema: {
type: "object",
properties: {
sql: {
type: "string",
description: "The SQL query to explain"
}
},
required: ["sql"]
}
}
]
};
});
// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
if (name === "query") {
const sql = args.sql as string;
// Security: Only allow SELECT statements
if (!sql.trim().toLowerCase().startsWith("select")) {
throw new Error("Only SELECT queries are allowed");
}
const result = await pool.query(sql);
return {
content: [{
type: "text",
text: JSON.stringify(result.rows, null, 2)
}]
};
}
  if (name === "explain") {
    const sql = args.sql as string;
    // EXPLAIN ANALYZE actually executes the statement, so apply the
    // same SELECT-only restriction as the query tool
    if (!sql.trim().toLowerCase().startsWith("select")) {
      throw new Error("Only SELECT queries are allowed");
    }
    const result = await pool.query(`EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) ${sql}`);
    return {
      content: [{
        type: "text",
        text: JSON.stringify(result.rows[0]["QUERY PLAN"], null, 2)
      }]
    };
  }
throw new Error(`Unknown tool: ${name}`);
});
// Start server
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
  // Log to stderr: stdout is reserved for the JSON-RPC stream
  console.error("PostgreSQL MCP Server running on stdio");
}
main().catch(console.error);
Building with Python
For Python developers, the SDK provides similar capabilities:
# server.py
import asyncio
import os
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Resource, Tool, TextContent
import psycopg2
from psycopg2.extras import RealDictCursor
app = Server("postgres-mcp-server")
def get_db_connection():
return psycopg2.connect(
os.environ["DATABASE_URL"],
cursor_factory=RealDictCursor
)
@app.list_resources()
async def list_resources() -> list[Resource]:
conn = get_db_connection()
try:
with conn.cursor() as cur:
cur.execute("""
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
""")
tables = cur.fetchall()
return [
Resource(
uri=f"postgres:///{table['table_name']}",
name=table['table_name'],
mimeType="application/json",
description=f"PostgreSQL table: {table['table_name']}"
)
for table in tables
]
finally:
conn.close()
@app.read_resource()
async def read_resource(uri: str) -> str:
    import json
    import re

    table_name = uri.replace("postgres:///", "")
    # Validate the table name: identifiers cannot be passed as bound
    # query parameters, so whitelist the characters instead
    if not re.fullmatch(r"[a-zA-Z_][a-zA-Z0-9_]*", table_name):
        raise ValueError(f"Invalid table name: {table_name}")
    conn = get_db_connection()
    try:
        with conn.cursor() as cur:
            cur.execute(f'SELECT * FROM "{table_name}" LIMIT 100')
            rows = cur.fetchall()
            return json.dumps(rows, default=str, indent=2)
    finally:
        conn.close()
@app.list_tools()
async def list_tools() -> list[Tool]:
return [
Tool(
name="query",
description="Execute a read-only SQL query",
inputSchema={
"type": "object",
"properties": {
"sql": {
"type": "string",
"description": "The SQL query to execute (SELECT only)"
}
},
"required": ["sql"]
}
)
]
@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
if name == "query":
sql = arguments["sql"]
if not sql.strip().lower().startswith("select"):
raise ValueError("Only SELECT queries are allowed")
conn = get_db_connection()
try:
with conn.cursor() as cur:
cur.execute(sql)
rows = cur.fetchall()
import json
return [TextContent(
type="text",
text=json.dumps(rows, default=str, indent=2)
)]
finally:
conn.close()
raise ValueError(f"Unknown tool: {name}")
async def main():
async with stdio_server() as (read_stream, write_stream):
await app.run(
read_stream,
write_stream,
app.create_initialization_options()
)
if __name__ == "__main__":
asyncio.run(main())
Configuration and Deployment
To use your MCP server with Claude Desktop, add it to the client configuration file. The file lives at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS, %APPDATA%\Claude\claude_desktop_config.json on Windows, and ~/.config/Claude/claude_desktop_config.json on Linux:
{
"mcpServers": {
"postgres": {
"command": "node",
"args": ["/path/to/mcp-postgres-server/dist/index.js"],
"env": {
"DATABASE_URL": "postgresql://user:pass@localhost/mydb"
}
}
}
}
Always validate and sanitize inputs in MCP servers. The example above restricts queries to SELECT statements only. For production use, implement proper authentication, rate limiting, and query whitelisting.
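The startsWith("select") check in the examples above is deliberately naive: it misses multi-statement payloads and comment smuggling. A slightly more defensive sketch, still a heuristic rather than a substitute for a read-only database role or a real SQL parser:

```python
import re

def is_safe_select(sql: str) -> bool:
    """Reject anything that is not a single, plain SELECT statement.

    Heuristic sketch only: production servers should rely on a
    read-only database role and/or an actual SQL parser.
    """
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                        # multiple statements
        return False
    if "--" in stripped or "/*" in stripped:   # comment smuggling
        return False
    return bool(re.match(r"(?is)^\s*select\b", stripped))
```

Even this rejects some legitimate queries (e.g. SELECTs containing string literals with semicolons), which is the usual trade-off with string-level filtering; defense in depth at the database layer is the more robust control.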
4. MCP vs Function Calling vs Plugins
Understanding when to use MCP versus alternatives is crucial for architectural decisions:
| Feature | MCP | Function Calling | Plugins |
|---|---|---|---|
| Standardization | Open protocol | Vendor-specific | Platform-specific |
| Transport | stdio, HTTP, SSE | HTTP API | HTTP API |
| Server Location | Local or remote | Client-side | Remote |
| State Management | Stateful connections | Stateless | Stateless |
| Discovery | Dynamic | Static schema | Static manifest |
| Multi-Client | Yes | No | No |
When to Use Each Approach
Use MCP when:
- You want to build reusable integrations that work across multiple AI clients
- You need bidirectional communication (server can push updates)
- You want to expose resources that can be read and subscribed to
- You're building infrastructure that should outlive any single AI model
Use Function Calling when:
- You need tight integration with a specific model's API
- You're building a simple, stateless integration
- You want the model to handle all orchestration logic
Use Plugins when:
- You're targeting a specific platform (ChatGPT, etc.)
- You need deep UI integration with the host application
- You want to leverage platform-specific distribution
5. Real-World Use Cases
Database Integration
MCP servers can expose database schemas as resources and provide safe query tools. This enables AI assistants to:
- Explore database structure without writing SQL
- Generate queries based on natural language questions
- Analyze data patterns and generate reports
- Validate query results against expected schemas
Git Repository Management
A Git MCP server can expose:
- Resources: Commits, branches, diffs, file contents at specific revisions
- Tools: Create branch, commit changes, run git commands
- Prompts: "Generate a commit message for these changes"
API Integration
MCP servers can wrap any REST or GraphQL API:
// Example: Stripe MCP Server
{
"tools": [
{
"name": "create_customer",
"description": "Create a new Stripe customer",
"inputSchema": { ... }
},
{
"name": "list_invoices",
"description": "List customer invoices",
"inputSchema": { ... }
}
],
"resources": [
{
"uri": "stripe://customers",
"name": "Customers",
"description": "List of all Stripe customers"
}
]
}
Development Environment
IDEs can use MCP to provide AI assistants with:
- Access to project files and structure
- Build and test execution capabilities
- Integration with version control
- Real-time error and diagnostic information
6. Security Considerations
Input Validation
Always validate inputs before processing:
// Use Zod for runtime validation
const QuerySchema = z.object({
sql: z.string()
.min(1)
.max(10000)
.refine(
sql => sql.trim().toLowerCase().startsWith("select"),
"Only SELECT queries are allowed"
)
});
const result = QuerySchema.safeParse(args);
if (!result.success) {
throw new Error("Invalid input: " + result.error.message);
}
Authentication and Authorization
MCP servers should implement proper access controls:
- Use environment variables for secrets, never hardcode credentials
- Implement rate limiting to prevent abuse
- Use least-privilege database connections
- Consider OAuth for user-specific resources
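Rate limiting can be as simple as a token bucket checked at the top of each tool handler. A sketch; the capacity and refill values are illustrative, not MCP requirements:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter for tool calls.

    Capacity and refill rate below are illustrative defaults.
    """
    def __init__(self, capacity: int = 10, refill_per_sec: float = 1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A tool handler would call allow() before doing any work and return an error (or a JSON-RPC error response) when the bucket is empty.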
Sandboxing
For untrusted MCP servers:
- Run in isolated containers
- Limit network access with firewalls
- Use read-only filesystems where possible
- Monitor resource usage
MCP servers run with the permissions of the host process. A malicious MCP server could exfiltrate data, modify files, or execute arbitrary code. Only install MCP servers from trusted sources and review their code before use.
7. The Future of MCP
Ecosystem Growth
Since its release, MCP has seen rapid adoption:
- Official servers: Anthropic maintains servers for PostgreSQL, SQLite, Git, and more
- Community servers: Hundreds of community-built servers for popular services
- Client support: Claude Desktop, Cursor, and other AI tools adding MCP support
Protocol Evolution
The MCP specification is actively evolving. Areas of focus include:
- Remote servers: Better support for HTTP/SSE transport for cloud-hosted MCP servers
- Authentication: Standardized auth flows for multi-user scenarios
- Streaming: Support for streaming responses from tools
- Composability: MCP servers that can call other MCP servers
Industry Impact
MCP represents a shift in how we think about AI integration:
- Decoupling: AI capabilities are no longer tied to specific models or platforms
- Composability: Complex AI applications can be built by combining simple, focused MCP servers
- Openness: An open protocol prevents vendor lock-in and encourages innovation
Getting Involved
To stay current with MCP development:
- Follow the official GitHub organization
- Join the community Discord for discussions and support
- Contribute to the specification and SDKs
- Build and share your own MCP servers
MCP is more than a protocol—it's a fundamental shift in how AI systems integrate with the world. By standardizing the interface between models and tools, MCP enables a new generation of AI applications that are more capable, more secure, and more portable than ever before. Whether you're building a simple database connector or a complex multi-system integration, MCP provides the foundation you need.