
Building AI-Native Developer Tools with the Model Context Protocol

Learn how the Model Context Protocol (MCP) is transforming AI-assisted development. Real-world examples of building MCP servers for code search, secrets, and knowledge management.


thelacanians

What MCP Changes

The Model Context Protocol, introduced by Anthropic, solves a fundamental problem in AI-assisted development: how do you give an AI model structured access to your tools, data, and services without building custom integrations for every combination?

Before MCP, connecting Claude to your codebase meant copy-pasting files into the chat. Connecting it to your database meant manually running queries and pasting results. Connecting it to your deployment pipeline meant… well, you didn’t.

MCP provides a standard protocol for AI models to discover and invoke tools, access resources, and maintain context across interactions. It is to AI tools what REST is to web APIs: a shared convention that lets everything interoperate.

We have been building MCP servers since the protocol’s early days, and we now use them in every development workflow. This post covers what we have built, what we have learned, and how you can build your own.

The Architecture

An MCP server is a lightweight process that exposes capabilities to an AI model through a standardized interface. The model discovers what tools are available, understands their parameters through JSON Schema definitions, and invokes them as needed.

┌─────────────┐     MCP Protocol     ┌─────────────────┐
│  AI Model   │◄────────────────────►│   MCP Server    │
│  (Claude)   │  tools / resources   │  (your code)    │
└─────────────┘                      └────────┬────────┘
                                              │
                                     ┌────────▼────────┐
                                     │  Your Services  │
                                     │  - Database     │
                                     │  - File System  │
                                     │  - APIs         │
                                     └─────────────────┘

The key insight is that the MCP server is not an agent. It does not make decisions. It exposes capabilities and lets the AI model decide when and how to use them. This separation of concerns is what makes the protocol powerful and composable.
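Concretely, discovery happens over JSON-RPC: the client calls tools/list and the server responds with its tool catalog, each entry carrying a JSON Schema for its inputs. A simplified sketch of such a response (the search_docs tool here is just an example):

{
  "tools": [
    {
      "name": "search_docs",
      "description": "Search the project documentation",
      "inputSchema": {
        "type": "object",
        "properties": {
          "query": { "type": "string", "description": "Search query" }
        },
        "required": ["query"]
      }
    }
  ]
}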

What We Built

vecgrep: Semantic Code Search

vecgrep is our first MCP server, and arguably our most impactful. It indexes your codebase using vector embeddings and exposes semantic search through MCP.

Instead of searching for exact string matches, you search for concepts:

Query: "where is user authentication handled"
Results:
  src/middleware/auth.ts:15 - JWT verification middleware
  src/routes/login.ts:42 - Login endpoint with bcrypt comparison
  src/lib/session.ts:8 - Session management and token refresh

This changes how AI models navigate code. Instead of requiring exact file paths or function names, the model can express intent and get relevant results. It is particularly valuable in large codebases where even experienced developers do not know where everything lives.

The MCP integration means Claude can search the codebase autonomously during a conversation. When you ask “how does our billing work?”, it searches, reads the relevant files, and gives you an informed answer.
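To make that concrete, here is a sketch of how a vecgrep-style tool could be registered with the TypeScript MCP SDK (shown in full later in this post). The tool name, parameters, and semanticSearch helper are illustrative, not vecgrep's actual API:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';

const server = new McpServer({ name: 'vecgrep', version: '0.1.0' });

// Placeholder for the embedding-backed index lookup
async function semanticSearch(query: string, limit: number) {
  return [
    { file: 'src/middleware/auth.ts', line: 15, snippet: 'JWT verification middleware', relevance: 0.91 },
  ].slice(0, limit);
}

server.tool(
  'semantic_search',
  'Search the codebase by concept rather than exact string match',
  {
    query: z.string().describe('Natural-language description of the code you want'),
    limit: z.number().optional().describe('Max results'),
  },
  async ({ query, limit = 10 }) => ({
    content: [{ type: 'text', text: JSON.stringify(await semanticSearch(query, limit), null, 2) }],
  })
);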

tinyvault: Local Secret Management

tinyvault is a local-first secret manager exposed through MCP. It solves a specific problem: how do you let an AI model help with configuration and deployment without exposing secrets in the conversation?

// The AI can request secrets by name without seeing the values
// tinyvault returns them to the runtime, not the conversation

// MCP tool: vault_get
// Input: { key: "STRIPE_SECRET_KEY", environment: "staging" }
// Result: Secret loaded into environment, value not displayed

// MCP tool: vault_list
// Input: { environment: "production" }
// Result: ["DATABASE_URL", "STRIPE_SECRET_KEY", "RESEND_API_KEY"]

The AI knows which secrets exist and can reference them in configuration files and deployment scripts, but never sees the actual values. This is a meaningful security improvement over the common pattern of pasting .env files into AI chat windows.
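One way to implement that boundary, sketched with the same TypeScript SDK; readSecret is a placeholder for the local encrypted store:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';

const server = new McpServer({ name: 'tinyvault', version: '0.1.0' });

// Placeholder for decrypting a value from the local store
async function readSecret(key: string, environment: string): Promise<string> {
  return 'decrypted-value';
}

server.tool(
  'vault_get',
  'Load a secret into the runtime environment without revealing its value',
  {
    key: z.string().describe('Secret name'),
    environment: z.string().describe('Target environment'),
  },
  async ({ key, environment }) => {
    // The value goes into the server process environment, never into the reply
    process.env[key] = await readSecret(key, environment);
    return {
      content: [{ type: 'text', text: `Loaded ${key} (${environment}) into the environment` }],
    };
  }
);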

noted: Knowledge Base

noted is a structured knowledge base that maintains project context across AI sessions. It stores architectural decisions, API documentation, and meeting notes in a searchable format.

The MCP integration lets Claude access this context naturally. When you ask about a decision made three months ago, the AI can look it up rather than hallucinating an answer or asking you to repeat yourself.

file.cheap: File Processing

Our file conversion and processing tool, also MCP-enabled. It handles the mundane but frequent task of converting between formats, extracting text from PDFs, and processing uploads — all accessible to the AI during development.

Building Your Own MCP Server

The protocol is straightforward to implement. Here is a minimal MCP server in TypeScript:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({
  name: 'my-tool',
  version: '1.0.0',
});

// Placeholder: wire this up to your actual documentation index
async function searchDocumentation(query: string, limit: number) {
  return [{ title: `Results for "${query}"`, score: 1.0 }].slice(0, limit);
}

// Define a tool
server.tool(
  'search_docs',
  'Search the project documentation',
  {
    query: z.string().describe('Search query'),
    limit: z.number().optional().describe('Max results'),
  },
  async ({ query, limit = 5 }) => {
    const results = await searchDocumentation(query, limit);
    return {
      content: [{
        type: 'text',
        text: JSON.stringify(results, null, 2),
      }],
    };
  }
);

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

That is the entire boilerplate. The MCP SDK handles protocol negotiation, tool discovery, and message framing. You focus on implementing the actual functionality.
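To use the server, register it with an MCP client. In Claude Desktop, for example, that means an entry in claude_desktop_config.json (the path here is illustrative):

{
  "mcpServers": {
    "my-tool": {
      "command": "node",
      "args": ["/absolute/path/to/my-tool/dist/server.js"]
    }
  }
}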

Design Principles

After building several MCP servers, we have developed strong opinions about what makes them effective:

One server, one concern. Do not build a mega-server that handles code search, deployment, and database management. Build three small servers. This matches how AI models reason about tools — they select from a menu of specific capabilities.

Return structured data. AI models work better with JSON than with formatted text. Return structured results and let the model decide how to present them:

// Good: structured, parseable
return {
  content: [{
    type: 'text',
    text: JSON.stringify({
      file: 'src/auth.ts',
      line: 42,
      snippet: 'export function verifyToken(token: string)',
      relevance: 0.94,
    }),
  }],
};

// Avoid: formatted for humans, harder for AI to parse
return {
  content: [{
    type: 'text',
    text: '📁 src/auth.ts (line 42) - verifyToken function [94% match]',
  }],
};

Fail gracefully with useful errors. When a tool invocation fails, return an error message that helps the AI model recover. “File not found: src/auth.ts” is better than a stack trace. “No results for query ‘authentication’ — try broader terms” is better than an empty array.
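As a sketch, the search_docs handler from above could signal an empty result with the isError flag on the tool result rather than returning an empty array:

async ({ query, limit = 5 }) => {
  const results = await searchDocumentation(query, limit);
  if (results.length === 0) {
    // A recoverable error with a hint about what to try next
    return {
      isError: true,
      content: [{
        type: 'text',
        text: `No results for query '${query}' — try broader terms`,
      }],
    };
  }
  return {
    content: [{ type: 'text', text: JSON.stringify(results, null, 2) }],
  };
}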

Keep tools focused. A tool that accepts 15 parameters is hard for the AI to use correctly. Break it into multiple tools with clear, specific purposes. search_by_content and search_by_filename are better than search with a mode parameter.
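Sketched against the same SDK, with contentSearch and filenameSearch as placeholder implementations:

// Placeholder implementations
const contentSearch = async (query: string) => [{ file: 'src/auth.ts', line: 42 }];
const filenameSearch = async (pattern: string) => ['src/auth.ts'];

server.tool(
  'search_by_content',
  'Find files whose contents match a query',
  { query: z.string().describe('Text to search for') },
  async ({ query }) => ({
    content: [{ type: 'text', text: JSON.stringify(await contentSearch(query)) }],
  })
);

server.tool(
  'search_by_filename',
  'Find files whose names match a pattern',
  { pattern: z.string().describe('Filename pattern or substring') },
  async ({ pattern }) => ({
    content: [{ type: 'text', text: JSON.stringify(await filenameSearch(pattern)) }],
  })
);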

The MCP Ecosystem

The MCP ecosystem is growing rapidly. Beyond custom servers, there are community-built servers for:

  • Databases: PostgreSQL, SQLite, MongoDB with query tools
  • Cloud providers: AWS, GCP, Vercel with deployment and monitoring tools
  • Development tools: Git operations, GitHub issues and PRs, CI/CD pipelines
  • Communication: Slack, email, calendar integration

The pattern is consistent: take a tool developers already use, wrap it in an MCP server, and suddenly the AI model can use it too. The compound effect is significant. When your AI assistant can search code, check deployment status, read documentation, and query the database all within a single conversation, the productivity gain is multiplicative.

Where This Goes Next

MCP is still early. The protocol will evolve, the ecosystem will mature, and the patterns we use today will be refined. But the fundamental direction is clear: AI models will interact with developer tools through standardized protocols rather than ad-hoc integrations.

The teams that invest in building MCP-enabled toolchains now will have a significant productivity advantage over those who wait. Not because the protocol is magic, but because it enables a workflow where the AI model is a genuine participant in the development process rather than a glorified autocomplete.

We are open-sourcing all of our MCP servers. If you are building your own, we want to hear about it. The best tools in this space will come from developers solving their own problems and sharing the solutions.