
Building MCP Servers with TypeScript: A Technical Overview

A developer-focused guide to building production MCP servers using the official TypeScript SDK, covering tool definitions, transports, authentication, and deployment patterns.

The MCP Architecture

At its core, an MCP server is a JSON-RPC service that responds to a specific set of methods defined by the protocol. The server declares its capabilities (tools, resources, prompts) and handles requests from AI clients.

The official @modelcontextprotocol/sdk package handles the protocol layer. You focus on defining tools and implementing their logic.
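To make the protocol layer concrete, here's a sketch of the JSON-RPC 2.0 envelope a client sends for a tool call. The SDK builds and parses these messages for you; the tool name and arguments below are illustrative:

```typescript
// Illustrative JSON-RPC 2.0 envelope for an MCP tools/call request.
// You never construct this by hand -- the SDK handles it -- but this is
// what crosses the wire. Tool name and arguments are hypothetical.
const toolCallRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "search_customers",
    arguments: { query: "acme", limit: 5 },
  },
};

console.log(JSON.stringify(toolCallRequest, null, 2));
```

The server's reply is the matching JSON-RPC response carrying the tool's result, which is what your handler functions return.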

Setting Up

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-product-server",
  version: "1.0.0",
});

That's your server. Now you define tools.

Defining Tools

Tools are the primary way AI interacts with your product. Each tool has a name, description, input schema, and handler function.

server.tool(
  "search_customers",
  "Search for customers by name, email, or company. Returns matching customer records.",
  {
    query: z.string().describe("Search term — name, email, or company"),
    limit: z.number().optional().default(10).describe("Max results to return"),
  },
  async ({ query, limit }) => {
    const customers = await db.customers.search(query, { limit });
    return {
      content: [{
        type: "text",
        text: JSON.stringify(customers, null, 2),
      }],
    };
  }
);

The description matters. The AI uses the description to decide when to call this tool. Write it like you're explaining the tool to a smart colleague who's never used your product.

Parameter descriptions matter too. The AI reads these to understand what values to pass. Be specific about formats, constraints, and defaults.

Transport Options

MCP supports multiple transport mechanisms:

Stdio — The server communicates over standard input/output. Best for local tools that run on the user's machine (CLI tools, local file access). The AI client spawns the server as a subprocess.
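For stdio servers, the client needs to know how to spawn the process. A typical client configuration entry (this is Claude Desktop's format; the server name and path are hypothetical) looks like:

```json
{
  "mcpServers": {
    "my-product-server": {
      "command": "node",
      "args": ["dist/server.js"]
    }
  }
}
```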

Streamable HTTP — The server exposes a single HTTP endpoint. Best for remote servers hosted on the web. This is what you'd use for a SaaS MCP server deployed on Vercel, AWS, or similar.

import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: undefined, // Stateless mode
});

await server.connect(transport);

SSE (Server-Sent Events) — A legacy transport that some older clients still use. Streamable HTTP is the modern replacement.

For most SaaS use cases, Streamable HTTP is the right choice. It works with serverless platforms, scales well, and doesn't require persistent connections.

Authentication

Your MCP server needs to verify that the requesting user has access to the data they're asking for. Common patterns:

API Key in headers — The AI client passes an API key with each request. Your server validates it against your existing auth system.

OAuth — MCP supports OAuth flows for user-level authentication. The AI client handles the OAuth dance and passes tokens to your server.

Service-level access — For internal tools, the MCP server might use a service account that has access to all data, with the AI client specifying which user/account to query.
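As a minimal sketch of the API-key pattern — the validation function here is a stand-in for a lookup against your real auth system:

```typescript
// Hypothetical bearer-token check, run before any tool logic executes.
// validateApiKey is a stand-in: in production you'd hash the token and
// look it up in your database or auth service.
function validateApiKey(token: string): { userId: string } | null {
  return token === "sk-test-123" ? { userId: "user_42" } : null;
}

function authenticate(headers: Record<string, string | undefined>) {
  const header = headers["authorization"] ?? "";
  const [scheme, token] = header.split(" ");
  if (scheme !== "Bearer" || !token) {
    throw new Error("Missing or malformed Authorization header");
  }
  const identity = validateApiKey(token);
  if (!identity) throw new Error("Invalid API key");
  return identity; // attach this to the request so tools can scope queries
}

console.log(authenticate({ authorization: "Bearer sk-test-123" }).userId);
```

Whichever pattern you use, resolve the caller's identity before any tool runs, and scope every database query to that identity.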

Error Handling

Tools should return errors gracefully so the AI can communicate them to the user:

server.tool(
  "get_order",
  "Look up an order by ID",
  { orderId: z.string() },
  async ({ orderId }) => {
    const order = await db.orders.findById(orderId);

    if (!order) {
      return {
        content: [{ type: "text", text: `Order ${orderId} not found.` }],
        isError: true,
      };
    }

    return {
      content: [{ type: "text", text: JSON.stringify(order, null, 2) }],
    };
  }
);

The isError: true flag tells the AI that something went wrong, so it can handle it appropriately in the conversation.

Deployment Patterns

Next.js API Route (Vercel)

Deploy your MCP server as a Next.js API route. One file, one endpoint:

// app/api/mcp/route.ts
// The SDK's Streamable HTTP transport expects Node req/res objects, so in
// the App Router's fetch-based runtime the simplest path is an adapter.
// One common approach is the mcp-handler package, which wraps the
// transport and session handling for you:
import { createMcpHandler } from "mcp-handler";

const handler = createMcpHandler((server) => {
  registerTools(server); // your tool registration helper
});

export { handler as GET, handler as POST };

This is the simplest deployment model. Serverless, auto-scaling, zero infrastructure to manage.

Standalone Node.js Server

For more control or if you need persistent state:

import express from "express";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // Stateless mode: a fresh server + transport per request, so any
  // instance can serve any request.
  const server = new McpServer({ name: "my-server", version: "1.0.0" });
  registerTools(server); // your tool registration helper

  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // Stateless mode
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3001);

Docker Container

For self-hosted deployments in a customer's infrastructure:

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
CMD ["node", "dist/server.js"]

Testing

Test your MCP server with the MCP Inspector:

npx @modelcontextprotocol/inspector

This gives you an interactive UI to connect to your server, list tools, and make test calls. Essential for development.

You can also test programmatically with the MCP client SDK for integration tests.

Production Checklist

Before deploying your MCP server to production:

  • Rate limiting — Protect your APIs from excessive tool calls
  • Input validation — Zod schemas handle this, but double-check edge cases
  • Logging — Log every tool call for debugging and usage analytics
  • Monitoring — Alert on error rates and latency spikes
  • Documentation — Write clear descriptions for every tool and parameter
  • Versioning — Plan for how you'll evolve your tool surface without breaking existing clients
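Most of these are ordinary web-service concerns, but rate limiting tool calls is worth sketching. A minimal fixed-window limiter keyed by API key or user ID might look like this — the state is per-process, so a multi-instance deployment would need shared storage such as Redis:

```typescript
// Hypothetical in-memory fixed-window rate limiter. Call allowCall()
// with the caller's key before executing a tool; reject when it returns
// false. Window and limit values are illustrative.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_CALLS = 30;     // calls allowed per window

const windows = new Map<string, { start: number; count: number }>();

function allowCall(key: string, now: number = Date.now()): boolean {
  const w = windows.get(key);
  if (!w || now - w.start >= WINDOW_MS) {
    // New key, or the previous window expired: start a fresh window.
    windows.set(key, { start: now, count: 1 });
    return true;
  }
  if (w.count >= MAX_CALLS) return false;
  w.count += 1;
  return true;
}
```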

Getting Expert Help

Building an MCP server isn't just a coding problem — it's a design problem. Choosing the right tool surface, writing descriptions that AI understands, handling edge cases in multi-step workflows — these decisions determine whether your MCP server is actually useful or just technically functional.

We specialize in designing and building MCP servers for SaaS products. If you want to skip the learning curve and ship a production-quality MCP server, book a consultation.

Ready to make your product AI-accessible?

Book a consultation to discuss how an MCP server can work for your business.

Schedule a Consultation