
Build Your First MCP Server in TypeScript

Step-by-step tutorial to build an MCP server in TypeScript with @modelcontextprotocol/sdk and Zod. Three tools, stdio transport, Inspector debugging, Claude Desktop/Cursor integration, and npm publish.

Abhishek Patel · 16 min read



From Zero to Published MCP Server in One Afternoon

Building a Model Context Protocol server used to mean reading a 60-page spec and hand-rolling JSON-RPC framing. Not anymore. The official @modelcontextprotocol/sdk for TypeScript does the protocol work for you — you write tools, resources, and prompts, and the SDK handles negotiation, transport, and message framing. I've shipped four MCP servers to npm in the last six months, and the path from npm init to "working inside Claude Desktop" is under 30 minutes if you know the moves.

This is the tutorial I wish existed when I started. We're building a GitHub issues server with three tools (list, create, comment), Zod validation, stdio and HTTP transports, Claude Desktop + Cursor + Claude Code integration, MCP Inspector debugging, and an npm publish step. If you want the conceptual groundwork first, the MCP protocol overview covers the architecture and primitives — this piece assumes you've read it (or don't care and just want code).

Last updated: April 2026 — verified @modelcontextprotocol/sdk 1.19, Node 22 LTS, Claude Desktop 1.0.43 config path, and npm publish flow.

What You'll Build

Definition: An MCP server is a small Node process that exposes capabilities (tools, resources, prompts) to AI hosts like Claude Desktop or Cursor over JSON-RPC. The host calls your tools when the model decides they're needed; you return structured results the model reads into its context window.

By the end, you'll have an npm package called mcp-github-issues that any MCP client can install with one line. It exposes:

  • Tool list_issues — list open or closed issues for a repo, filtered by label
  • Tool create_issue — create a new issue with title, body, labels
  • Tool comment_issue — post a comment on an existing issue
  • Resource github://<owner>/<repo>/README — fetch the README for repo context
  • Prompt triage_issues — structured triage workflow template

The edge-case handling — retry-on-429, secondary rate limits, streaming large comment threads — is the kind of thing I send to the newsletter. What's below is the foundation that keeps working once you bolt those on.

Step 1: Scaffold the Project

Start with a clean directory. MCP servers are small by design — you don't need a monorepo, a build framework, or anything fancy. One TypeScript file compiled to one JS entry point is enough for most servers.

mkdir mcp-github-issues && cd mcp-github-issues
npm init -y
npm install @modelcontextprotocol/sdk zod @octokit/rest
npm install -D typescript @types/node tsx
npx tsc --init --target es2022 --module node16 --moduleResolution node16 \
  --outDir dist --rootDir src --declaration --strict

Three dependencies do the heavy lifting. The official TypeScript SDK handles protocol mechanics. Zod gives you runtime schema validation that the SDK also serializes to JSON Schema for the wire format. Octokit is the GitHub client — swap it for whatever API you're wrapping.

Open package.json and set the module type and Node engine:

{
  "name": "mcp-github-issues",
  "version": "0.1.0",
  "description": "MCP server for GitHub Issues",
  "type": "module",
  "bin": {
    "mcp-github-issues": "dist/index.js"
  },
  "files": ["dist"],
  "engines": {
    "node": ">=20.0.0"
  },
  "scripts": {
    "build": "tsc",
    "dev": "tsx src/index.ts",
    "inspector": "npx @modelcontextprotocol/inspector tsx src/index.ts"
  }
}

Two lines matter more than the rest. "type": "module" turns the whole package into ESM, which the MCP SDK expects. The bin entry is what makes npx mcp-github-issues work after publish — that's how Claude Desktop will launch your server. Remember the file path dist/index.js; we come back to it in the publish step.

Step 2: Implement the Tools Server

Create src/index.ts. This is the whole server — transport setup, three tools, one resource, one prompt.

#!/usr/bin/env node
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { Octokit } from "@octokit/rest";
import { z } from "zod";

const token = process.env.GITHUB_TOKEN;
if (!token) {
  console.error("GITHUB_TOKEN environment variable required");
  process.exit(1);
}

const octokit = new Octokit({ auth: token });

const server = new McpServer({
  name: "github-issues",
  version: "0.1.0",
});

// Tool 1: list_issues
server.tool(
  "list_issues",
  "List issues in a GitHub repository. Filter by state (open/closed) and labels.",
  {
    owner: z.string().describe("Repository owner (user or organization)"),
    repo: z.string().describe("Repository name"),
    state: z.enum(["open", "closed", "all"]).default("open"),
    labels: z.array(z.string()).optional().describe("Filter by label names"),
    per_page: z.number().int().min(1).max(100).default(30),
  },
  async ({ owner, repo, state, labels, per_page }) => {
    const { data } = await octokit.issues.listForRepo({
      owner,
      repo,
      state,
      labels: labels?.join(","),
      per_page,
    });
    const summary = data
      .filter((i) => !i.pull_request) // filter out PRs
      .map((i) => ({
        number: i.number,
        title: i.title,
        state: i.state,
        labels: i.labels.map((l) => (typeof l === "string" ? l : l.name)),
        url: i.html_url,
      }));
    return {
      content: [{ type: "text", text: JSON.stringify(summary, null, 2) }],
    };
  }
);

// Tool 2: create_issue
server.tool(
  "create_issue",
  "Create a new issue in a GitHub repository.",
  {
    owner: z.string(),
    repo: z.string(),
    title: z.string().min(1),
    body: z.string().optional(),
    labels: z.array(z.string()).optional(),
  },
  async ({ owner, repo, title, body, labels }) => {
    const { data } = await octokit.issues.create({
      owner,
      repo,
      title,
      body,
      labels,
    });
    return {
      content: [
        {
          type: "text",
          text: `Created issue #${data.number}: ${data.html_url}`,
        },
      ],
    };
  }
);

// Tool 3: comment_issue
server.tool(
  "comment_issue",
  "Post a comment on an existing GitHub issue.",
  {
    owner: z.string(),
    repo: z.string(),
    issue_number: z.number().int().positive(),
    body: z.string().min(1),
  },
  async ({ owner, repo, issue_number, body }) => {
    const { data } = await octokit.issues.createComment({
      owner,
      repo,
      issue_number,
      body,
    });
    return {
      content: [
        { type: "text", text: `Comment posted: ${data.html_url}` },
      ],
    };
  }
);

// Resource: README for any repo, parameterized via ResourceTemplate
// (ResourceTemplate comes from "@modelcontextprotocol/sdk/server/mcp.js")
server.resource(
  "readme",
  new ResourceTemplate("github://{owner}/{repo}/README", { list: undefined }),
  { description: "README contents for a GitHub repository" },
  async (uri, { owner, repo }) => {
    const { data } = await octokit.repos.getReadme({
      owner: String(owner),
      repo: String(repo),
    });
    const text = Buffer.from(data.content, "base64").toString("utf8");
    return {
      contents: [{ uri: uri.href, mimeType: "text/markdown", text }],
    };
  }
);

// Prompt: triage workflow
server.prompt(
  "triage_issues",
  "Triage open issues for a repo by priority and assignability.",
  { owner: z.string(), repo: z.string() },
  ({ owner, repo }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `List the open issues in ${owner}/${repo}. For each, suggest priority (P0/P1/P2/P3), estimated complexity (S/M/L), and whether it can be assigned to a junior engineer. Return a markdown table.`,
        },
      },
    ],
  })
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("mcp-github-issues running on stdio");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

Three things here that burn people if they miss them. First, the shebang #!/usr/bin/env node at the top is load-bearing — without it, the bin entry in package.json won't execute after publish. Second, every log statement uses console.error, not console.log. Stdio transport multiplexes JSON-RPC messages over stdout; a stray console.log corrupts the stream and your server dies silently. Third, Zod schemas double as runtime validation AND the JSON Schema the client sees — the descriptions you write end up in tool descriptions the model reads when deciding whether to call.
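
If you want structured logs rather than bare console.error calls, a stderr-only helper keeps the protocol stream clean. A minimal sketch (field names are illustrative):

```typescript
type Level = "debug" | "info" | "error";

// Build the log line separately so it's easy to test.
function formatLog(level: Level, message: string, extra?: unknown): string {
  return JSON.stringify({
    ts: new Date().toISOString(),
    level,
    message,
    ...(extra !== undefined ? { extra } : {}),
  });
}

// Every level writes to stderr; stdout stays reserved for JSON-RPC frames.
function log(level: Level, message: string, extra?: unknown): void {
  process.stderr.write(formatLog(level, message, extra) + "\n");
}

log("info", "tool invoked", { tool: "list_issues" });
```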

Watch out: If you're porting a Node server from CommonJS, you'll hit the ESM import wall. The SDK is ESM-only. Adding "type": "module" is mandatory, and every local import needs the .js extension even though you're writing .ts. The TypeScript compiler doesn't add it; you do.

Step 3: Debug With MCP Inspector

You cannot debug MCP servers with console.log. You need the MCP Inspector — a web UI that spawns your server, lets you list tools/resources/prompts, invoke them interactively, and watch JSON-RPC traffic in real time. Install and launch in one command:

npm run inspector
# or directly:
GITHUB_TOKEN=ghp_xxx npx @modelcontextprotocol/inspector tsx src/index.ts

It opens a browser UI at http://localhost:6274. Click Connect, then Tools — your three tools appear with their schemas. Click list_issues, fill in owner: "vercel" and repo: "next.js", and invoke. You see the exact JSON-RPC request, the response payload, and any stderr output your server emitted. This is 10x faster than iterating through Claude Desktop restarts.
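
The request in the Inspector message log follows the MCP tools/call shape, roughly this (the id is whatever the session counter is up to):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "list_issues",
    "arguments": { "owner": "vercel", "repo": "next.js" }
  }
}
```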

The first time I built an MCP server I spent two hours debugging a "server not responding" error that turned out to be a console.log in a @octokit/rest retry interceptor. Inspector would have caught it in 30 seconds — the corrupted stream shows up in the message log as unparseable JSON.

Step 4: Wire Into Claude Desktop, Cursor, and Claude Code

Three clients, same config shape. You're editing JSON files that tell each client how to launch your server.

Claude Desktop

Config lives at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS, %APPDATA%\Claude\claude_desktop_config.json on Windows. For local development, point at the dev script:

{
  "mcpServers": {
    "github-issues": {
      "command": "npx",
      "args": ["tsx", "/absolute/path/to/mcp-github-issues/src/index.ts"],
      "env": {
        "GITHUB_TOKEN": "ghp_your_token_here"
      }
    }
  }
}

Restart Claude Desktop (fully quit, not just close the window). In a new conversation, type "list open issues in vercel/next.js" — Claude sees your tools via the protocol and calls list_issues automatically.

Cursor

Open Settings → MCP, click Add New MCP Server. Paste the same shape — Cursor uses identical config. If you're running multiple coding assistants side by side, each reads MCP servers from its own config file; they don't share state.

Claude Code

Drop a .mcp.json file in your project root:

{
  "mcpServers": {
    "github-issues": {
      "command": "npx",
      "args": ["-y", "mcp-github-issues"],
      "env": {
        "GITHUB_TOKEN": "ghp_your_token_here"
      }
    }
  }
}

Claude Code auto-detects .mcp.json per project, which is the cleanest pattern — different projects can have different server sets. The -y tells npx to auto-install the published package (Step 6); during local development, swap for the tsx form above.

Step 5: Choose Transport — stdio vs SSE vs Streamable HTTP

The SDK supports three transports. Your choice depends on who runs the server and where.

| Transport | Use when | Auth | Deploy |
| --- | --- | --- | --- |
| stdio | Local server, one user, one machine | OS process isolation | npm package launched by host |
| SSE (legacy) | Remote server, HTTP + Server-Sent Events | Bearer token or OAuth 2.1 | Any Node host |
| Streamable HTTP | Remote, proxy/CDN compatible (2026 default) | OAuth 2.1 with resource indicators | Edge or container |

Stdio is what we've built so far and the right choice for 90% of cases. You publish the server, a user adds it to their config, the host spawns it as a subprocess per session. No ports, no auth layer, no TLS certs.

For remote servers, skip SSE and go straight to Streamable HTTP (the 2026 default in SDK 1.13+; SSE is kept for backward compat but being retired). Here's a minimal HTTP variant using Hono as a lightweight Node server framework:

import { Hono } from "hono";
import { serve, type HttpBindings } from "@hono/node-server";
import { RESPONSE_ALREADY_SENT } from "@hono/node-server/utils/response";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = new Hono<{ Bindings: HttpBindings }>();

app.all("/mcp", async (c) => {
  // Stateless mode: a fresh transport per request, no session bookkeeping.
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });
  await server.connect(transport);
  // handleRequest expects Node's raw req/res objects, which
  // @hono/node-server exposes on the bindings.
  await transport.handleRequest(c.env.incoming, c.env.outgoing);
  return RESPONSE_ALREADY_SENT;
});

serve({ fetch: app.fetch, port: 3000 });

Once deployed behind TLS with OAuth, clients add the URL instead of a command — {"url": "https://mcp.example.com/mcp", "auth": {"type": "oauth"}}.

Step 6: Package and Publish to npm

The publish step turns your prototype into something users can install with one command. Compile, verify the binary, publish.

# Compile TypeScript to dist/
npm run build

# Make the output executable for local testing (npm sets the exec bit on
# bin entries at install time)
chmod +x dist/index.js

# Local smoke test — the server should start, print its stderr banner,
# and sit waiting on stdio (Ctrl-C to exit)
GITHUB_TOKEN=ghp_xxx ./dist/index.js

# Dry-run publish to see what's in the tarball
npm publish --dry-run

# Real publish
npm publish --access public

Four things to verify before publishing. The tarball should be under 100 KB — if it's 5 MB, you forgot to set "files": ["dist"] and you're shipping node_modules. The bin path must match the compiled output exactly (dist/index.js, not src/index.ts). The shebang must be in the compiled JS — tsc preserves a shebang only when it's the very first line of the source file, so keep #!/usr/bin/env node at the top of src/index.ts and spot-check dist/index.js after building. And the package name needs to be available; MCP servers conventionally use mcp- or @scope/mcp- prefixes.

Pro tip: Add a prepublishOnly script that runs build and a quick Inspector self-check. It saved me from shipping a server with a broken tool schema twice. The lifecycle hook runs before the tarball is assembled, so a failure aborts the publish — belt and suspenders for a step you cannot undo.
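
A minimal version of that hook (the Inspector self-check stays manual here; node --check at least guarantees the compiled entry point parses):

```json
{
  "scripts": {
    "build": "tsc",
    "prepublishOnly": "npm run build && node --check dist/index.js"
  }
}
```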

Step 7: Production Hardening

The version above works and ships. It's not production-ready for a multi-tenant server. Four gaps close the distance.

Authentication

Stdio servers inherit the user's environment — if GITHUB_TOKEN is in their config, they authorized it. For HTTP servers, implement OAuth 2.1 with PKCE. The SDK has helpers in @modelcontextprotocol/sdk/server/auth; wire the authorization server metadata into your /.well-known/oauth-authorization-server endpoint. Rotate the GITHUB_TOKEN on a schedule, and keep the scope minimal — public_repo for open-source work, never a full-access token.

Rate Limiting

GitHub's API allows 5,000 requests per hour per token. A chatty MCP client will burn through that in minutes if unconstrained. Add a token-bucket limiter in the tool handler — the bottleneck npm package is a one-liner wrapper. The wire protocol supports returning structured errors; return { isError: true, content: [{ type: "text", text: "Rate limited, retry in Ns" }] } rather than throwing, and the model will back off.
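
If you'd rather not pull in a dependency, a token bucket is a few lines. A sketch — capacity and refill rate here are illustrative, and bottleneck adds request queuing on top of this:

```typescript
class TokenBucket {
  private tokens: number;
  private last = Date.now();

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
  }

  // Refill based on elapsed time, then try to spend one token.
  tryTake(): boolean {
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.refillPerSec
    );
    this.last = now;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

const githubBucket = new TokenBucket(10, 1); // burst of 10, ~1 req/sec sustained

// In a tool handler: return a structured error instead of throwing,
// so the model sees the message and backs off.
function rateLimitedResult() {
  return {
    isError: true,
    content: [{ type: "text" as const, text: "Rate limited, retry in a few seconds" }],
  };
}
```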

Observability

Log every tool invocation with tool name, caller session ID, duration, and error state. Ship to stderr in stdio mode (the host pipes it to logs), or to OpenTelemetry for HTTP. Noisy tools are almost always the source of MCP latency — you can't fix what you can't see. If you're already using OpenTelemetry for distributed tracing, reuse the instrumentation.
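
One way to get that without touching each handler is a wrapper. A sketch, with the result type simplified to text content:

```typescript
type ToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };

// Wrap a handler so every invocation logs tool name, duration, and
// error state to stderr.
function withLogging<A>(
  name: string,
  handler: (args: A) => Promise<ToolResult>
): (args: A) => Promise<ToolResult> {
  return async (args) => {
    const start = Date.now();
    try {
      const result = await handler(args);
      process.stderr.write(`${name} ok ${Date.now() - start}ms\n`);
      return result;
    } catch (err) {
      process.stderr.write(`${name} error ${Date.now() - start}ms\n`);
      throw err;
    }
  };
}

// Usage: pass the wrapped handler where you'd pass the bare one, e.g.
// server.tool("list_issues", desc, schema, withLogging("list_issues", handler));
```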

Performance

Cold starts matter less than you'd think because the host holds the process for the session. What matters is keeping tool handlers fast — anything over 2 seconds feels laggy in an interactive chat. Cache aggressively, batch when you can, and follow the Node.js performance tuning playbook for memory and event-loop hygiene. If you're curious about runtime picks, Bun vs Node.js vs Deno benchmarks show Bun starting the SDK ~40% faster than Node 22 — relevant for servers spawned per session.
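
"Cache aggressively" can be as simple as an in-process TTL map keyed by the argument JSON. A sketch; TTL values are per-tool judgment calls:

```typescript
const cache = new Map<string, { value: string; expires: number }>();

function cached(key: string, ttlMs: number, compute: () => string): string {
  const hit = cache.get(key);
  const now = Date.now();
  if (hit && hit.expires > now) return hit.value; // fresh hit, skip the API call
  const value = compute();
  cache.set(key, { value, expires: now + ttlMs });
  return value;
}

// In a handler, key on the tool's arguments:
const key = JSON.stringify({ tool: "list_issues", owner: "vercel", repo: "next.js" });
```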

Step 8: Compose With Other MCP Servers

Single-purpose servers compose. Your GitHub issues server combined with a PostgreSQL server and a Slack server gives Claude a complete operations toolkit without you writing a mega-server. This is the core benefit of the protocol — one tool per concern, composed at the host layer. AI agent frameworks like LangGraph and AutoGen are converging on MCP as the integration layer too, so a well-built server ports across agent runtimes with zero changes.

The ecosystem signals tell you which servers are healthy. Look at download count on npm, commit recency, and whether the server is listed on the official MCP servers directory. Low-quality servers typically have stale READMEs, no examples, and vague tool descriptions — the model can't use what it can't understand.

flowchart LR
  A[Claude Desktop / Cursor / Claude Code] -->|stdio JSON-RPC| B[Your MCP Server]
  B -->|Tool call| C[GitHub API]
  B -->|Tool call| D[(Postgres)]
  B -->|Resource| E[/README files/]
  A -.SSE/Streamable HTTP.-> F[Remote MCP Server]
  F -->|OAuth 2.1| G[External Service]

Common Pitfalls

The four mistakes that cost me hours on my first servers. Avoid them.

  1. Logging to stdout. Every console.log in stdio mode corrupts the protocol stream. Use console.error or a logger configured to stderr.
  2. Thin tool descriptions. "Lists issues" is useless. "List issues in a GitHub repository. Filter by state (open/closed) and labels. Returns issue number, title, state, labels, and URL" tells the model when to call it.
  3. Missing the shebang. Without #!/usr/bin/env node in the compiled output, the npm bin entry won't execute. Test with chmod +x dist/index.js && ./dist/index.js.
  4. Forgetting .js extensions on local imports. ESM demands them. TypeScript will compile without complaining, then fail at runtime.

Frequently Asked Questions

How do I build an MCP server in TypeScript?

Install @modelcontextprotocol/sdk and zod, create an McpServer instance, register tools with server.tool(name, description, zodSchema, handler), connect a StdioServerTransport, and run with tsx during development. The SDK handles all JSON-RPC framing — you only write tool handlers and Zod schemas. A minimal working server is under 50 lines.

What is the difference between server.tool, server.resource, and server.prompt?

Tools are model-invoked functions (side effects allowed), resources are application-fetched read-only data addressed by URI, and prompts are user-invoked templated workflows. Use tools for actions like creating an issue, resources for context like a README, and prompts for structured multi-step workflows the user explicitly triggers from a slash-command menu.

How do I test an MCP server without Claude Desktop?

Use the MCP Inspector — npx @modelcontextprotocol/inspector tsx src/index.ts — which opens a browser UI at localhost:6274. It lets you list tools, invoke them with custom inputs, inspect resources, and see the exact JSON-RPC traffic. It's 10x faster than iterating through Claude Desktop restarts, and catches stream-corruption bugs (stray console.log) immediately.

Should I use stdio or SSE transport for my MCP server?

Use stdio if the server runs on the user's machine — no networking, no auth, the host spawns it as a subprocess. Use Streamable HTTP (not SSE, which is legacy) for remote servers serving multiple clients, with OAuth 2.1 for auth. About 90% of MCP servers should be stdio; reach for HTTP only when you need centralized credentials or shared infrastructure.

How do I publish an MCP server to npm?

Set "type": "module" and a bin entry pointing at your compiled output (dist/index.js), add a #!/usr/bin/env node shebang at the top of the source, run tsc to build, chmod +x dist/index.js to make it executable, then npm publish --access public. Verify with npm publish --dry-run first — the tarball should be under 100 KB if files is configured correctly.

How do I add an MCP server to Cursor or Claude Code?

For Cursor, go to Settings → MCP → Add New MCP Server and paste the same JSON shape used for Claude Desktop. For Claude Code, create a .mcp.json file in your project root with mcpServers keyed by server name. Both use identical config (command, args, env). Cursor reads from user settings; Claude Code reads per-project from .mcp.json, so different repos can have different server sets.

Can MCP servers call external APIs like GitHub or Slack?

Yes — that's the primary use case. Wrap the API client (@octokit/rest for GitHub, @slack/web-api for Slack) in tool handlers, validate inputs with Zod, and return the response as structured text. The model reads the JSON and reasons about it. Handle rate limits inside the handler (return isError: true on 429 so the model backs off) and keep credentials in environment variables, never in code.

From Boilerplate to Shipped

The protocol's job is to get out of your way so you can focus on the tool logic, and once you've built one server you've built them all — the pattern repeats. Start with the GitHub issues server above as a template, swap the Octokit calls for whatever API you want to wrap, and you're 80% done before you've written a line of protocol code. Build one now and every new MCP client you adopt gets your integrations for free.


Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
