mcp-builder
A guide to building high-quality MCP (Model Context Protocol) servers: well-designed tools that let large language models interact with external services. Use it when developing MCP servers in Python (FastMCP) or Node/TypeScript (MCP SDK) to integrate external APIs or services.
MCP Server Development Guide
Overview
To create high-quality MCP (Model Context Protocol) servers that enable LLMs to effectively interact with external services, use this skill. An MCP server provides tools that allow LLMs to access external services and APIs. The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks using the tools provided.
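For orientation, here is a minimal sketch of what such a server can look like using the Python SDK's FastMCP API; the server name and the example tool are illustrative placeholders, not part of this skill.

```python
# Minimal FastMCP server sketch. "example-service" and the tool below are
# placeholders; a real server would wrap an actual external API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-service")


@mcp.tool()
def get_service_status(service_name: str) -> str:
    """Return the current status of the named service."""
    # A real implementation would call the external service's API here.
    return f"{service_name}: operational"


if __name__ == "__main__":
    # Blocks and serves MCP requests over stdio until the client disconnects.
    mcp.run()
```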
Process
🚀 High-Level Workflow
Creating a high-quality MCP server involves four main phases:
Phase 1: Deep Research and Planning
1.1 Understand Agent-Centric Design Principles
Before diving into implementation, understand how to design tools for AI agents by reviewing these principles:
- Build for Workflows, Not Just API Endpoints: consolidate related operations (e.g., a schedule_event tool that both checks availability and creates the event; see the sketch after this list)
- Optimize for Limited Context:
- Design Actionable Error Messages:
- Follow Natural Task Subdivisions:
- Use Evaluation-Driven Development:
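As an illustration of the workflow-first principle, the sketch below consolidates "check availability" and "create event" into a single hypothetical schedule_event tool; the in-memory list is a stand-in for a real calendar API.

```python
# Workflow-first sketch: one tool that checks availability AND books the slot,
# instead of exposing two raw endpoints. The list below stands in for real
# calendar state.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar")

_booked_slots: list[str] = []  # placeholder for a real calendar backend


@mcp.tool()
def schedule_event(title: str, start_time: str, duration_minutes: int = 30) -> str:
    """Check availability and create a calendar event in a single step."""
    if start_time in _booked_slots:
        # Actionable error: say what to do next, not just that it failed.
        return f"{start_time} is already booked. Suggest another start_time to the user."
    _booked_slots.append(start_time)
    # A concise confirmation keeps the result cheap in the model's context.
    return f"Created '{title}' at {start_time} for {duration_minutes} minutes."
```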
1.3 Study MCP Protocol Documentation
Fetch the latest MCP protocol documentation:
Use WebFetch to load: https://modelcontextprotocol.io/llms-full.txt
This comprehensive document contains the complete MCP specification and guidelines.
1.4 Study Framework Documentation
Load and read the following reference files:
For Python implementations, also load: https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md
For Node/TypeScript implementations, also load: https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md
1.5 Exhaustively Study API Documentation
To integrate a service, read through ALL available API documentation for that service.
To gather comprehensive information, use web search and the WebFetch tool as needed.
1.6 Create a Comprehensive Implementation Plan
Based on your research, create a detailed plan that includes:
- Tool Selection:
- Shared Utilities and Helpers:
- Input/Output Design:
- Error Handling Strategy:
Phase 2: Implementation
Now that you have a comprehensive plan, begin implementation following language-specific best practices.
2.1 Set Up Project Structure
For Python:
- A single .py file, or organized into modules if complex (see 🐍 Python Guide)
For Node/TypeScript:
- Set up package.json and tsconfig.json
2.2 Implement Core Infrastructure First
To begin implementation, create shared utilities before implementing tools:
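For example, a shared request helper and a truncation helper might look like the following sketch; it assumes httpx as the HTTP client, and the base URL, environment variable, and character limit are placeholders.

```python
# Sketch of shared infrastructure written before any tools. Assumes httpx as
# the HTTP client; EXAMPLE_API_BASE, EXAMPLE_API_KEY, and the limit are placeholders.
import os

import httpx

EXAMPLE_API_BASE = "https://api.example.com/v1"
MAX_RESPONSE_CHARS = 25_000  # protect the model's context from huge payloads


def api_get(path: str, params: dict | None = None) -> dict:
    """Authenticated GET that returns parsed JSON or raises an actionable error."""
    headers = {"Authorization": f"Bearer {os.environ.get('EXAMPLE_API_KEY', '')}"}
    try:
        response = httpx.get(
            f"{EXAMPLE_API_BASE}{path}", params=params, headers=headers, timeout=30.0
        )
        response.raise_for_status()
    except httpx.HTTPStatusError as exc:
        raise ValueError(
            f"Request to {path} failed with status {exc.response.status_code}. "
            "Check the parameters, or retry later if this is a server-side error."
        ) from exc
    return response.json()


def truncate(text: str, limit: int = MAX_RESPONSE_CHARS) -> str:
    """Trim long output and say so, rather than silently flooding the context."""
    if len(text) <= limit:
        return text
    return text[:limit] + f"\n[Truncated at {limit} characters - narrow the query for more.]"
```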
2.3 Implement Tools Systematically
For each tool in the plan, work through the following steps (a combined sketch follows this list):
- Define Input Schema:
- Write Comprehensive Docstrings/Descriptions:
- Implement Tool Logic:
- Add Tool Annotations:
  - readOnlyHint: true (for read-only operations)
  - destructiveHint: false (for non-destructive operations)
  - idempotentHint: true (if repeated calls have the same effect)
  - openWorldHint: true (if interacting with external systems)
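The sketch below combines these steps for one hypothetical search_issues tool. It reuses the api_get helper sketched in step 2.2, the "items" response shape is an assumption, and the exact way inputs are declared (a single Pydantic model here, or plain typed parameters) depends on your plan and SDK version.

```python
# Combined sketch for one hypothetical tool. SearchIssuesInput, search_issues,
# and the "items" response shape are assumptions; api_get is the shared helper
# sketched in step 2.2.
from pydantic import BaseModel, ConfigDict, Field
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-service")


class SearchIssuesInput(BaseModel):
    """Input schema for search_issues."""

    model_config = ConfigDict(extra="forbid")  # reject unexpected arguments

    query: str = Field(description="Free-text search, e.g. 'login timeout'")
    limit: int = Field(default=10, ge=1, le=50, description="Maximum results to return")


# Annotations such as readOnlyHint can also be attached at registration time,
# depending on the SDK version in use.
@mcp.tool()
def search_issues(params: SearchIssuesInput) -> str:
    """Search the issue tracker and return a concise Markdown list of matches.

    Use this to find issue ids for follow-up tools; results are summarized
    rather than returned as raw API JSON.
    """
    data = api_get("/issues", {"q": params.query, "per_page": params.limit})
    items = data.get("items", [])
    if not items:
        # Actionable, model-readable failure message.
        return f"No issues matched '{params.query}'. Try broader or different keywords."
    return "\n".join(f"- #{i['id']}: {i['title']} ({i['state']})" for i in items)
```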
2.4 Follow Language-Specific Best Practices
At this point, load the appropriate language guide:
For Python: Load 🐍 Python Implementation Guide and ensure the following:
- Pydantic models are used for inputs, with model_config configured correctly
For Node/TypeScript: Load ⚡ TypeScript Implementation Guide and ensure the following:
- Tools are registered with server.registerTool properly
- Zod input schemas use .strict()
- No any types - use proper types
- The build completes without errors (npm run build)
Phase 3: Review and Refine
After initial implementation:
3.1 Code Quality Review
To ensure quality, review the code for:
3.2 Test and Build
Important: MCP servers are long-running processes that wait for requests over stdio or SSE/HTTP. Running one directly in your main process (e.g., python server.py or node dist/index.js) will cause that process to hang indefinitely.
Safe ways to test the server (a scripted version follows this list):
- Run it under a timeout: timeout 5s python server.py
- For Python: check that it compiles with python -m py_compile your_server.py
- For Node/TypeScript: run npm run build and ensure it completes without errors
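If you prefer to script the timeout check, a small smoke test along these lines also works ("server.py" is a placeholder filename).

```python
# Optional scripted smoke test: run the server in a subprocess with a timeout
# instead of blocking your main process. "server.py" is a placeholder filename.
import subprocess

try:
    result = subprocess.run(
        ["python", "server.py"],
        capture_output=True,
        text=True,
        timeout=5,  # a healthy server should still be running when this expires
    )
    # Exiting this quickly usually means a startup crash - inspect stderr.
    print(f"Server exited early (code {result.returncode}):\n{result.stderr}")
except subprocess.TimeoutExpired:
    print("Server stayed up for 5 seconds - startup looks healthy.")
```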
3.3 Use Quality Checklist
To verify implementation quality, load the appropriate checklist from the language-specific guide.
Phase 4: Create Evaluations
After implementing your MCP server, create comprehensive evaluations to test its effectiveness.
Load ✅ Evaluation Guide for complete evaluation guidelines.
4.1 Understand Evaluation Purpose
Evaluations test whether LLMs can effectively use your MCP server to answer realistic, complex questions.
4.2 Create 10 Evaluation Questions
To create effective evaluations, follow the process outlined in the evaluation guide:
4.3 Evaluation Requirements
Each question must be:
4.4 Output Format
Create an XML file with this structure:
<evaluation>
  <qa_pair>
    <question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question>
    <answer>3</answer>
  </qa_pair>
  <!-- More qa_pairs... -->
</evaluation>
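As a quick structural check before running the provided evaluation scripts, a sketch like this can validate the file ("evaluation.xml" is a placeholder path).

```python
# Quick structural check for an evaluation file before running the provided
# scripts. "evaluation.xml" is a placeholder path.
import xml.etree.ElementTree as ET

root = ET.parse("evaluation.xml").getroot()
assert root.tag == "evaluation", "Root element should be <evaluation>"
pairs = root.findall("qa_pair")
assert len(pairs) == 10, f"Expected 10 qa_pairs, found {len(pairs)}"
for pair in pairs:
    question = (pair.findtext("question") or "").strip()
    answer = (pair.findtext("answer") or "").strip()
    assert question and answer, "Each qa_pair needs a non-empty <question> and <answer>"
print("Evaluation file structure looks valid.")
```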
Reference Files
📚 Documentation Library
Load these resources as needed during development:
Core MCP Documentation (Load First)
- https://modelcontextprotocol.io/llms-full.txt - Complete MCP specification
- Server and tool naming conventions
- Response format guidelines (JSON vs Markdown)
- Pagination best practices
- Character limits and truncation strategies
- Tool development guidelines
- Security and error handling standards
SDK Documentation (Load During Phase 1/2)
- https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md
- https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md
Language-Specific Implementation Guides (Load During Phase 2)
🐍 Python Implementation Guide
- Server initialization patterns
- Pydantic model examples
- Tool registration with @mcp.tool
- Complete working examples
- Quality checklist
⚡ TypeScript Implementation Guide
- Project structure
- Zod schema patterns
- Tool registration with server.registerTool
- Complete working examples
- Quality checklist
Evaluation Guide (Load During Phase 4)
- Question creation guidelines
- Answer verification strategies
- XML format specifications
- Example questions and answers
- Running an evaluation with the provided scripts