
A lightweight wrapper to make Anthropic's Model Context Protocol (MCP) tools compatible with LangChain and LangGraph.

MIT License · Python · 2.0k stars · langchain-ai · Last Updated: 2025-06-09

LangChain MCP Adapters - Project Details

Project Overview

LangChain MCP Adapters is a lightweight wrapper library designed to seamlessly integrate Anthropic's Model Context Protocol (MCP) tools with the LangChain and LangGraph ecosystems. The project bridges compatibility gaps between the two tool frameworks, letting developers use MCP tools directly within LangChain/LangGraph to build more powerful and flexible AI agent applications.

Project Address: https://github.com/langchain-ai/langchain-mcp-adapters

Core Features and Characteristics

🔧 Tool Conversion and Adaptation

  • MCP to LangChain Tool Conversion: Automatically converts MCP tools into LangChain tool format.
  • Seamless Integration: Converted tools can be directly used in LangGraph agents.
  • Type Safety: Maintains the original tool's type information and parameter validation.
  • Asynchronous Support: Fully supports asynchronous tool operations.

📦 Multi-Server Client

  • Multi-Server Connection: Simultaneously connects to multiple MCP servers.
  • Unified Tool Management: Loads and manages tools from different servers.
  • Server Configuration: Supports flexible server parameter configuration.
  • Connection Pool Management: Efficient connection resource management.
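The aggregation idea behind the multi-server client can be illustrated with a small, library-free sketch. The `aggregate_tools` function and the server dicts below are illustrative stand-ins, not the real MCP wire format or the library's API (the real client exposes tools via `get_tools()`):

```python
# Illustrative sketch: merge tool lists from several (mock) MCP servers
# into one registry, prefixing tool names with the server name to
# avoid collisions.

def aggregate_tools(servers: dict[str, list[dict]]) -> dict[str, dict]:
    """Build a unified registry of tools keyed by 'server.tool' names."""
    registry: dict[str, dict] = {}
    for server_name, tools in servers.items():
        for tool in tools:
            registry[f"{server_name}.{tool['name']}"] = tool
    return registry

servers = {
    "math": [{"name": "add", "description": "Add two numbers"}],
    "weather": [{"name": "get_weather", "description": "Get weather"}],
}

registry = aggregate_tools(servers)
print(sorted(registry))  # ['math.add', 'weather.get_weather']
```

Prefixing by server name is one simple way to keep tools from different servers unambiguous when they share names.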

🌐 Transport Protocol Support

  • STDIO Transport: Supports standard input/output transport protocol.
  • SSE Transport: Supports server-sent events transport protocol.
  • Multi-Protocol Mixing: Different servers can use different transport protocols.
  • Automatic Reconnection: Automatic reconnection mechanism when the connection is lost.
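The library's exact reconnection behavior is not specified here, but the general pattern behind such mechanisms is retry with exponential backoff. A minimal stdlib sketch, where the flaky `connect` callable is a stand-in for a transport that drops:

```python
import time

def with_retries(connect, attempts=3, base_delay=0.01):
    """Call `connect`, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))  # back off, then retry

# A transport stand-in that fails twice, then succeeds.
state = {"calls": 0}

def flaky_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("connection lost")
    return "connected"

result = with_retries(flaky_connect)
print(result)  # connected
```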

🤖 LangGraph Integration

  • ReAct Agent: Seamless integration with LangGraph's prebuilt ReAct agent.
  • Asynchronous Agent: Supports asynchronous agent execution.
  • Toolchain Combination: Supports complex toolchains and workflows.
  • State Management: Maintains agent execution state.

Technical Architecture

Core Components

LangChain/LangGraph Application
         ↓
LangChain MCP Adapters
         ↓
MCP Client Implementation
         ↓
Multiple MCP Servers (Math, Weather, etc.)

Tool Conversion Process

  1. MCP Tool Discovery: Retrieves a list of available tools from the MCP server.
  2. Tool Metadata Parsing: Parses tool name, description, and parameters.
  3. LangChain Tool Creation: Creates compatible LangChain tool objects.
  4. Agent Integration: Registers the tools with the LangGraph agent.
  5. Execution Forwarding: Forwards LangChain tool calls to the MCP server.
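The five steps above can be sketched end to end in plain Python. The tool spec dict and the `call_tool` function are illustrative stand-ins for the MCP client API, not the library's actual internals:

```python
import asyncio

# Steps 1-2: metadata for a discovered MCP tool (illustrative stand-in).
mcp_tool_spec = {"name": "add", "description": "Add two numbers",
                 "parameters": {"a": int, "b": int}}

# Step 5 target: the "server-side" execution the wrapper forwards to.
async def call_tool(name: str, arguments: dict):
    if name == "add":
        return arguments["a"] + arguments["b"]
    raise ValueError(f"unknown tool: {name}")

# Step 3: build a LangChain-style async callable from the metadata.
def make_langchain_tool(spec):
    async def tool_fn(**kwargs):
        # Parameter validation derived from the parsed metadata.
        for param, typ in spec["parameters"].items():
            if not isinstance(kwargs.get(param), typ):
                raise TypeError(f"{param} must be {typ.__name__}")
        return await call_tool(spec["name"], kwargs)  # Step 5: forward
    tool_fn.__name__ = spec["name"]
    tool_fn.__doc__ = spec["description"]
    return tool_fn

# Step 4: "register" the tool, then invoke it.
add = make_langchain_tool(mcp_tool_spec)
result = asyncio.run(add(a=3, b=5))
print(result)  # 8
```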

Installation and Usage

Quick Installation

# Basic installation
pip install langchain-mcp-adapters

# Complete development environment
pip install langchain-mcp-adapters langgraph langchain-openai

Environment Configuration

# Set OpenAI API key
export OPENAI_API_KEY=<your_api_key>

Usage Examples

Basic Example: Math Server

1. Create MCP Server

# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")

2. Create LangGraph Agent

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# Initialize model
model = ChatOpenAI(model="gpt-4o")

# Configure server parameters
server_params = StdioServerParameters(
    command="python",
    args=["/path/to/math_server.py"],
)

# Create agent and execute (top-level await is not valid in a script,
# so the session is driven from an async main function)
async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Load MCP tools
            tools = await load_mcp_tools(session)

            # Create agent
            agent = create_react_agent(model, tools)

            # Execute query
            response = await agent.ainvoke({
                "messages": "what's (3 + 5) x 12?"
            })
            print(response["messages"][-1].content)

asyncio.run(main())

Advanced Example: Multi-Server Integration

1. Weather Server

# weather_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Weather")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather for location."""
    return f"It's always sunny in {location}"

if __name__ == "__main__":
    mcp.run(transport="sse")

2. Multi-Server Client

import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")

# Multi-server configuration (wrapped in an async function so that
# `async with` and `await` are valid)
async def main():
    async with MultiServerMCPClient({
        "math": {
            "command": "python",
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            "url": "http://localhost:8000/sse",
            "transport": "sse",
        }
    }) as client:
        # Create agent
        agent = create_react_agent(model, client.get_tools())

        # Math operation
        math_response = await agent.ainvoke({
            "messages": "what's (3 + 5) x 12?"
        })

        # Weather query
        weather_response = await agent.ainvoke({
            "messages": "what is the weather in NYC?"
        })

asyncio.run(main())

LangGraph API Server Integration

1. Graph Configuration File

# graph.py
from contextlib import asynccontextmanager
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-latest")

@asynccontextmanager
async def make_graph():
    async with MultiServerMCPClient({
        "math": {
            "command": "python",
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            "url": "http://localhost:8000/sse",
            "transport": "sse",
        }
    }) as client:
        agent = create_react_agent(model, client.get_tools())
        yield agent

2. LangGraph Configuration (langgraph.json)

{
  "dependencies": ["."],
  "graphs": {
    "agent": "./graph.py:make_graph"
  }
}

Application Scenarios and Advantages

🎯 Application Scenarios

Enterprise-Level AI Agents

  • Multi-Function Integration: Integrates various business tools and APIs.
  • Workflow Automation: Builds complex business process automation.
  • Data Processing: Integrates different data sources and processing tools.

Developer Toolchains

  • Code Generation: Integrates code generation and analysis tools.
  • Test Automation: Builds intelligent testing agents.
  • DevOps Integration: Automates deployment and monitoring.

Research and Education

  • Scientific Computing: Integrates mathematical and scientific computing tools.
  • Data Analysis: Builds intelligent data analysis assistants.
  • Teaching Assistants: Creates interactive learning tools.

✨ Technical Advantages

Ecosystem Interoperability

  • Standardized Interface: Follows MCP standard protocol.
  • Broad Compatibility: Seamless integration with the LangChain ecosystem.
  • Extensibility: Supports custom tool and protocol extensions.

Development Efficiency Improvement

  • Quick Integration: Integrates MCP tools with just a few lines of code.
  • Type Safety: Complete type hints and validation.
  • Error Handling: Comprehensive error handling and retry mechanisms.

Performance and Reliability

  • Asynchronous Support: High-performance asynchronous operations.
  • Connection Management: Intelligent connection pool and reconnection mechanisms.
  • Resource Optimization: Efficient resource usage and management.

Technical Specifications

Supported Protocol Versions

  • MCP Protocol: Compatible with the latest MCP protocol specifications.
  • LangChain: Supports LangChain 0.1+ versions.
  • LangGraph: Supports the latest version of LangGraph.

Transport Protocols

  • STDIO: Standard input/output transport.
  • SSE: Server-sent events.
  • HTTP: HTTP-based RESTful API.
  • WebSocket: Real-time bidirectional communication (planned).

Tool Type Support

  • Synchronous Tools: Traditional synchronous function tools.
  • Asynchronous Tools: High-performance asynchronous tools.
  • Streaming Tools: Supports streaming output.
  • Stateful Tools: Supports state management tools.
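When a traditional synchronous tool runs inside an async agent loop, a common pattern is to offload it to a worker thread so it does not block native async tools. A stdlib sketch with illustrative function names:

```python
import asyncio
import time

def slow_sync_tool(x: int) -> int:
    """A traditional synchronous tool that blocks while it works."""
    time.sleep(0.01)
    return x * 2

async def fast_async_tool(x: int) -> int:
    """A native asynchronous tool."""
    await asyncio.sleep(0.01)
    return x + 1

async def main():
    # Run the blocking tool in a worker thread so it does not stall
    # the event loop, concurrently with the native async tool.
    doubled, incremented = await asyncio.gather(
        asyncio.to_thread(slow_sync_tool, 21),
        fast_async_tool(41),
    )
    return doubled, incremented

print(asyncio.run(main()))  # (42, 42)
```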

Best Practices

🔧 Development Recommendations

Tool Design

  • Single Function: Each tool focuses on a single function.
  • Clear Parameters: Provides clear parameter descriptions and types.
  • Error Handling: Implements comprehensive error handling logic.
  • Complete Documentation: Provides detailed tool documentation.

Performance Optimization

  • Connection Reuse: Reuses MCP server connections instead of reconnecting per call.
  • Async First: Prefers asynchronous tools and operations where available.
  • Resource Management: Releases unneeded resources promptly.
  • Caching Strategy: Caches deterministic results judiciously to improve performance.
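For deterministic tool calls, the caching recommendation can be as simple as memoization. A sketch with `functools.lru_cache`; the tool and the call counter are illustrative:

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=128)
def expensive_tool(query: str) -> str:
    """A deterministic tool whose results are safe to cache."""
    calls["count"] += 1  # track how often the body actually runs
    return query.upper()

expensive_tool("hello")
expensive_tool("hello")  # served from cache, body does not run again
print(calls["count"])  # 1
```

Memoization is only safe when the tool is a pure function of its arguments; calls with side effects or time-dependent results (e.g. a live weather lookup) should not be cached this way, or should use a short TTL.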

Security Considerations

  • Permission Control: Implements appropriate permission checks.
  • Input Validation: Strictly validates input parameters.
  • Logging: Logs key operations and errors.
  • Secret Management: Securely manages API keys and credentials.
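Strict input validation can be enforced before a call ever reaches a tool. A stdlib sketch; the schema format here is illustrative, not MCP's JSON Schema:

```python
def validate_input(schema: dict, arguments: dict) -> None:
    """Reject missing, unexpected, or wrongly-typed arguments."""
    unexpected = set(arguments) - set(schema)
    if unexpected:
        raise ValueError(f"unexpected arguments: {sorted(unexpected)}")
    for name, expected_type in schema.items():
        if name not in arguments:
            raise ValueError(f"missing argument: {name}")
        if not isinstance(arguments[name], expected_type):
            raise TypeError(f"{name} must be {expected_type.__name__}")

schema = {"location": str}
validate_input(schema, {"location": "NYC"})  # passes silently
try:
    validate_input(schema, {"location": 42})
except TypeError as exc:
    print(exc)  # location must be str
```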

Summary

The LangChain MCP Adapters project is an important infrastructure component in the AI tool ecosystem, successfully bridging the gap between the MCP protocol and the LangChain framework. Through this adapter, developers can:

🎯 Core Value

  • Unified Tool Ecosystem: Unifies AI tools under different protocols into the LangChain ecosystem.
  • Development Efficiency Improvement: Significantly reduces the complexity and workload of AI agent development.
  • Functional Extensibility: Easily integrates various third-party tools and services.
  • Standardized Support: Follows industry standards, ensuring long-term compatibility.

🌟 Project Highlights

  • Lightweight Design: Minimizes dependencies, easy to integrate and deploy.
  • Complete Functionality: Covers the entire process from tool conversion to agent execution.
  • Production-Ready: Provides enterprise-level stability and performance.
  • Community-Driven: Active open-source community and continuous feature iteration.

🔮 Application Prospects

With the rapid development of AI agent technology, tool integration and interoperability will become increasingly important. As a bridge connecting different AI tool ecosystems, LangChain MCP Adapters will play a key role in future AI application development. It not only simplifies current development processes but also lays a solid foundation for building more intelligent and feature-rich AI agent applications.

Whether you are an AI application developer, enterprise technology decision-maker, or researcher, this project is worth exploring and applying. It represents the best practices in the field of AI tool integration and will help you build more powerful and flexible AI solutions.