LangGraph实战:MCP + SSE实现远程调用MCP Server
Overview
This article shows how to call an MCP Server from LangGraph, using two transports: local invocation via stdio and remote invocation via SSE.
Environment Configuration
Adjust the URLs below to match your own IP addresses. These settings live in a .env file, which the client loads at startup.
LLM_TEMPERATURE=0.2
OLLAMA_MODEL='llama3.2:latest'
OLLAMA_BASE_URL='http://127.0.0.1:11434'
PY_PROJECT_DIR='/root/do_langmcp/'
SSE_BASE_URL='http://172.16.1.3:8000/sse'
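Before wiring up any servers, it can help to read and sanity-check these variables up front. The sketch below is illustrative (the helper name `load_settings` and the fallback defaults are assumptions, not part of the original setup); it mirrors the .env keys above:

```python
import os

def load_settings() -> dict:
    # Hypothetical helper: read the .env keys with illustrative defaults.
    settings = {
        'llm_temperature': float(os.getenv('LLM_TEMPERATURE', '0.2')),
        'ollama_model': os.getenv('OLLAMA_MODEL', 'llama3.2:latest'),
        'ollama_base_url': os.getenv('OLLAMA_BASE_URL', 'http://127.0.0.1:11434'),
        'sse_base_url': os.getenv('SSE_BASE_URL', 'http://172.16.1.3:8000/sse'),
    }
    # Fail fast if the SSE URL does not point at an /sse endpoint.
    if not settings['sse_base_url'].endswith('/sse'):
        raise ValueError(f"SSE_BASE_URL should end with /sse: {settings['sse_base_url']}")
    return settings

settings = load_settings()
```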
Define the MCP Server
Develop an MCP Server exposed over the SSE transport, using Starlette to define how requests are handled. This lets a client access the MCP Server remotely.
```python
import logging

import uvicorn
from mcp.server.fastmcp import FastMCP
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.routing import Mount, Route

logging.basicConfig(format='%(levelname)s %(asctime)s - %(message)s', level=logging.INFO)
logger = logging.getLogger('interest_mcp_server2')

mcp = FastMCP('InterestCalculator')


@mcp.tool()
def yearly_simple_interest(principal: float, rate: float) -> float:
    """Tool to compute simple interest for a year."""
    logger.info(f'Simple interest -> Principal: {principal}, Rate: {rate}')
    return principal * rate / 100.0


@mcp.tool()
def yearly_compound_interest(principal: float, rate: float) -> float:
    """Tool to compute compound interest for a year."""
    logger.info(f'Compound interest -> Principal: {principal}, Rate: {rate}')
    return principal * (1 + rate / 100.0)


if __name__ == '__main__':
    logger.info('Starting the interest calculator MCP server using SSE ...')

    sse = SseServerTransport('/messages/')

    async def handle_sse(request: Request):
        async with sse.connect_sse(request.scope, request.receive, request._send) as (read_stream, write_stream):
            await mcp._mcp_server.run(read_stream, write_stream,
                                      mcp._mcp_server.create_initialization_options())

    app = Starlette(routes=[
        Route('/sse', endpoint=handle_sse),
        Mount('/messages/', app=sse.handle_post_message),
    ])
    uvicorn.run(app, host='172.16.1.3', port=8000)
```
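Outside the MCP machinery, the two tool bodies are plain arithmetic. A quick standalone check of the same formulas, for the principal of 1000 at rate 3.75 used later in the client:

```python
def yearly_simple_interest(principal: float, rate: float) -> float:
    # Same formula as the MCP tool: interest earned in one year.
    return principal * rate / 100.0

def yearly_compound_interest(principal: float, rate: float) -> float:
    # Same formula as the MCP tool: principal grown by one year of interest.
    return principal * (1 + rate / 100.0)

print(yearly_simple_interest(1000, 3.75))    # 37.5
print(yearly_compound_interest(1000, 3.75))  # 1037.5
```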
Define shell_mcp_server.py
Define an MCP Server that executes local shell commands. This server runs locally over the stdio transport.
```python
import logging
import subprocess

from mcp.server.fastmcp import FastMCP

logging.basicConfig(format='%(levelname)s %(asctime)s - %(message)s', level=logging.INFO)
logger = logging.getLogger('shell_mcp_server')

mcp = FastMCP('ShellCommandExecutor')


@mcp.tool()
def execute_shell_command(command: str) -> str:
    """Tool to execute shell commands"""
    logger.info(f'Executing shell command: {command}')
    try:
        # check=True raises CalledProcessError on a non-zero exit code,
        # so no separate returncode check is needed.
        result = subprocess.run(command, shell=True, check=True, text=True, capture_output=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        logger.error(e)
        return f'Error executing shell command - {command}'


if __name__ == '__main__':
    logger.info('Starting the shell executor MCP server ...')
    mcp.run(transport='stdio')
```
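The core of the tool is a single `subprocess.run` call. Stripped of the MCP wrapper, it behaves as follows (a minimal sketch; the helper name `run_command` is ours):

```python
import subprocess

def run_command(command: str) -> str:
    # shell=True lets the string be interpreted by the shell; check=True
    # raises CalledProcessError on a non-zero exit code instead of returning it.
    try:
        result = subprocess.run(command, shell=True, check=True, text=True, capture_output=True)
        return result.stdout
    except subprocess.CalledProcessError:
        return f'Error executing shell command - {command}'

print(run_command('echo hello'))  # hello
```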
Define the MCP Client
The MCP Client accesses MCP Servers in two ways: remotely via the SSE protocol, and locally via stdio. The client is defined with multiple tools, where each tool can come from either a local stdio server or a remote SSE server.
```python
import asyncio
import logging
import os

from dotenv import load_dotenv, find_dotenv
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent

logging.basicConfig(format='%(levelname)s %(asctime)s - %(message)s', level=logging.INFO)
logger = logging.getLogger('multi_mcp_client2')

load_dotenv(find_dotenv())

llm_temperature = float(os.getenv('LLM_TEMPERATURE'))
ollama_model = os.getenv('OLLAMA_MODEL')
ollama_base_url = os.getenv('OLLAMA_BASE_URL')
py_project_dir = os.getenv('PY_PROJECT_DIR')
sse_base_url = os.getenv('SSE_BASE_URL')

ollama_chat_llm = ChatOllama(base_url=ollama_base_url, model=ollama_model, temperature=llm_temperature)


async def main():
    client = MultiServerMCPClient({
        'InterestCalculator': {
            'url': sse_base_url,
            'transport': 'sse'
        },
        'ShellCommandExecutor': {
            'command': 'python',
            'args': [py_project_dir + 'shell_mcp_server.py'],
            'transport': 'stdio'
        }
    })
    tools = await client.get_tools()
    logger.info(f'Loaded Multiple MCP Tools -> {tools}')

    # Initialize the ReAct agent with multiple tools: the local stdio MCP
    # server and the remote MCP server reached over SSE.
    agent = create_react_agent(ollama_chat_llm, tools)

    # Case 1: definition of compound interest
    agent_response_1 = await agent.ainvoke({'messages': 'explain the definition of compound interest'})
    logger.info(agent_response_1['messages'][::-1])

    # Case 2: compound interest calculation
    agent_response_2 = await agent.ainvoke({'messages': 'what is the compound interest for a principal of 1000 at rate 3.75 ?'})
    logger.info(agent_response_2['messages'][::-1])

    # Case 3: execute a shell command
    agent_response_3 = await agent.ainvoke({'messages': 'Execute the free shell command to find how much system memory'})
    logger.info(agent_response_3['messages'][::-1])


if __name__ == '__main__':
    asyncio.run(main())
```
Summary
Calling MCP Servers from LangGraph is an effective way to extend an Agent's capabilities. The MCP protocol standardizes these calls, making it easier to plug in a wide range of external MCP Servers.
References
- https://langchain-ai.github.io/langgraph/concepts/mcp/
- https://modelcontextprotocol.io/overview