The simplest chatbot from the LangGraph official docs
1. Install dependencies
Make sure you have the following libraries installed:
pip install langchain langgraph langchain-community
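The later examples additionally use Tavily search (langchain_tavily), the Tongyi model (backed by the dashscope SDK), and rich for console output. Depending on your environment you may also need something along these lines (verify the package names against your setup):

pip install langchain-tavily dashscope rich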
2. Define the conversation state
In LangGraph, the state (State) is the data structure passed between nodes.
Here we use messages to store the conversation history, and Annotated + add_messages to define the merge rule.
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]
add_messages tells LangGraph that when multiple nodes write to messages, the new values should be appended rather than overwrite the old ones. That is what keeps the full conversation history intact.
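To see the append behavior in isolation, you can call add_messages directly as a reducer; a quick sketch (the message contents are just placeholders):

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

# add_messages(left, right) returns the merged list: new messages are
# appended (and assigned ids) instead of replacing the existing ones
merged = add_messages([HumanMessage("hi")], [AIMessage("hello!")])
print([m.content for m in merged])  # ['hi', 'hello!']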
3. Define the chat node
Our core node simply calls the LLM (Tongyi Qianwen):
You can swap in a different LLM here.
from langchain_community.chat_models import ChatTongyi

llm = ChatTongyi(model="qwen-plus", api_key="sk-xxxxxx")

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}
Here:
Input: state["messages"] (the conversation history)
Output: a new message generated by the LLM
LangGraph automatically merges it into messages.
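Since a node is just a function from the current state to a partial state update, you can sanity-check it without building a graph at all; a small sketch (the one-turn input dict is hypothetical):

# Calling the node directly returns {"messages": [AIMessage(...)]};
# inside a graph, LangGraph would merge this into the state via add_messages.
update = chatbot({"messages": [{"role": "user", "content": "Hello!"}]})
print(update["messages"][-1].content)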
4. Build the graph
The core of LangGraph is nodes and edges:
from langgraph.graph import StateGraph, START, END

graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)graph = graph_builder.compile()
What this code means:
There is a single node named chatbot
The flow runs START → chatbot → END
graph.compile() returns a runnable graph
You can inspect it as an ASCII diagram:
print(graph.get_graph().draw_ascii())
Output:
START --> chatbot --> END
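(Note: draw_ascii() relies on the grandalf package; if the call raises an ImportError, install it with pip install grandalf. The actual output draws boxes and arrows around the __start__, chatbot, and __end__ nodes; the line above is a simplified view.)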
5. Run it: streaming output
We want this to feel like a real chat: after the user types something, the model's response comes back step by step instead of all at once. That is what graph.stream is for:
def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            msg = value["messages"][-1].content
            print(f"🤖 Assistant: {msg}")
Each event is emitted once per executed node; we pull out the latest message and print it.
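With the default stream mode, each event maps a node name to that node's state update, e.g. {"chatbot": {"messages": [...]}}. If you don't need streaming, graph.invoke returns the final state in a single call; a quick sketch:

# One-shot call: returns the full final state instead of per-node events
final_state = graph.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
print(final_state["messages"][-1].content)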
6. Main loop
Finally, a simple REPL:
while True:
    try:
        user_input = input("🧑 User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input)
    except Exception:
        # fallback: run once automatically in environments without input()
        user_input = "What do you know about LangGraph?"
        print(f"User: {user_input}")
        stream_graph_updates(user_input)
        break
Sample run:
🧑 User: how are you
🤖 Assistant: Hi there! While I don't experience feelings in the way humans do, I'm always excited to engage in conversations and help out! ٩(◕‿◕。)۶ I love learning new things through our chats. What's on your mind today?
🧑 User: I am harry , who are you
🤖 Assistant: Hello Harry! I'm Qwen, a large-scale language model developed by Tongyi Lab. It's nice to meet you. How can I assist you today?
🧑 User: quit
Goodbye!
Other examples
1. A tool-calling example
import os
from typing import Annotated
from typing_extensions import TypedDict
from langchain_community.chat_models import ChatTongyi
from langchain_tavily import TavilySearch
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

llm = ChatTongyi(model="qwen-plus", api_key="YOUR_API_KEY")
os.environ["TAVILY_API_KEY"] = "YOUR_TAVILY_KEY"

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

graph = graph_builder.compile()

# Print the graph structure
print(graph.get_graph().draw_ascii())

def stream_graph_updates(user_input: str):
    last_msg = None
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            msg = value["messages"][-1]
            # print(msg)
            last_msg = msg.content
    if last_msg:
        print(f"🤖 Assistant: {last_msg}")

while True:
    try:
        user_input = input("🧑 User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input)
    except Exception:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
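tools_condition is a prebuilt router: it inspects the last message and returns "tools" when it contains tool calls, otherwise it routes to END. A hand-rolled equivalent would look roughly like this sketch (the same logic, not the library's actual source):

from langgraph.graph import END

def route_tools(state: State):
    # If the LLM's last message requested a tool call, run the "tools" node;
    # otherwise the turn is finished.
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):
        return "tools"
    return END

# Hypothetical wiring that would replace tools_condition:
# graph_builder.add_conditional_edges("chatbot", route_tools, {"tools": "tools", END: END})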
Sample run:
🧑 User: what function do you have
🤖 Assistant: I have access to a function called `tavily_search`, which is a powerful search engine tool. It allows me to perform detailed searches with various filters and parameters, such as:
- Searching for information on specific topics or queries.
- Filtering results by date (specific ranges, the past day, week, month, or year).
- Including or excluding specific domains.
- Fetching images or favicon icons along with results.
- Adjusting the depth of the search (basic or advanced).
Let me know what you'd like to explore, and I can use this function to find the information you need!
🧑 User: what is the max version of langgraph
🤖 Assistant: The latest version of LangGraph.js is **v0.3.0**, which includes features like node/task caching, deferred nodes, and the `isInterrupted()` method to check if the state contains an interrupt.
If you're referring to **LangGraph Server**, there is a standalone version (Lite) that supports up to **1 million nodes executed per year**.
For more details, you can check:
- [LangGraph.js Version History](https://langchain-ai.github.io/langgraphjs/versions/)
- [LangGraph Server (Lite)](https://forum.langchain.com/t/langgraph-standalone-container-1-million-nodes-execution-limit/484)
🧑 User: quit
Goodbye!
2. An example with memory
from typing import Annotated
from typing_extensions import TypedDict
from langchain_community.chat_models import ChatTongyi
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import InMemorySaver
from rich.console import Console

llm = ChatTongyi(model="qwen-plus", api_key="YOUR_API_KEY")

class State(TypedDict):
    messages: Annotated[list, add_messages]

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)

memory = InMemorySaver()
graph = graph_builder.compile(checkpointer=memory)

# rich console
console = Console()

config = {"configurable": {"thread_id": "1"}}

# streaming output
def stream_graph_updates(user_input: str):
    for event in graph.stream(
        {"messages": [{"role": "user", "content": user_input}]},
        config,
        # stream_mode="values",
    ):
        for value in event.values():
            msg = value["messages"][-1].content
            print(f"🤖 Assistant: {msg}")

# main loop
while True:
    try:
        user_input = input("🧑 User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            console.print("[red]Goodbye![/red]")
            break
        stream_graph_updates(user_input)
    except Exception:
        # fallback: run once automatically in environments without input()
        user_input = "What do you know about LangGraph?"
        console.print(f"User: {user_input}")
        stream_graph_updates(user_input)
        break
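The thread_id inside config is what scopes the saved history: reuse the same id to continue a conversation across turns, or switch to a new id for a fresh one. You can also read back what the checkpointer has stored; a quick sketch:

# Inspect the checkpointed state for this thread
snapshot = graph.get_state(config)
print([m.content for m in snapshot.values["messages"]])

# A different thread_id starts with an empty history (hypothetical id)
other_config = {"configurable": {"thread_id": "2"}}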
3. A human-in-the-loop example
import os
from typing import Annotated
from typing_extensions import TypedDict
from langchain_core.tools import tool
from langchain_tavily import TavilySearch
from langchain_community.chat_models import ChatTongyi
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt

llm = ChatTongyi(model="qwen-plus", api_key="YOUR_API_KEY")
os.environ["TAVILY_API_KEY"] = "YOUR_TAVILY_API_KEY"

class State(TypedDict):
    messages: Annotated[list, add_messages]

@tool
def human_assistance(query: str) -> str:
    """Request assistance from a human."""
    human_response = interrupt({"query": query})
    return human_response["data"]

graph_builder = StateGraph(State)

tool = TavilySearch(max_results=2)
tools = [tool, human_assistance]
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    message = llm_with_tools.invoke(state["messages"])
    # Because we will be interrupting during tool execution,
    # we disable parallel tool calling to avoid repeating any
    # tool invocations when we resume.
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

# interrupt() needs a checkpointer so the paused state can be resumed
memory = InMemorySaver()
graph = graph_builder.compile(checkpointer=memory)

config = {"configurable": {"thread_id": "1"}}

def stream_graph_updates(user_input: str):
    events = graph.stream(
        {"messages": [{"role": "user", "content": user_input}]},
        config,
        stream_mode="values",
    )
    for event in events:
        if "messages" in event:
            event["messages"][-1].pretty_print()

while True:
    try:
        user_input = input("🧑 User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input)
    except Exception:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
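Command is imported above but not yet used: it is how you resume after an interrupt. When the model calls human_assistance, the stream stops at interrupt(); a human then supplies an answer and you resume the same thread with Command. A sketch (the resume payload must match what human_assistance expects, i.e. a dict with a "data" key):

# Resume the interrupted thread with the human's answer
human_command = Command(resume={"data": "A human expert suggests checking the official LangGraph docs."})

events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()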