LangGraph Official Tutorial: Chatbots, Part 5
5. Add human-in-the-loop controls
Agents can be unreliable and may need human input to successfully accomplish tasks. Similarly, for some actions you may want to require human approval before running, to ensure everything runs as intended.
LangGraph's persistence layer supports human-in-the-loop workflows, allowing execution to pause and resume based on user feedback. The primary interface for this capability is the interrupt function. Calling interrupt inside a node will pause execution. Execution can then be resumed, together with new input from a human, by passing in a Command.
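To build intuition before wiring this into a graph, the pause/resume mechanics can be sketched with a plain Python generator. This is only an analogy, not the LangGraph API: `yield` pauses the function the way interrupt pauses a node, and `send()` resumes it with the human's answer the way resuming with a Command does.

```python
def node():
    # "interrupt": pause here and surface a payload to the caller
    human_response = yield {"query": "Need expert guidance"}
    # execution resumes at this exact point once a value is sent back in
    return f"Human said: {human_response}"

gen = node()
payload = next(gen)            # run until the pause; the payload surfaces to the operator
assert payload == {"query": "Need expert guidance"}

try:
    gen.send("We can help!")   # "resume": inject the human's answer
except StopIteration as stop:
    result = stop.value        # the node's return value after resuming

assert result == "Human said: We can help!"
```

Like a generator, an interrupted LangGraph node does not run in the background while paused; state is checkpointed, and work continues only when a resume value arrives.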
--- python ---
interrupt is ergonomically similar to Python's built-in input() function, with some caveats.
--- js ---
interrupt is ergonomically similar to Node.js's built-in readline.question() function, with some caveats.
1. Add the human_assistance tool
Starting with the existing code from the "Add memory to the chatbot" tutorial, add a human_assistance tool to the chatbot. This tool uses interrupt to receive information from a human.
Let's first select a chat model:
--- python ---
{% include-markdown "../../../snippets/chat_model_tabs.md" %}
--- js ---
// Add your API key here
process.env.ANTHROPIC_API_KEY = "YOUR_API_KEY";
We can now incorporate it into our `StateGraph` with an additional tool:
--- python ---
from typing import Annotated

from langchain_tavily import TavilySearch
from langchain_core.tools import tool
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


@tool
def human_assistance(query: str) -> str:
    """Request assistance from a human."""
    human_response = interrupt({"query": query})
    return human_response["data"]


tool = TavilySearch(max_results=2)
tools = [tool, human_assistance]
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    message = llm_with_tools.invoke(state["messages"])
    # Because we will be interrupting during tool execution,
    # we disable parallel tool calling to avoid repeating any
    # tool invocations when we resume.
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
--- js ---
import { interrupt, MessagesZodState } from "@langchain/langgraph";
import { ChatAnthropic } from "@langchain/anthropic";
import { TavilySearch } from "@langchain/tavily";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const humanAssistance = tool(
  async ({ query }) => {
    const humanResponse = interrupt({ query });
    return humanResponse.data;
  },
  {
    name: "humanAssistance",
    description: "Request assistance from a human.",
    schema: z.object({
      query: z.string().describe("Human readable question for the human"),
    }),
  }
);

const searchTool = new TavilySearch({ maxResults: 2 });
const tools = [searchTool, humanAssistance];

const llmWithTools = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest",
}).bindTools(tools);

async function chatbot(state: z.infer<typeof MessagesZodState>) {
  const message = await llmWithTools.invoke(state.messages);
  // Because we will be interrupting during tool execution,
  // we disable parallel tool calling to avoid repeating any
  // tool invocations when we resume.
  if (message.tool_calls && message.tool_calls.length > 1) {
    throw new Error("Multiple tool calls not supported with interrupts");
  }
  return { messages: message };
}