Fake LLM in LangChain
https://python.langchain.com.cn/docs/modules/model_io/models/llms/how_to/fake_llm
This content is based on LangChain’s official documentation (langchain.com.cn) and explains FakeLLM, a simulated LLM class for testing, in simplified terms. It preserves all of the original source code, examples, and knowledge points without additions or modifications.
1. What is FakeLLM?
FakeLLM is a mock LLM class designed for testing purposes.
- It simulates LLM calls without connecting to a real LLM API.
- You predefine a list of responses, and FakeLLM returns them in order when called.
- It’s useful for testing workflows (like agents) that rely on LLMs, without incurring API costs or waiting for real responses.
The example uses FakeListLLM (LangChain’s list-backed fake LLM class) to simulate an LLM that guides an agent to calculate “2 + 2”.
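As a quick illustration (my addition, not part of the original documentation), here is a minimal sketch of calling FakeListLLM on its own, assuming the import path shown in the next step; on older LangChain releases the call style may be fake("...") rather than fake.invoke("..."):
from langchain.llms.fake import FakeListLLM

# A fake LLM that always answers from this list, in order.
fake = FakeListLLM(responses=["first canned reply", "second canned reply"])
print(fake.invoke("any prompt"))      # -> first canned reply
print(fake.invoke("another prompt"))  # -> second canned reply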
2. Step 1: Import Required Modules
The code below imports all necessary LangChain classes—exactly as in the original documentation:
from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
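A version note (my addition, not from the original documentation): in newer LangChain releases the fake LLM classes live in separate packages, so if the import above fails, one of these paths may work instead, depending on your installed version:
# Alternative, version-dependent import paths (verify against your installation):
# from langchain_community.llms.fake import FakeListLLM
# from langchain_core.language_models import FakeListLLM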
3. Step 2: Prepare Tools and Predefined Responses
Step 2.1: Load Tools
We load the python_repl tool (used to execute Python code) for the agent:
tools = load_tools(["python_repl"])
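One practical detail worth checking (my observation, not from the original documentation): the agent matches the text after “Action:” in the fake responses against the tool’s name, so the two must agree. Depending on your LangChain version the tool may be registered as “Python REPL” or “Python_REPL”, which you can confirm like this:
# Print the loaded tool's name and description; the name must match the
# "Action:" line used in the predefined responses below.
print(tools[0].name)
print(tools[0].description)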
Step 2.2: Define Predefined Responses
We create a list of responses that FakeListLLM will return sequentially. These responses guide the agent to use the Python REPL and return the final answer:
responses = ["Action: Python REPL\nAction Input: print(2 + 2)", "Final Answer: 4"]
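To make the mapping explicit (my annotation, not from the original documentation): each element of responses is consumed by one LLM call inside the agent loop, and the text must follow the ReAct format that the zero-shot agent parses. A hypothetical variant for a different question might look like this:
# Hypothetical illustration only -- not part of the original example.
responses_variant = [
    # Call 1: the agent reads the tool name after "Action:" and the tool
    # input after "Action Input:", runs the tool, and records the observation.
    "Action: Python REPL\nAction Input: print(3 * 7)",
    # Call 2: the "Final Answer:" prefix tells the agent to stop and return.
    "Final Answer: 21",
]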
4. Step 3: Initialize FakeListLLM
Create an instance of FakeListLLM and pass the predefined responses list:
llm = FakeListLLM(responses=responses)
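If you want to sanity-check the fake LLM before wiring it into the agent (my addition, not from the original documentation), you can call it directly and watch it walk through the list; note that these calls advance the fake LLM’s internal position, so re-create the instance before handing it to the agent:
# Each call returns the next predefined response in order.
# On older LangChain versions, use llm("...") instead of llm.invoke("...").
print(llm.invoke("first call"))   # Action: Python REPL\nAction Input: print(2 + 2)
print(llm.invoke("second call"))  # Final Answer: 4
llm = FakeListLLM(responses=responses)  # reset before building the agent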
5. Step 4: Initialize the Agent
Combine the fake LLM, tools, and agent type to create an agent. The code is identical to the original:
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
6. Step 5: Run the Agent
Test the agent with the query “whats 2 + 2”. The fake LLM will return the predefined responses, guiding the agent to calculate the result.
Code:
agent.run("whats 2 + 2")
Output (terminal color codes from the original have been stripped for readability):
> Entering new AgentExecutor chain...
Action: Python REPL
Action Input: print(2 + 2)
Observation: 4

Thought:Final Answer: 4

> Finished chain.
'4'
Key Takeaways
- FakeListLLM uses predefined responses to simulate LLM behavior.
- It’s ideal for testing LLM-dependent workflows (like agents) quickly and cost-free (see the test sketch below).
- The agent follows the fake LLM’s guided responses to complete the task (e.g., using the Python REPL to calculate 2 + 2).
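As a closing illustration (not part of the original documentation), here is a minimal sketch of how FakeListLLM might back a unit test; the file and test names are hypothetical, and the import paths assume the same LangChain version as above:
# test_fake_llm_agent.py -- hypothetical test module
from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools, initialize_agent, AgentType

def test_agent_returns_four():
    # The fake LLM drives the agent deterministically: no API key, no network.
    responses = ["Action: Python REPL\nAction Input: print(2 + 2)", "Final Answer: 4"]
    llm = FakeListLLM(responses=responses)
    tools = load_tools(["python_repl"])
    agent = initialize_agent(
        tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False
    )
    assert agent.run("whats 2 + 2") == "4"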
