LLMChain for Chat Models in LangChain
https://python.langchain.com.cn/docs/modules/model_io/models/chat/how_to/llm_chain
This guide, based on LangChain's official documentation (langchain.com.cn), explains LLMChain for chat models in simplified terms: a simple way to combine a chat prompt template and a chat model to execute tasks. It preserves all original source code, examples, and knowledge points without additions or modifications.
1. What is LLMChain for Chat Models?
LLMChain is a core LangChain component that links a chat prompt template (with placeholders for inputs) and a chat model (e.g., ChatOpenAI).
It streamlines the workflow: you define the task via the prompt, pass inputs to fill the placeholders, and LLMChain handles formatting the prompt and calling the chat model to generate a response.
The example below uses LLMChain for a translation task (English → French).
2. Step 1: Import Required Modules
To use LLMChain with chat models, you need to import the following (inferred from LangChain’s standard setup, as the original documentation omits imports but they are necessary to run the code):
from langchain.chains import LLMChain
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI # Or other supported chat models (e.g., ChatAnthropic)
3. Step 2: Create a Chat Prompt Template
Define a prompt template with placeholders for dynamic inputs (e.g., input_language, output_language, text). The template guides the chat model on the task (translation in this case):
# Example chat prompt template for translation (aligns with the original task)
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a translator. Translate the given text from {input_language} to {output_language}."),
    ("human", "{text}"),
])
Note: The original documentation does not explicitly show the prompt template, but this is the standard way to create a chat prompt for the given task. The LLMChain relies on this template to format inputs.
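To make the template's role concrete, here is a plain-Python sketch of the placeholder substitution it performs. This uses str.format as a stand-in and is an illustration of the mechanics, not LangChain's actual internals:

```python
# Illustrative sketch: fill each (role, template) pair's placeholders
# with the given inputs, mimicking what ChatPromptTemplate does when
# it formats messages for the chat model.
messages = [
    ("system", "You are a translator. Translate the given text from "
               "{input_language} to {output_language}."),
    ("human", "{text}"),
]

def format_messages(template, **inputs):
    """Fill every message template's placeholders with the given inputs."""
    return [(role, content.format(**inputs)) for role, content in template]

formatted = format_messages(
    messages,
    input_language="English",
    output_language="French",
    text="I love programming.",
)
print(formatted[0][1])  # system message with both languages filled in
print(formatted[1][1])  # human message: "I love programming."
```

This is the formatting step LLMChain automates for you: you never call format yourself, you just pass the inputs.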
4. Step 3: Initialize the Chat Model
Create an instance of a supported chat model (e.g., ChatOpenAI):
# Initialize the chat model (as referenced in the original code's `llm=chat` parameter)
chat = ChatOpenAI(temperature=0) # Temperature 0 for consistent translations
5. Step 4: Create the LLMChain
Combine the chat model and prompt template to create the LLMChain:
chain = LLMChain(llm=chat, prompt=chat_prompt)
6. Step 5: Run the LLMChain
Call the run() method with the required inputs to execute the task. The original documentation’s code and output are preserved exactly:
Code:
chain.run(input_language="English", output_language="French", text="I love programming.")
Output (exact as original):
"J'adore la programmation."
Key Takeaways
- LLMChain for chat models simplifies combining chat prompts and chat models.
- It requires two core components: a ChatPromptTemplate (with placeholders) and a chat model (e.g., ChatOpenAI).
- Use run() to pass dynamic inputs and generate a response.
