
LangChain Few-Shot Prompt Templates (Part 2)

https://python.langchain.com.cn/docs/modules/model_io/prompts/prompt_templates/few_shot_examples

This demo shows how the example selector picks the most relevant example to help the LLM answer a user’s question effectively.

Step 1: Understand What We’re Building

We’ll create a tool that:

  • Uses “semantic similarity” to find the most relevant example for a user’s question (instead of using all examples).
  • Teaches the LLM to answer questions by first asking follow-ups (just like the examples).
  • Produces a clear, step-by-step answer to a real user question.

Complete Code Demo

# 1. Import tools we need (from the original text's Part 2)
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
import os

# 2. Set your OpenAI API key (get one from https://platform.openai.com/)
os.environ["OPENAI_API_KEY"] = "your-openai-api-key-here"

# 3. Define our few-shot examples (same as original text)
examples = [
    {
        "question": "Who lived longer, Muhammad Ali or Alan Turing?",
        "answer": """
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
"""},{"question": "When was the founder of craigslist born?","answer": 
"""
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
"""},{"question": "Who was the maternal grandfather of George Washington?","answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
"""},{"question": "Are both the directors of Jaws and Casino Royale from the same country?","answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
"""}
]# 4. Create the "example formatter" (how each example is displayed)
example_prompt = PromptTemplate(input_variables=["question", "answer"],template="Question: {question}\n{answer}"  # Format: "Question: X\nAnswer: Y"
)# 5. Create the Example Selector (the key part from Part 2!)
# This finds the example most similar to the user's question
example_selector = SemanticSimilarityExampleSelector.from_examples(examples,  # Our list of examplesOpenAIEmbeddings(),  # Converts text to "meaning numbers" (to check similarity)Chroma,  # Stores these numbers to quickly find matchesk=1  # Pick only the 1 most similar example
)# 6. Create the few-shot prompt template (using the selector)
prompt = FewShotPromptTemplate(example_selector=example_selector,  # Use the selector instead of all examplesexample_prompt=example_prompt,  # How to format the selected examplesuffix="Question: {input}",  # The user's question at the endinput_variables=["input"]  # The user's question is called "input"
)# 7. Define the user's question (let's ask something similar to our examples)
user_question = "Who was the paternal grandfather of George Washington?"# 8. Generate the full prompt (with only the most relevant example)
full_prompt = prompt.format(input=user_question)
print("=== Generated Prompt (with 1 relevant example) ===")
print(full_prompt)
print("\n=== LLM's Answer ===")# 9. Use the LLM to generate a response
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)  # "temperature=0" = consistent answers
response = llm.predict(full_prompt)
print(response)

What You’ll See When You Run It

=== Generated Prompt (with 1 relevant example) ===
Question: Who was the maternal grandfather of George Washington?

Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball

Question: Who was the paternal grandfather of George Washington?

=== LLM's Answer ===
Are follow up questions needed here: Yes.
Follow up: Who was the father of George Washington?
Intermediate answer: The father of George Washington was Augustine Washington.
Follow up: Who was the father of Augustine Washington?
Intermediate answer: The father of Augustine Washington was Lawrence Washington.
So the final answer is: Lawrence Washington

Simple Explanation

  1. Example Selector: Instead of shoving all 4 examples into the prompt, we use a “smart selector.” It checks which example is most like the user’s question (using “meaning numbers” from the embedding model) and only includes that one. This keeps the prompt short and focused.

  2. How It Works for Our Question: The user asked about George Washington’s paternal grandfather. The selector noticed that our 3rd example was about his maternal grandfather (a very similar topic!) and picked that one; you can verify this with the small check sketched after this list.

  3. LLM Follows the Pattern: The LLM sees the selected example and copies its style: first asking follow-ups (“Who was George Washington’s father?”), then finding intermediate answers, and finally giving a clear final answer.
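
If you want to see point 2 for yourself, you can ask the selector directly which example it would choose. This is a minimal optional check that reuses the example_selector and user_question defined in the demo above; select_examples() is the standard LangChain example-selector call and returns the chosen example dict(s):

# Optional check: which example does the selector pick for our question?
selected = example_selector.select_examples({"question": user_question})
for example in selected:
    print("Selected example question:", example["question"])

With k=1 this should print the maternal-grandfather example, matching the behavior described above.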

Why This Matters

  • Faster & More Accurate: The LLM doesn’t waste time reading irrelevant examples.
  • Scalable: Even if you have 100 examples, the selector still picks only the single best one, so the prompt never gets cluttered (see the k tweak sketched below).
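
For instance, if you ever want the two closest examples instead of one, only the k argument changes; everything else in the demo stays the same. This is a hypothetical tweak, not part of the original code:

# Same selector as before, but keep the 2 most similar examples
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),
    Chroma,
    k=2,  # include the two best matches in the prompt
)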

Just replace "your-openai-api-key-here" with your actual API key, and you’re good to go!
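
If you would rather not hardcode the key in the script, one common alternative (not part of the original demo) is to export OPENAI_API_KEY in your shell, drop the os.environ assignment, and add a small guard so the script fails early with a clear message:

import os

# Assumes you ran `export OPENAI_API_KEY=...` in your shell beforehand
if "OPENAI_API_KEY" not in os.environ:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running this script.")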
