
This article reimplements the PromptTemplate examples from *Learning LangChain* with Gemini, replacing the book's OpenAI API calls. Free-tier access is hard to beat!

Debugging Steps

First, let's verify locally that we can call the Gemini API:

import getpass
import os

if "GOOGLE_API_KEY" not in os.environ:
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Google AI API key: ")

import requests

# Route traffic through a local proxy so Google services are reachable
os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:7890'

r = requests.get("https://www.google.com")
print(r.status_code)  # 200 means the proxy is working

Output:

200
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash-001",  # or another available model
)
print(llm.invoke("Hello! Are you connected now?").content)

Output:

Hello! I'm always online and ready to help. If you have any questions or need anything, just let me know.

Comparing with Example 1 from the Book

Here the book uses the OpenAI API:

# OpenAI API
from langchain_openai.chat_models import ChatOpenAI
from langchain_core.messages import HumanMessage
model = ChatOpenAI()
prompt = [HumanMessage('What is the capital of France?')]
completion = model.invoke(prompt)

Output:

AIMessage(content='The capital of France is Paris.')

From here on, everything is replaced with Gemini API calls, and all of the code runs locally!

# Gemini API
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash-001",  # or another available model
)
# prompt = ['What is the capital of France?']
prompt = 'What is the capital of France?'  # both forms work, but [HumanMessage('What is the capital of France?')] raises an error here
print(llm.invoke(prompt))

Output:

content='The capital of France is **Paris**.' additional_kwargs={} response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash-001', 'safety_ratings': []} id='run-758b65ec-13af-4cde-a3e7-1aa7991e98ba-0' usage_metadata={'input_tokens': 7, 'output_tokens': 9, 'total_tokens': 16, 'input_token_details': {'cache_read': 0}}
# You can also write
print(llm.invoke(prompt).content)

Output:

The capital of France is **Paris**.

We have now answered a prompt with Gemini through a simple llm.invoke() call. But what if we want to pass a more detailed prompt, such as the following?

Answer the question based on the context below. If the question cannot be answered using the information provided answer with “I don’t know”.

Context: The most recent advancements in NLP are being driven by Large Language Models (LLMs). These models outperform their smaller counterparts and have become invaluable for developers who are creating applications with NLP capabilities. Developers can tap into these models through Hugging Face’s transformers library, or by utilizing OpenAI and Cohere’s offerings through the openai and cohere libraries respectively.

Question: Which model providers offer LLMs?

Answer:

In this prompt, the Context and Question are fixed. What if we want to supply those values dynamically?

Fortunately, LangChain provides prompt template interfaces that make it easy to build prompts with dynamic inputs. In Python:

from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template("""Answer the question based on the context below. If the question cannot be answered using the information provided answer with "I don't know".
Context: {context}
Question: {question}
Answer: """)

prompt = template.invoke({
    "context": "The most recent advancements in NLP are being driven by Large Language Models (LLMs). These models outperform their smaller counterparts and have become invaluable for developers who are creating applications with NLP capabilities. Developers can tap into these models through Hugging Face's `transformers` library, or by utilizing OpenAI and Cohere's offerings through the `openai` and `cohere` libraries respectively.",
    "question": "Which model providers offer LLMs?"
})
print(prompt)

Output:

text='Answer the question based on the context below. If the question cannot be answered using the information provided answer with "I don\'t know".\nContext: The most recent advancements in NLP are being driven by Large Language Models (LLMs). These models outperform their smaller counterparts and have become invaluable for developers who are creating applications with NLP capabilities. Developers can tap into these models through Hugging Face\'s `transformers` library, or by utilizing OpenAI and Cohere\'s offerings through the `openai` and `cohere` libraries respectively.\nQuestion: Which model providers offer LLMs?\nAnswer: '

What did this do? A quick explanation:

It uses LangChain's PromptTemplate to dynamically generate a prompt from any given context and question, which is perfect for posing questions to a large language model (like ChatGPT).

In detail:

template = PromptTemplate.from_template("""
Answer the question based on the context below...
Context: {context}
Question: {question}
Answer: """)

This is a template with blanks; {context} and {question} are the slots to be filled in later.

prompt = template.invoke({
    "context": "Some information about NLP...",
    "question": "Which model providers offer LLMs?"
})

The .invoke() method fills the real context and question into the template, producing the complete prompt.

You can think of a PromptTemplate as a form with blanks, and invoke() as filling in those blanks to produce a finished question card for the AI to answer.
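The fill-in-the-blank mechanics can be mimicked in plain Python with str.format; this is a minimal sketch of the idea, not LangChain's actual implementation, and the `fill` helper is a hypothetical name:

```python
# A toy "prompt template": {context} and {question} are the blanks
template_text = """Answer the question based on the context below. If the question cannot be answered using the information provided answer with "I don't know".
Context: {context}
Question: {question}
Answer: """

def fill(template: str, values: dict) -> str:
    """Mimics template.invoke(): fill the blanks with concrete values."""
    return template.format(**values)

prompt = fill(template_text, {
    "context": "Some information about NLP...",
    "question": "Which model providers offer LLMs?",
})
print(prompt)
```

After filling, no `{...}` placeholders remain; the result is an ordinary string ready to send to a model.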

Let's see how to feed this information flow into an LLM with LangChain:

from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

template = PromptTemplate.from_template("""Answer the question based on the context below. If the question cannot be answered using the information provided answer with "I don't know".
Context: {context}
Question: {question}
Answer: """)

llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash-001",  # or another available model
)

prompt = template.invoke({
    "context": "The most recent advancements in NLP are being driven by Large Language Models (LLMs). These models outperform their smaller counterparts and have become invaluable for developers who are creating applications with NLP capabilities. Developers can tap into these models through Hugging Face's `transformers` library, or by utilizing OpenAI and Cohere's offerings through the `openai` and `cohere` libraries respectively.",
    "question": "Which model providers offer LLMs?"
})
completion = llm.invoke(prompt)
print(completion.content)

Output:

OpenAI and Cohere

If you want to build an AI chat application, you can use a chat prompt template, which provides dynamic inputs keyed to the role of each chat message. First, in Python:

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ('system', 'Answer the question based on the context below. If the question cannot be answered using the information provided answer with "I don\'t know".'),
    ('human', 'Context: {context}'),
    ('human', 'Question: {question}'),
])

prompt = template.invoke({
    "context": "The most recent advancements in NLP are being driven by Large Language Models (LLMs). These models outperform their smaller counterparts and have become invaluable for developers who are creating applications with NLP capabilities. Developers can tap into these models through Hugging Face's `transformers` library, or by utilizing OpenAI and Cohere's offerings through the `openai` and `cohere` libraries respectively.",
    "question": "Which model providers offer LLMs?"
})
# `llm` is the ChatGoogleGenerativeAI instance created earlier
completion = llm.invoke(prompt)
print(completion.content)

Output:

OpenAI and Cohere are mentioned as LLM providers.

An intuitive takeaway: the .invoke() method really is like filling in blanks!

First, template.invoke() fills your input into the template; then llm.invoke(prompt) asks the LLM to "fill in" the answer to the prompt.

Note, however, that this code no longer uses PromptTemplate.from_template() but ChatPromptTemplate.from_messages(), which is designed specifically for multi-turn (chat-format) conversations; the template explicitly specifies system and human, defining distinct roles.
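The two-step flow (template.invoke() to fill the blanks, then llm.invoke() to answer) can be sketched as plain function composition; `fake_llm` below is a hypothetical stand-in for the real model, so no API call is needed:

```python
def fill_template(values: dict) -> str:
    """Step 1: fill the blanks (like template.invoke())."""
    return "Context: {context}\nQuestion: {question}\nAnswer: ".format(**values)

def fake_llm(prompt: str) -> str:
    """Step 2: a hypothetical stand-in for llm.invoke()."""
    if "Which model providers" in prompt:
        return "OpenAI and Cohere"
    return "I don't know"

# Compose the two invokes into a tiny pipeline
answer = fake_llm(fill_template({
    "context": "OpenAI and Cohere offer LLMs.",
    "question": "Which model providers offer LLMs?",
}))
print(answer)
```

The real LangChain pipeline has the same shape: the output of the template step becomes the input of the model step.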

Let's compare this with the earlier template:

First snippet (PromptTemplate) vs. this snippet (ChatPromptTemplate):

- Template type: suited to a single text prompt vs. designed for multi-turn (chat-format) conversation
- Construction: .from_template() vs. .from_messages()
- Message roles: none vs. explicitly specified system and human
- Format: one string with {} placeholder variables vs. multiple messages, each with its own role

Here's what this code does:

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ('system', 'Answer the question based on the context below...'),
    ('human', 'Context: {context}'),
    ('human', 'Question: {question}'),
])

This defines a chat-format template with two roles:

system: the system message, which sets the AI's behavior, e.g. "answer from the context; if you don't know, say you don't know".

human: the human user's messages, here one for the context and one for the question.

Then .invoke({…}) inserts the actual content:

prompt = template.invoke({
    "context": "...",
    "question": "Which model providers offer LLMs?"
})

The result is a set of chat-format messages, suitable for models that support a conversational structure (such as OpenAI's chat model interface).

In one sentence:

PromptTemplate: better suited to a single, complete text prompt.

ChatPromptTemplate: better suited to multi-turn chat models, with clearer structure and more capability.
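The difference in output shape can be illustrated in plain Python: a PromptTemplate yields one flat string, while a ChatPromptTemplate yields a list of role-tagged messages. This is a sketch of the structure only, not LangChain's real classes:

```python
# PromptTemplate-style result: one flat string
text_prompt = "Context: some facts\nQuestion: which providers?\nAnswer: "

# ChatPromptTemplate-style result: a list of (role, content) messages
chat_prompt = [
    ("system", 'Answer from the context; otherwise say "I don\'t know".'),
    ("human", "Context: some facts"),
    ("human", "Question: which providers?"),
]

print(type(text_prompt).__name__)  # str
for role, content in chat_prompt:
    print(f"{role}: {content}")
```

Chat models consume the second shape directly, which is why ChatPromptTemplate is the natural fit for conversational interfaces.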

Building on this new template, let's actually call the large language model (LLM) for a prediction:

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate

# both `template` and `llm` can be reused many times
template = ChatPromptTemplate.from_messages([
    ('system', 'Answer the question based on the context below. If the question cannot be answered using the information provided answer with "I don\'t know".'),
    ('human', 'Context: {context}'),
    ('human', 'Question: {question}'),
])

llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash-001",  # or another available model
)

# `prompt` and `completion` are the results of using template and model once
prompt = template.invoke({
    "context": "The most recent advancements in NLP are being driven by Large Language Models (LLMs). These models outperform their smaller counterparts and have become invaluable for developers who are creating applications with NLP capabilities. Developers can tap into these models through Hugging Face's `transformers` library, or by utilizing OpenAI and Cohere's offerings through the `openai` and `cohere` libraries respectively.",
    "question": "Which model providers offer LLMs?"
})
completion = llm.invoke(prompt)
print(completion)

Output:

content='Based on the context, OpenAI and Cohere offer LLMs.' additional_kwargs={} response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash-001', 'safety_ratings': []} id='run-d6e9e476-69c6-43d6-90e5-bfcb3553c9bc-0' usage_metadata={'input_tokens': 118, 'output_tokens': 15, 'total_tokens': 133, 'input_token_details': {'cache_read': 0}}

To summarize what this code does:

1. Create the template: ChatPromptTemplate defines the multi-turn conversation structure.
2. Fill in the content: the .invoke() method inserts the concrete context and question.
3. Call the model: the ChatGoogleGenerativeAI model receives the prompt and returns an answer.

That's everything about using ChatPromptTemplate. Likes and bookmarks welcome!
