DeepSeek Coroutine API Calls, vllm Inference, and Local vllm Deployment with llamafactory
Table of Contents
- Introduction
- Code in Practice
- Calling the Official API
- Asynchronous Calls with Coroutines
- Async Coroutines: Method 2
- vllm_infer
Introduction
Calling the DeepSeek API with coroutines, I found no noticeable speedup.
However, if DeepSeek is deployed locally (the local deployment must support asynchronous calls; I deployed it with llamafactory), the speedup from coroutines is quite significant.
Code in Practice
Calling the Official API
DeepSeek official documentation: https://api-docs.deepseek.com/zh-cn/
The Python code below calls the API synchronously, which is slow.
# Please install OpenAI SDK first: `pip3 install openai`
from openai import OpenAI

client = OpenAI(api_key="<DeepSeek API Key>", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ],
    stream=False
)

print(response.choices[0].message.content)
import os
from tqdm import tqdm
from dotenv import load_dotenv

# Load the API key from the .env file
load_dotenv()
api_key = os.getenv("deepseek_api")
client = OpenAI(api_key=api_key, base_url="https://api.deepseek.com")

queries = [
    "What is AI?",
    "How does deep learning work?",
    "Explain reinforcement learning.",
    "人工智能的应用领域有哪些?",
    "大模型是如何进行预训练的?",
    "什么是自监督学习,它有哪些优势?",
    "Transformer 结构的核心组件是什么?",
    "GPT 系列模型是如何生成文本的?",
    "强化学习在游戏 AI 中的应用有哪些?",
    "目前 AI 领域面临的主要挑战是什么?"
]

answer1 = []
for query in tqdm(queries):
    # Synchronous call with the official client, one query at a time
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": query},
        ],
        stream=False,
    )
    content = response.choices[0].message.content
    answer1.append(content)
To avoid leaking the API Key when sharing code, I store the key in a .env file and load it with load_dotenv.
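For reference, a minimal .env next to the script only needs to define the variable that os.getenv("deepseek_api") reads above; the value below is a placeholder, not a real key.

# .env
deepseek_api=sk-xxxxxxxxxxxxxxxx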
Asynchronous Calls with Coroutines
import asyncio
from typing import List
# from langchain.chat_models import ChatOpenAI
from langchain_openai import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage

# Initialize the model
llm = ChatOpenAI(
    model_name="deepseek-chat",
    # model_name="deepseek-reasoner",
    openai_api_key=api_key,
    openai_api_base="https://api.deepseek.com/v1",
)

async def call_deepseek_async(query: str, progress) -> str:
    messages = [
        SystemMessage(content="You are a helpful assistant"),
        HumanMessage(content=query),
    ]
    response = await llm.ainvoke(messages)
    progress.update(1)
    return response.content

async def batch_call_deepseek(queries: List[str], concurrency: int = 5) -> List[str]:
    semaphore = asyncio.Semaphore(concurrency)
    progress_bar = tqdm(total=len(queries), desc="Async:")

    async def limited_call(query: str):
        async with semaphore:
            return await call_deepseek_async(query, progress_bar)

    tasks = [limited_call(query) for query in queries]
    return await asyncio.gather(*tasks)

# In a Python script:
# responses = asyncio.run(batch_call_deepseek(queries, concurrency=10))

# In Jupyter:
responses = await batch_call_deepseek(queries, concurrency=10)
Note: an asynchronous call must be awaited with await.
Below is an alternative way to write a progress bar for coroutines with tqdm:
from tqdm.asyncio import tqdm_asyncio
results = await tqdm_asyncio.gather(*tasks)
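If you prefer that style, the batch function above can be rewritten with tqdm_asyncio.gather instead of a manually updated bar. A minimal sketch, reusing the llm, SystemMessage, and HumanMessage already defined above (batch_call_deepseek_v2 is just an illustrative name):

from tqdm.asyncio import tqdm_asyncio

async def batch_call_deepseek_v2(queries: List[str], concurrency: int = 5) -> List[str]:
    semaphore = asyncio.Semaphore(concurrency)

    async def limited_call(query: str):
        async with semaphore:
            messages = [
                SystemMessage(content="You are a helpful assistant"),
                HumanMessage(content=query),
            ]
            response = await llm.ainvoke(messages)
            return response.content

    tasks = [limited_call(query) for query in queries]
    # tqdm_asyncio.gather advances the progress bar as each task finishes
    return await tqdm_asyncio.gather(*tasks)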
With the async coroutine code above, calling the official DeepSeek API gave me no speedup; I suspect the official API applies rate limiting.
Against a DeepSeek model deployed locally with llamafactory, the same async coroutine code gives a clear speedup.
The script below for deploying DeepSeek locally with llamafactory and vllm only supports Linux.
Contents of deepseek_7B.yaml:
model_name_or_path: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
template: deepseek3
infer_backend: vllm
vllm_enforce_eager: true
trust_remote_code: true
Linux deployment script:
nohup llamafactory-cli api deepseek_7B.yaml > deepseek_7B.log 2>&1 &
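Once the server is up, the same LangChain client can be pointed at the local endpoint instead of the official API. A minimal sketch, assuming the llamafactory API server listens on its default port 8000 (set API_PORT to change it) and does not validate the API key; check /v1/models if you are unsure of the served model name:

# Point the client at the locally deployed model instead of api.deepseek.com
llm = ChatOpenAI(
    model_name="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",  # assumed served name; confirm via /v1/models
    openai_api_key="EMPTY",  # placeholder; the local server typically ignores it
    openai_api_base="http://localhost:8000/v1",
)

# The async batch helper above can then be reused unchanged:
# responses = await batch_call_deepseek(queries, concurrency=10)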
Async Coroutines: Method 2
Below is another async coroutine approach, generated by ChatGPT.
(I have not tested this method against the locally deployed API; it is provided for reference only.)
import asyncio
from openai import AsyncOpenAI
from tqdm.asyncio import tqdm_asyncio

# Awaiting completions requires the async client, not the synchronous OpenAI client
async_client = AsyncOpenAI(api_key=api_key, base_url="https://api.deepseek.com")

answer = []

async def fetch(query):
    response = await async_client.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": query},
        ],
        stream=False,
    )
    return response.choices[0].message.content

async def main():
    tasks = [fetch(query) for query in queries]
    results = await tqdm_asyncio.gather(*tasks)
    answer.extend(results)

asyncio.run(main())
vllm_infer
If you are on a Linux system, the fastest option, faster than calling an API, is direct vllm inference.
You need the following script:
https://github.com/hiyouga/LLaMA-Factory/blob/main/scripts/vllm_infer.py
python vllm_infer.py \
--model_name_or_path deepseek-ai/DeepSeek-R1-Distill-Qwen-7B \
--template deepseek3 \
--dataset industry_cls \
--dataset_dir ../../data/llamafactory_dataset/ \
--save_name output/generated_predictions.jsonl
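The script writes one JSON object per line. A minimal sketch for loading the results back; the exact field names can vary between LLaMA-Factory versions, so inspect the first record:

import json

with open("output/generated_predictions.jsonl", "r", encoding="utf-8") as f:
    predictions = [json.loads(line) for line in f]

# Typically each record holds the prompt, the model prediction, and the label (if any)
print(predictions[0].keys())
print(predictions[0])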
llamafactory lets you point to a custom dataset directory; you need to build dataset files in the corresponding format.
Files under the dataset directory:
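As a rough illustration (the file name industry_cls.json and the column mapping are assumptions; follow LLaMA-Factory's dataset documentation for your own data), the directory passed to --dataset_dir usually contains a dataset_info.json that registers the dataset name used by --dataset, plus the data file itself in Alpaca format:

dataset_info.json:
{
  "industry_cls": {
    "file_name": "industry_cls.json",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output"
    }
  }
}

industry_cls.json (Alpaca-style records):
[
  {
    "instruction": "...",
    "input": "...",
    "output": "..."
  }
]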