【Deployment】Hand-rolling a Dify-compatible rerank model API service
The local knowledge base is confidential, so I don't want to send its contents to an internet-hosted model for reranking. However, deploying a rerank model with vLLM has high hardware requirements, and Ollama does not support rerank models at all, so the remaining option is to hand-write a rerank API program that follows the OpenAI interface convention.
1. Install the necessary Python base environment
$ pip install uv -i https://pypi.tuna.tsinghua.edu.cn/simple
$ uv pip install torch -i https://pypi.tuna.tsinghua.edu.cn/simple    # large download; this takes a while
$ uv pip install modelscope -i https://pypi.tuna.tsinghua.edu.cn/simple
$ uv pip install "modelscope[multi-modal]" -i https://pypi.tuna.tsinghua.edu.cn/simple
$ uv pip install fastapi uvicorn -i https://pypi.tuna.tsinghua.edu.cn/simple
2. Test that the base environment works
$ uv run python -c "from modelscope.pipelines import pipeline; print(pipeline('word-segmentation')('今天天气不错,适合出去游玩'))"
3. Test the scoring using the example from the model card
# test_modelscope.py
import torch
from modelscope import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()

pairs = [['what is panda?', 'hi'],
         ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]

with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
    print(scores)
$ uv run test_modelscope.py
tensor([-5.6085, 5.7623])
Running the script: the low score on the first pair means the query and the answer are weakly related; the high score on the second pair means they are strongly related.
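The raw scores are unbounded logits. If you'd rather have relevance values in [0, 1], which are easier to reason about when setting a score threshold later in Dify, a sigmoid can be applied; a minimal sketch, using the tensor printed above:

# sigmoid_scores.py - map raw reranker logits into [0, 1]
import torch

scores = torch.tensor([-5.6085, 5.7623])  # the logits printed above
print(torch.sigmoid(scores))              # roughly tensor([0.0037, 0.9969])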
4. (The focus of this article) Wrap the rerank model in an OpenAI-style API with FastAPI
# app.py
from fastapi import FastAPI
from pydantic import BaseModel
from typing import List, Optional

# Assumes you have already defined the RerankAPI class
from rerank_module import RerankAPI  # adjust the import path to your setup

app = FastAPI()
reranker = RerankAPI()

class RerankRequest(BaseModel):
    query: str
    documents: List[str]
    top_n: Optional[int] = None
    return_documents: bool = True

@app.post("/rerank")
def api_rerank(request: RerankRequest):
    result = reranker.rerank(
        request.query,
        request.documents,
        request.top_n,
        request.return_documents,
    )
    return result

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
The code above listens on port 8000 and serves the endpoint by calling rerank() from rerank_module.py.
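One compatibility note: OpenAI/Jina-style rerank clients, which as far as I can tell includes Dify's plugin, also send a model field in the request body. Pydantic ignores unknown fields by default, so the code above works as-is, but the field can also be declared explicitly; a minimal sketch of the extended request model:

# Optional: accept the "model" field that OpenAI/Jina-style rerank clients send
class RerankRequest(BaseModel):
    model: Optional[str] = None   # accepted but ignored; the service always loads bge-reranker-large
    query: str
    documents: List[str]
    top_n: Optional[int] = None
    return_documents: bool = True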
# rerank_module.py
import torch
from modelscope import AutoModelForSequenceClassification, AutoTokenizer

class RerankAPI:
    def __init__(self, model_name='BAAI/bge-reranker-large'):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSequenceClassification.from_pretrained(model_name)
        self.model.eval()

    def rerank(self, query: str, documents: list, top_n: int = None, return_documents: bool = True):
        """
        Rerank documents against a query and return the sorted results.

        :param query: the user's query string
        :param documents: list of documents (array of strings)
        :param top_n: return only the top n most relevant results (None means all)
        :param return_documents: whether to include the original document text in the output
        :return: dict with a "results" list of {index, document (optional), relevance_score}
        """
        pairs = [[query, doc] for doc in documents]
        with torch.no_grad():
            inputs = self.tokenizer(pairs, padding=True, truncation=True,
                                    max_length=512, return_tensors='pt')
            scores = self.model(**inputs).logits.view(-1).float().cpu().numpy()

        # Build the output entries
        results = []
        for idx, (doc, score) in enumerate(zip(documents, scores)):
            result = {"index": idx, "relevance_score": float(score)}
            if return_documents:
                result["document"] = doc
            results.append(result)

        # Sort by relevance score, descending
        results.sort(key=lambda x: x["relevance_score"], reverse=True)

        # Truncate to top_n
        if top_n is not None:
            results = results[:top_n]

        return {"results": results}
The code above returns the rerank results following the OpenAI-style interface convention.
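To make the format concrete: for the query "what is panda?" and the two documents from step 3, the service returns something like the following (scores taken from the test run above; the long document is truncated here):

{
  "results": [
    {"index": 1, "relevance_score": 5.7623, "document": "The giant panda (Ailuropoda melanoleuca), ..."},
    {"index": 0, "relevance_score": -5.6085, "document": "hi"}
  ]
}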
5. Start the service
$ uv run app.py
INFO: Started server process [344887]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Startup is normal; the service is now available on port 8000.
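At this point the endpoint can also be exercised from the command line; a minimal check, assuming the server runs on localhost:

$ curl -s http://localhost:8000/rerank \
    -H "Content-Type: application/json" \
    -d '{"query": "what is panda?", "documents": ["hi", "The giant panda is a bear species endemic to China."], "top_n": 2}'

This should return JSON in the shape shown at the end of step 4.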
[Figure 3] The FastAPI /docs page renders correctly
[Figure 4] Calling the /rerank endpoint returns results as expected
6. Configure the rerank model in the Dify UI, and from then on you can happily use it
[Figure 1] Add the openai-api-compatible plugin
[Figure 2] Configure the rerank model
[Figure 5] Set it as a system model
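For reference, the values behind these screenshots boil down to the following; field names may differ slightly between plugin versions, so treat this as a sketch rather than gospel:

Model Type: Rerank
Model Name: bge-reranker-large (free text; the service above ignores it and always loads BAAI/bge-reranker-large)
API endpoint URL: http://<server-ip>:8000 (some plugin versions expect the base URL and append /rerank themselves; if a call fails, try the full http://<server-ip>:8000/rerank)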
That's all for this article.