Calling a remote server's Ollama embedding model from Python
In practice, an Ollama embedding model may well be deployed on a remote server rather than on the local machine. This post explores how to access such a remotely deployed embedding model through the ollama package and the langchain package. Note that Ollama listens only on 127.0.0.1 by default, so the server typically needs OLLAMA_HOST=0.0.0.0 set before remote clients can reach it.
1 ollama
With the ollama package, the client specifies the server's Ollama deployment URL through the host parameter. An example program follows.
from ollama import Client

# Ollama deployment URL on the server: http://server_host:11434
client = Client(host='http://server_host:11434')

documents = [
    "Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels",
    "Llamas were first domesticated and used as pack animals 4,000 to 5,000 years ago in the Peruvian highlands",
    "Llamas can grow as much as 6 feet tall though the average llama between 5 feet 6 inches and 5 feet 9 inches tall",
    "Llamas weigh between 280 and 450 pounds and can carry 25 to 30 percent of their body weight",
    "Llamas are vegetarians and have very efficient digestive systems",
    "Llamas live to be about 20 years old, though some only live for 15 years and others live to be 30 years old",
]

# embed each document and print the embedding dimension
for d in documents:
    response = client.embed(model="bge-m3", input=d)
    embeddings = response["embeddings"]
    print(len(embeddings[0]))
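client.embed also accepts a list of strings, so the documents above can be embedded in a single round trip instead of one request per document. A minimal sketch, reusing the client and documents defined above:

# embed the whole list in one request; response["embeddings"] holds one vector per document
response = client.embed(model="bge-m3", input=documents)
print(len(response["embeddings"]))     # number of documents
print(len(response["embeddings"][0]))  # embedding dimension (1024 for bge-m3)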
2 langchain
With langchain, the remote server's Ollama deployment address is specified through the base_url parameter. An example program follows.
from langchain_ollama import OllamaEmbeddings

# Ollama deployment URL on the server: http://server_host:11434
# Ollama embedding model: bge-m3
embeddings = OllamaEmbeddings(base_url="http://server_host:11434", model="bge-m3")

# three similar Chinese questions about large language models
contents = ["你好,什么是大模型?", "大模型是什么", "告诉我什么是大模型"]
vectors = embeddings.embed_documents(contents)
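For retrieval, a single query can be embedded with embed_query and compared against the document vectors. A minimal sketch using numpy cosine similarity, reusing the embeddings object and vectors from above (the query string is an illustrative assumption):

import numpy as np

# embed one query with the same model
query_vector = embeddings.embed_query("什么是大模型?")

# cosine similarity between the query and each document vector
doc_matrix = np.array(vectors)   # shape: (num_documents, embedding_dim)
query = np.array(query_vector)   # shape: (embedding_dim,)
scores = doc_matrix @ query / (np.linalg.norm(doc_matrix, axis=1) * np.linalg.norm(query))
print(scores)  # higher score = more similar document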
References
Embedding models
https://ollama.com/blog/embedding-models
Ollama Python usage
https://www.runoob.com/ollama/ollama-python-sdk.html
OllamaEmbeddings
https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.ollama.OllamaEmbeddings.html#langchain_community.embeddings.ollama.OllamaEmbeddings