vLLM is a framework built to accelerate large language model inference. It achieves near-zero waste of KV cache memory, addressing the memory-management bottleneck.
More Chinese-language vLLM documentation and tutorials are available at https://vllm.hyper.ai/
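The client below talks to the demo server provided by `vllm.entrypoints.api_server`, so that server has to be listening on the target host and port before the client is run. The following is a minimal launch sketch, not part of the original example: the model name is an assumption, and it is equivalent to running `python -m vllm.entrypoints.api_server --model facebook/opt-125m --host localhost --port 8000` from a shell.

# Minimal sketch (not from the example): start the demo API server that the
# client below talks to. The model name is an assumption; substitute your own.
import subprocess
import sys

server = subprocess.Popen([
    sys.executable, "-m", "vllm.entrypoints.api_server",
    "--model", "facebook/opt-125m",
    "--host", "localhost",
    "--port", "8000",
])
# The server needs some time to load model weights before /generate responds.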
Source code: vllm-project/vllm
"""Example Python client for `vllm.entrypoints.api_server`
NOTE: The API server is used only for demonstration and simple performance
benchmarks. It is not intended for production use.
For production use, we recommend `vllm serve` and the OpenAI client API.
"""
"""用于 `vllm.entrypoints.api_server` 的 Python 客户端示例注意:API 服务器仅用于演示和简单的性能基准测试。它并不适合用于生产环境。
对于生产环境,我们推荐使用 `vllm serve` 和 OpenAI 客户端 API。
"""import argparse
import json
from typing import Iterable, Listimport requestsdef clear_line(n: int = 1) -> None:LINE_UP = '\033[1A'LINE_CLEAR = '\x1b[2K'for _ in range(n):print(LINE_UP, end=LINE_CLEAR, flush=True)def post_http_request(prompt: str,api_url: str,n: int = 1,stream: bool = False) -> requests.Response:headers = {"User-Agent": "Test Client"}pload = {"prompt": prompt,"n": n,"use_beam_search": True,"temperature": 0.0,"max_tokens": 16,"stream": stream,}response = requests.post(api_url,headers=headers,json=pload,stream=stream)return responsedef get_streaming_response(response: requests.Response) -> Iterable[List[str]]:for chunk in response.iter_lines(chunk_size=8192,decode_unicode=False,delimiter=b"\0"):if chunk:data = json.loads(chunk.decode("utf-8"))output = data["text"]yield outputdef get_response(response: requests.Response) -> List[str]:data = json.loads(response.content)output = data["text"]return outputif __name__ == "__main__":parser = argparse.ArgumentParser()parser.add_argument("--host", type=str, default="localhost")parser.add_argument("--port", type=int, default=8000)parser.add_argument("--n", type=int, default=4)parser.add_argument("--prompt", type=str, default="San Francisco is a")parser.add_argument("--stream", action="store_true")args = parser.parse_args()prompt = args.promptapi_url = f"http://{args.host}:{args.port}/generate"n = args.nstream = args.streamprint(f"Prompt: {prompt!r}\n", flush=True)response = post_http_request(prompt, api_url, n, stream)if stream:num_printed_lines = 0for h in get_streaming_response(response):clear_line(num_printed_lines)num_printed_lines = 0for i, line in enumerate(h):num_printed_lines += 1print(f"Beam candidate {i}: {line!r}", flush=True)else:output = get_response(response)for i, line in enumerate(output):print(f"Beam candidate {i}: {line!r}", flush=True)源代码:vllm-project/vllm
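The docstring above recommends `vllm serve` plus the OpenAI client API for production use. Below is a minimal sketch of that path; the base URL, API key, and model name are assumptions and should match whatever `vllm serve` was launched with.

# Minimal sketch of the recommended production path: query a server started
# with `vllm serve <model>` through its OpenAI-compatible endpoint.
# base_url, api_key, and the model name below are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM does not check the key by default
)

completion = client.completions.create(
    model="facebook/opt-125m",
    prompt="San Francisco is a",
    max_tokens=16,
)
print(completion.choices[0].text)

Unlike the raw /generate endpoint used by the demo client, the OpenAI-compatible server exposes the standard completions and chat schemas, so existing OpenAI client code can usually be pointed at it by changing only the base URL.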
"""Example Python client for `vllm.entrypoints.api_server`
NOTE: The API server is used only for demonstration and simple performance
benchmarks. It is not intended for production use.
For production use, we recommend `vllm serve` and the OpenAI client API.
"""
"""用于 `vllm.entrypoints.api_server` 的 Python 客户端示例注意:API 服务器仅用于演示和简单的性能基准测试。它并不适合用于生产环境。
对于生产环境,我们推荐使用 `vllm serve` 和 OpenAI 客户端 API。
"""import argparse
import json
from typing import Iterable, Listimport requestsdef clear_line(n: int = 1) -> None:LINE_UP = '\033[1A'LINE_CLEAR = '\x1b[2K'for _ in range(n):print(LINE_UP, end=LINE_CLEAR, flush=True)def post_http_request(prompt: str,api_url: str,n: int = 1,stream: bool = False) -> requests.Response:headers = {"User-Agent": "Test Client"}pload = {"prompt": prompt,"n": n,"use_beam_search": True,"temperature": 0.0,"max_tokens": 16,"stream": stream,}response = requests.post(api_url,headers=headers,json=pload,stream=stream)return responsedef get_streaming_response(response: requests.Response) -> Iterable[List[str]]:for chunk in response.iter_lines(chunk_size=8192,decode_unicode=False,delimiter=b"\0"):if chunk:data = json.loads(chunk.decode("utf-8"))output = data["text"]yield outputdef get_response(response: requests.Response) -> List[str]:data = json.loads(response.content)output = data["text"]return outputif __name__ == "__main__":parser = argparse.ArgumentParser()parser.add_argument("--host", type=str, default="localhost")parser.add_argument("--port", type=int, default=8000)parser.add_argument("--n", type=int, default=4)parser.add_argument("--prompt", type=str, default="San Francisco is a")parser.add_argument("--stream", action="store_true")args = parser.parse_args()prompt = args.promptapi_url = f"http://{args.host}:{args.port}/generate"n = args.nstream = args.streamprint(f"Prompt: {prompt!r}\n", flush=True)response = post_http_request(prompt, api_url, n, stream)if stream:num_printed_lines = 0for h in get_streaming_response(response):clear_line(num_printed_lines)num_printed_lines = 0for i, line in enumerate(h):num_printed_lines += 1print(f"Beam candidate {i}: {line!r}", flush=True)else:output = get_response(response)for i, line in enumerate(output):print(f"Beam candidate {i}: {line!r}", flush=True)