
Installing g4f 0.6.2.9 and investigating service instability

Installing g4f 0.6.2.9

Installing or upgrading g4f to the latest version is straightforward; just run:

pip install pip -U
pip install g4f[all] -U

At the time of writing, the latest version is 0.6.2.9.

After installation, run ps aux | grep g4f to find any processes from the old version, kill them, and then start the new ones:

g4f api  # start the GUI chat and API service, default port 1337
g4f gui  # start the GUI chat service, default port 8080

# debug tracing mode
g4f api --debug  # start the GUI chat and API service in debug mode
g4f gui --debug  # start the GUI chat service in debug mode
g4f api --port 8080 --debug  # start the GUI chat and API service on port 8080, in debug mode

# production
nohup g4f api &  # start the GUI chat and API service, keep it running
nohup g4f gui &  # start the GUI chat service, keep it running
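The kill-and-restart routine can also be scripted. This is only a sketch, assuming g4f is on the PATH; the log file name is made up, and the `[g]` bracket trick keeps grep from matching its own process in the ps output:

```shell
#!/bin/sh
# Find PIDs of any running "g4f api" processes; the [g] trick
# prevents grep from matching itself in the ps listing.
pids=$(ps aux | grep '[g]4f api' | awk '{print $2}')
if [ -n "$pids" ]; then
    kill $pids
fi
# Relaunch in the background, logging to g4f-api.log (hypothetical file name)
nohup g4f api > g4f-api.log 2>&1 &
```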

Debugging

Problem: after running for a while, the g4f api service becomes unreliable; sometimes it cannot be reached at all.

The cause is unclear. The usual fix is to upgrade g4f to the latest version, but after an upgrade it takes a long time to get things working again.

Service stability also varies between machines; it feels like pure luck.

Problem: after every upgrade there is an adjustment period during which the service is unavailable.

Concretely: after g4f api starts, it cannot serve requests for quite a while; after g4f gui starts, it is unavailable for a few minutes. On servers inside mainland China the service is even less stable.

The gui service usually becomes usable after some waiting, but after the same wait the api service still fails to connect. The gui is easy to check: just open port 8080 in a browser. Testing the api requires curl or a client such as CherryStudio, which makes testing very time-consuming while the service is unreachable.
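To cut down on the manual curl round-trips, a tiny probe script can check both ports at once. A minimal sketch (the ports match the defaults above; the host and timeout are assumptions to adjust as needed):

```python
import urllib.request
import urllib.error

def is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with any HTTP status,
    False if the connection itself fails (refused, unreachable, timeout)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded; even a 404 means the process is alive.
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    # Default g4f ports: 1337 (api), 8080 (gui)
    for name, url in [("api", "http://127.0.0.1:1337/v1/models"),
                      ("gui", "http://127.0.0.1:8080/")]:
        print(name, "up" if is_up(url) else "down")
```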

Examining the debug output

Debug output on Windows

python -m g4f.cli api --port 1337 --debug
E:\py312\Lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
Starting server... [g4f v-0.6.2.9] (debug)
INFO:     Started server process [34104]
INFO:     Waiting for application startup.
Read cookies: Loaded env vars from C:\Users\Admin\AppData\Roaming\g4f\cookies/.env
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO:     192.168.0.98:55510 - "GET /v1 HTTP/1.1" 200 OK
INFO:     192.168.0.98:55510 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     192.168.0.98:55510 - "GET /v1/models HTTP/1.1" 200 OK
INFO:     192.168.0.98:55510 - "GET / HTTP/1.1" 200 OK
INFO:     192.168.0.98:55510 - "GET /background.html HTTP/1.1" 200 OK
INFO:     192.168.0.98:55510 - "GET /search/video?skip=2 HTTP/1.1" 404 Not Found
INFO:     192.168.0.98:55511 - "GET /backend-api/v2/version?cache=true HTTP/1.1" 200 OK
INFO:     192.168.0.98:55511 - "GET /search/video?skip=3 HTTP/1.1" 404 Not Found
INFO:     192.168.0.98:55511 - "GET /search/video?skip=4 HTTP/1.1" 404 Not Found
INFO:     192.168.0.98:55511 - "GET /chat/ HTTP/1.1" 200 OK
INFO:     192.168.0.98:55571 - "GET /backend-api/v2/providers HTTP/1.1" 200 OK
INFO:     192.168.0.98:55570 - "GET /backend-api/v2/version HTTP/1.1" 200 OK
INFO:     192.168.0.98:55571 - "GET /backend-api/v2/models/AnyProvider HTTP/1.1" 200 OK
INFO:     192.168.0.98:55571 - "GET /backend-api/v2/chat/2503bad9-0b48-4251-8625-d0be0e35ff36 HTTP/1.1" 404 Not Found
Azure: get_models error: MissingAuthError: Add a "api_key"
BlackboxPro: Returning free model list with 94 models
Cohere: get_models error: JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Custom: get_models error: ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /v1/models (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000023FF1EAF2F0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。'))
DeepSeek: get_models error: MissingAuthError: Add a "api_key"
FenayAI: get_models error: MissingAuthError: Add a "api_key"
GithubCopilotAPI: get_models error: MissingAuthError: Add a "api_key"
Groq: get_models error: MissingAuthError: Add a "api_key"
INFO:     192.168.0.98:55596 - "POST /backend-api/v2/public-key HTTP/1.1" 200 OK
g4f is up-to-date (version 0.6.2.9).
Using AnyProvider provider and default model
INFO:     192.168.0.98:55596 - "POST /backend-api/v2/conversation HTTP/1.1" 200 OK
AnyProvider: Using providers: ['Blackbox', 'Copilot', 'DeepInfra', 'GLM', 'Kimi', 'PollinationsAI', 'Qwen', 'Mintlify', 'OpenaiChat', 'Cloudflare'] for model ''
Attempting provider: Kimi with model:
Kimi failed: Anonymous chat usage limit exceeded
Attempting provider: GLM with model:
GLM: Using model '0727-360B-API' for alias 'GLM-4.5'
INFO:     192.168.0.98:55596 - "POST /backend-api/v2/usage HTTP/1.1" 200 OK

Debug output on Linux

g4f api --debug
/mnt/data/work/py312/lib/python3.12/site-packages/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
Starting server... [g4f v-0.6.2.9] (debug)
INFO:     Started server process [2336006]
INFO:     Waiting for application startup.
Read cookies: Loaded env vars from ./har_and_cookies/.env
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO:     112.8.96.20:7164 - "GET / HTTP/1.1" 200 OK
INFO:     112.8.96.20:7164 - "GET /background.html HTTP/1.1" 200 OK
INFO:     112.8.96.20:7164 - "GET /backend-api/v2/version?cache=true HTTP/1.1" 200 OK
INFO:     112.8.96.20:7164 - "GET /search/video?skip=2 HTTP/1.1" 404 Not Found
INFO:     112.8.96.20:7164 - "GET /search/video?skip=3 HTTP/1.1" 404 Not Found
INFO:     112.8.96.20:7164 - "GET /search/video?skip=4 HTTP/1.1" 404 Not Found
INFO:     112.8.96.20:7164 - "GET / HTTP/1.1" 304 Not Modified
INFO:     112.8.96.20:7164 - "GET /backend-api/v2/version?cache=true HTTP/1.1" 200 OK
INFO:     112.8.96.20:7163 - "GET /background.html HTTP/1.1" 304 Not Modified
INFO:     112.8.96.20:7163 - "GET /search/video?skip=2 HTTP/1.1" 404 Not Found
INFO:     112.8.96.20:7163 - "GET /search/video?skip=3 HTTP/1.1" 404 Not Found
INFO:     112.8.96.20:7163 - "GET /search/video?skip=4 HTTP/1.1" 404 Not Found
INFO:     112.8.96.20:7163 - "GET /chat/ HTTP/1.1" 200 OK
INFO:     112.8.96.20:6817 - "GET /v1 HTTP/1.1" 200 OK
INFO:     112.8.96.20:6817 - "GET /v1/models HTTP/1.1" 200 OK
INFO:     112.8.96.20:6816 - "GET /backend-api/v2/providers HTTP/1.1" 200 OK
INFO:     112.8.96.20:6817 - "GET /backend-api/v2/version HTTP/1.1" 200 OK
INFO:     112.8.96.20:6817 - "GET /backend-api/v2/models/AnyProvider HTTP/1.1" 200 OK
INFO:     112.8.96.20:6817 - "GET /backend-api/v2/chat/33bcfb6f-9e9c-4d19-a44b-a794c5a976ed HTTP/1.1" 404 Not Found
Azure: get_models error: MissingAuthError: Add a "api_key"
BlackboxPro: Returning free model list with 94 models
Cohere: get_models error: JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Custom: get_models error: ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /v1/models (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f09ef879670>: Failed to establish a new connection: [Errno 111] Connection refused'))
DeepSeek: get_models error: MissingAuthError: Add a "api_key"
FenayAI: get_models error: MissingAuthError: Add a "api_key"
GithubCopilotAPI: get_models error: MissingAuthError: Add a "api_key"
Groq: get_models error: MissingAuthError: Add a "api_key"
AnyProvider: Using providers: ['Blackbox', 'OpenaiChat', 'Copilot', 'CopilotAccount', 'PuterJS', 'GithubCopilot', 'OpenRouter'] for model 'gpt-4o'
Attempting provider: CopilotAccount with model: Copilot
Copilot: No .har file found
Browser executable path: None
INFO:     112.8.96.20:6966 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Open nodriver with user_dir: /home/skywalk/.config/g4f-nodriver
CopilotAccount failed: could not find a valid chrome browser binary. please make sure chrome is installed.or use the keyword argument 'browser_executable_path=/path/to/your/browser'
Attempting provider: PuterJS with model: openrouter:openai/gpt-4o
PuterJS failed: Response 401: Authentication failed.
Attempting provider: GithubCopilot with model: gpt-4o
GithubCopilot failed: Response 400: bad request: Authorization header is badly formatted
Attempting provider: Blackbox with model: gpt-4o
Blackbox: Using model 'gpt-4o' for alias 'gpt-4o'
ERROR:g4f.api:
Traceback (most recent call last):
  File "/mnt/data/work/py312/lib/python3.12/asyncio/tasks.py", line 520, in wait_for
    return await fut
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/any_provider.py", line 403, in create_async_generator
    async for chunk in RotatedProvider(providers).create_async_generator(
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/retry_provider.py", line 156, in create_async_generator
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/base_provider.py", line 355, in async_create_function
    yield await asyncio.wait_for(
  File "/mnt/data/work/py312/lib/python3.12/asyncio/tasks.py", line 520, in wait_for
    return await fut
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/Provider/Blackbox.py", line 214, in create_async_generator
    conversation.validated_value = await cls.fetch_validated()
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/Provider/Blackbox.py", line 147, in fetch_validated
    async with session.get(url) as response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohttp/client.py", line 1488, in __aenter__
    self._resp: _RetType = await self._coro
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohttp/client.py", line 770, in _request
    resp = await handler(req)
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohttp/client.py", line 725, in _connect_and_send_request
    conn = await self._connector.connect(
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohttp/connector.py", line 642, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohttp/connector.py", line 1209, in _create_connection
    _, proto = await self._create_direct_connection(req, traces, timeout)
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohttp/connector.py", line 1550, in _create_direct_connection
    transp, proto = await self._wrap_create_connection(
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohttp/connector.py", line 1268, in _wrap_create_connection
    sock = await aiohappyeyeballs.start_connection(
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 87, in start_connection
    sock, _, _ = await _staggered.staggered_race(
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohappyeyeballs/_staggered.py", line 165, in staggered_race
    done = await _wait_one(
  File "/mnt/data/work/py312/lib/python3.12/site-packages/aiohappyeyeballs/_staggered.py", line 46, in _wait_one
    return await wait_next
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/api/__init__.py", line 493, in streaming
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/client/__init__.py", line 321, in async_iter_append_model_and_provider
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/client/__init__.py", line 187, in async_iter_response
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/tools/run_tools.py", line 258, in async_iter_run_tools
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/asyncio.py", line 73, in to_async_iterator
    async for item in iterator:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/base_provider.py", line 355, in async_create_function
    yield await asyncio.wait_for(
  File "/mnt/data/work/py312/lib/python3.12/asyncio/tasks.py", line 519, in wait_for
    async with timeouts.timeout(timeout):
  File "/mnt/data/work/py312/lib/python3.12/asyncio/timeouts.py", line 115, in __aexit__
    raise TimeoutError from exc_val
TimeoutError
HuggingFace: get_models error: ConnectionError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models?inference=warm&pipeline_tag=text-generation (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f09ef7aa4b0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
AnyProvider: Using providers: ['OpenaiChat', 'PuterJS'] for model 'gpt-4.5'
Attempting provider: PuterJS with model: gpt-4.5-preview
INFO:     112.8.96.20:6956 - "POST /v1/chat/completions HTTP/1.1" 200 OK
PuterJS failed: Response 401: Authentication failed.
Attempting provider: OpenaiChat with model: gpt-4-5
Browser executable path: None
Nodriver: Browser is already in use since 124.8988151550293 secs.

In other words, it just keeps timing out.

AnyProvider: Using providers: ['OIVSCodeSer0501', 'OIVSCodeSer2', 'Blackbox', 'Copilot', 'DeepInfra', 'OperaAria', 'GLM', 'Kimi', 'PollinationsAI', 'Qwen', 'Together', 'Chatai', 'WeWordle', 'Mintlify', 'TeachAnything', 'OpenaiChat', 'Cloudflare'] for model ''
Attempting provider: DeepInfra with model:
INFO:     112.8.96.20:6742 - "POST /v1/chat/completions HTTP/1.1" 200 OK
ERROR:g4f.api:
Traceback (most recent call last):
  File "/mnt/data/work/py312/lib/python3.12/asyncio/tasks.py", line 520, in wait_for
    return await fut
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/any_provider.py", line 403, in create_async_generator
    async for chunk in RotatedProvider(providers).create_async_generator(
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/retry_provider.py", line 156, in create_async_generator
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/base_provider.py", line 355, in async_create_function
    yield await asyncio.wait_for(
  File "/mnt/data/work/py312/lib/python3.12/asyncio/tasks.py", line 520, in wait_for
    return await fut
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/Provider/template/OpenaiTemplate.py", line 152, in create_async_generator
    async for chunk in read_response(response, stream, prompt, cls.get_dict(), download_media):
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/Provider/template/OpenaiTemplate.py", line 172, in read_response
    await raise_for_status(response)
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/requests/raise_for_status.py", line 27, in raise_for_status_async
    message = await response.json()
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/requests/curl_cffi.py", line 57, in json
    return json.loads(await self.inner.acontent(), **kwargs)
  File "/mnt/data/work/py312/lib/python3.12/site-packages/curl_cffi/requests/models.py", line 302, in acontent
    async for chunk in self.aiter_content():
  File "/mnt/data/work/py312/lib/python3.12/site-packages/curl_cffi/requests/models.py", line 279, in aiter_content
    chunk = await self.queue.get()
  File "/mnt/data/work/py312/lib/python3.12/asyncio/queues.py", line 158, in get
    await getter
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/api/__init__.py", line 493, in streaming
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/client/__init__.py", line 321, in async_iter_append_model_and_provider
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/client/__init__.py", line 187, in async_iter_response
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/tools/run_tools.py", line 258, in async_iter_run_tools
    async for chunk in response:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/asyncio.py", line 73, in to_async_iterator
    async for item in iterator:
  File "/mnt/data/work/py312/lib/python3.12/site-packages/g4f/providers/base_provider.py", line 355, in async_create_function
    yield await asyncio.wait_for(
  File "/mnt/data/work/py312/lib/python3.12/asyncio/tasks.py", line 519, in wait_for
    async with timeouts.timeout(timeout):
  File "/mnt/data/work/py312/lib/python3.12/asyncio/timeouts.py", line 115, in __aexit__
    raise TimeoutError from exc_val
TimeoutError

Yet the same setup on a server outside China had no such problem.

After stopping and restarting the service, however, the overseas server stopped working too. Granted, a freshly started service needs a few minutes before judging, but even after a long wait it still failed.

Next I tried starting the gui first and then the api.

At one point the overseas instance came up, then after a while it became unreachable again.

Testing with curl:

curl -X POST http://192.168.1.5:1337/v1/chat/completions -H "Content-Type: application/json" -d "{ \"model\": \"gpt-4o\", \"messages\": [{\"role\": \"user\", \"content\": \"hello\"}], \"stream\": true }"
curl -X POST http://192.168.1.5:8000/v1/chat/completions -H "Content-Type: application/json" -d "{ \"model\": \"gpt-4o\", \"messages\": [{\"role\": \"user\", \"content\": \"hello\"}], \"stream\": true }"

Workaround: having Trae write a g4f web server

Since the gui works while the api does not, the g4f models themselves must be usable. A quick look showed that the gui can use the default model, so I decided to write a small g4f web server (saved as g4fserver.py) that calls the default model directly (testing the default model through the api is left for another time).

Web server source code

from fastapi import FastAPI, Request, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from g4f.client import Client
import json

app = FastAPI(title="G4F local model relay service")

# Allow cross-origin requests
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Initialize the g4f client
client = Client()

@app.post("/v1/chat/completions")
async def chat_completions(request: Request):
    try:
        # 1. Parse the OpenAI-format request
        data = await request.json()
        print(f"==== Received request: {data}")

        # Extract the messages
        messages = data.get("messages", [])
        if not messages:
            raise HTTPException(status_code=400, detail="No messages provided")

        # 2. Call the local model through the g4f client
        stream = data.get("stream", False)

        # Build the g4f call parameters
        g4f_params = {
            "messages": messages,
            "temperature": data.get("temperature", 0.7),
            "max_tokens": data.get("max_tokens", 500),
        }

        # Use the requested model if one was given, otherwise the default
        model = data.get("model")
        if model and model != "default":
            g4f_params["model"] = model

        # 3. Invoke the g4f model
        if stream:
            # Streaming response
            async def generate_stream():
                try:
                    response = client.chat.completions.create(**g4f_params, stream=True)
                    for chunk in response:
                        if hasattr(chunk, 'choices') and chunk.choices:
                            content = chunk.choices[0].delta.content
                            # Only send non-None content
                            if content is not None:
                                yield f"data: {json.dumps({'choices': [{'delta': {'content': content}}]})}\n\n"
                    # Signal the end of the stream
                    yield "data: [DONE]\n\n"
                except Exception as e:
                    yield f"data: {json.dumps({'error': str(e)})}\n\n"

            return StreamingResponse(
                generate_stream(),
                media_type="text/event-stream",
                headers={
                    "Cache-Control": "no-cache",
                    "Connection": "keep-alive",
                },
            )
        else:
            # Non-streaming response
            response = client.chat.completions.create(**g4f_params)
            # Convert to an OpenAI-compatible payload
            return {
                "id": "chatcmpl-g4f-local",
                "object": "chat.completion",
                "created": 0,
                "model": model or "g4f-default",
                "choices": [{
                    "index": 0,
                    "message": {
                        "role": "assistant",
                        "content": response.choices[0].message.content,
                    },
                    "finish_reason": "stop",
                }],
                "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
            }
    except HTTPException:
        # Let explicit HTTP errors (e.g. the 400 above) pass through unchanged
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Internal server error: {str(e)}")

@app.get("/")
async def root():
    return {
        "message": "Xinghe community LLM relay service",
        "status": "running",
        "endpoint": "/v1/chat/completions",
    }

@app.get("/health")
async def health_check():
    return {"status": "healthy"}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
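The streaming branch above emits standard `data: ...` server-sent-event lines terminated by `[DONE]`. A minimal client-side parser for that format might look like this (a sketch, not part of the original server):

```python
import json

def parse_sse_chunks(lines):
    """Collect assistant text from 'data: {...}' SSE lines until [DONE]."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):].strip()
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        for choice in event.get("choices", []):
            content = choice.get("delta", {}).get("content")
            if content:
                parts.append(content)
    return "".join(parts)

if __name__ == "__main__":
    sample = [
        'data: {"choices": [{"delta": {"content": "Hel"}}]}',
        'data: {"choices": [{"delta": {"content": "lo"}}]}',
        "data: [DONE]",
    ]
    print(parse_sse_chunks(sample))  # prints "Hello"
```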

Starting the service

python g4fserver.py

The service listens on port 8000; a quick test showed it works.
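One quick way to test it is with curl (the payload shape is what g4fserver.py expects: a "messages" list plus an optional "model"; the host is assumed to be local):

```shell
# Non-streaming request against the relay on port 8000
curl -s -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "default", "messages": [{"role": "user", "content": "hello"}]}'
```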

Configure the model in auto-coder:

coding@auto-coder.chat:~$ /models /add_model name=8000default model_name=default base_url=http://127.0.0.1:8000/v1
Successfully added custom model: 8000default
coding@auto-coder.chat:~$ /models /add 8000default hello
Added/Updated model '8000default' successfully.
coding@auto-coder.chat:~$ /conf model:8000default
Configuration updated: model = '8000default'

After more than a month, auto-coder finally works again.

────────────────────────────────────────────── Starting Agentic Edit: arc ──────────────────────────────────────────────
╭───────────────────────────────────────────────────── Objective ──────────────────────────────────────────────────────╮
│ User Query:                                                                                                          │
│ hello                                                                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Conversation ID: 20c3fc0b-3b71-4e1f-bec3-885ede9cce56
Current session total tokens: 13367
Hello! I'm ready to help you with your software engineering tasks. I have access to various tools for file manipulation, code analysis, command execution, and more. What would you like me to work on today?
Current session total tokens: 13421
Current session total tokens: 13490
Understood! Please let me know what specific task you would like me to assist you with today.
Current session total tokens: 13522
Current session total tokens: 13591
Please specify the task you'd like me to assist with, and I'll take concrete action using the appropriate tools.
Current session total tokens: 13626
Current session total tokens: 13695

That is the temporary workaround for now.


