
LLM Invocation Methods and Function Calling: From Basics to Practice

Introduction

In today's era of rapid AI development, large language models (LLMs) have become a core driver of intelligent transformation across industries. Pure text generation alone, however, often cannot satisfy complex business requirements, and this is where function calling becomes essential. This article takes a close look at how mainstream LLMs are invoked, with a particular focus on the principles and practice of function calling, to help developers build smarter and more practical AI applications.

What Is Function Calling?

Function calling is the bridge between an LLM and the outside world: it lets the model, while generating text, recognize when an external tool or API should be invoked, and emit the call's parameters in a predefined format. With this technique, an LLM is no longer just a text generator; it becomes an agent capable of carrying out complex tasks.
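Concretely, here is the shape of such a structured call in OpenAI's Chat Completions format (the call id, tool name, and arguments below are illustrative):

assistant_message = {
    "role": "assistant",
    "content": None,                 # no prose when a tool is being called
    "tool_calls": [{
        "id": "call_abc123",         # hypothetical call id
        "type": "function",
        "function": {
            "name": "get_weather",                  # which tool to run
            "arguments": '{"location": "Beijing"}'  # a JSON string, not a dict
        }
    }]
}
# The model does not execute anything itself: it only names the function and
# supplies JSON-encoded arguments; your application runs the function and
# feeds the result back for the model to compose the final reply.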

OpenAI Function Calling: Principles and Practice

Defining Tool Parameter Structures with JSON Schema

In OpenAI's function-calling scheme, JSON Schema plays the central role: it specifies the structure of each function's parameters, so that the arguments the model emits can be checked against the format your code expects.

import json
import openai
from typing import List, Dict, Any

# Configure the OpenAI client
client = openai.OpenAI(api_key="your-api-key")

# Define the tools' parameter structures
def get_weather_tool():
    """Define the parameter schema for the weather-lookup tool."""
    return {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather information for a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. Beijing or Shanghai"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit: Celsius or Fahrenheit"
                    }
                },
                "required": ["location"],
                "additionalProperties": False
            }
        }
    }

def search_database_tool():
    """Define the parameter schema for the database-query tool."""
    return {
        "type": "function",
        "function": {
            "name": "search_database",
            "description": "Search the database for relevant information",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search query string"
                    },
                    "table": {
                        "type": "string",
                        "description": "Name of the table to query"
                    },
                    "limit": {
                        "type": "integer",
                        "description": "Maximum number of results to return",
                        "default": 10
                    }
                },
                "required": ["query", "table"],
                "additionalProperties": False
            }
        }
    }
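Because each parameters block is ordinary JSON Schema, the arguments the model emits can be validated before anything is executed. Below is a minimal sketch using the third-party jsonschema package (an assumption on our part; any JSON Schema validator would do):

import json
from jsonschema import ValidationError, validate  # pip install jsonschema

def parse_and_validate(tool_schema: dict, raw_arguments: str) -> dict:
    """Decode the model's JSON arguments and check them against the schema."""
    args = json.loads(raw_arguments)  # the model returns arguments as a JSON string
    validate(instance=args, schema=tool_schema["function"]["parameters"])
    return args

try:
    args = parse_and_validate(get_weather_tool(),
                              '{"location": "Beijing", "unit": "celsius"}')
except (json.JSONDecodeError, ValidationError) as e:
    print(f"Rejected malformed tool arguments: {e}")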

The Tool Call + Response Chain Flow

The core function-calling loop can be summarized in the following steps:

  1. The user sends input.
  2. The LLM processes the input and decides whether a tool call is needed.
  3. If a tool is needed, the model generates the tool-call parameters; otherwise it generates a reply directly.
  4. The application executes the tool call and collects the result.
  5. The result is handed back to the LLM, which generates the final reply.
  6. The reply is returned to the user.

Here is the complete implementation:

class FunctionCallingAgent:
    def __init__(self, api_key: str):
        """Initialize the function-calling agent.

        Args:
            api_key: OpenAI API key
        """
        self.client = openai.OpenAI(api_key=api_key)
        self.conversation_history = []

    def add_message(self, role: str, content: str):
        """Append a message to the conversation history.

        Args:
            role: Message role (user/assistant)
            content: Message content
        """
        self.conversation_history.append({"role": role, "content": content})

    def get_weather(self, location: str, unit: str = "celsius") -> str:
        """Mock weather-lookup function.

        Args:
            location: City name
            unit: Temperature unit

        Returns:
            str: Weather information
        """
        # In a real application this would call a weather API;
        # mock data is used here instead.
        weather_data = {
            "Beijing": {"temperature": 25, "condition": "sunny", "humidity": 40},
            "Shanghai": {"temperature": 28, "condition": "cloudy", "humidity": 65},
            "Guangzhou": {"temperature": 32, "condition": "showers", "humidity": 80}
        }
        if location in weather_data:
            data = weather_data[location]
            temp = data["temperature"]
            if unit == "fahrenheit":
                temp = temp * 9 / 5 + 32
            return (f"Weather in {location}: {temp}°{unit[0].upper()}, "
                    f"{data['condition']}, humidity {data['humidity']}%")
        else:
            return f"No weather information found for {location}"

    def search_database(self, query: str, table: str, limit: int = 10) -> str:
        """Mock database-query function.

        Args:
            query: Query string
            table: Table name
            limit: Maximum number of results

        Returns:
            str: Query results
        """
        # Mock query results
        sample_data = {
            "users": [
                {"id": 1, "name": "Zhang San", "email": "zhangsan@example.com"},
                {"id": 2, "name": "Li Si", "email": "lisi@example.com"},
                {"id": 3, "name": "Wang Wu", "email": "wangwu@example.com"}
            ],
            "products": [
                {"id": 1, "name": "Laptop", "price": 5999},
                {"id": 2, "name": "Smartphone", "price": 3999},
                {"id": 3, "name": "Tablet", "price": 2999}
            ]
        }
        if table in sample_data:
            results = sample_data[table][:limit]
            return f"Results for '{query}' in table {table}: {json.dumps(results, ensure_ascii=False)}"
        else:
            return f"Table {table} not found"

    def execute_tool_call(self, tool_name: str, arguments: Dict[str, Any]) -> str:
        """Execute a tool call.

        Args:
            tool_name: Tool name
            arguments: Tool arguments

        Returns:
            str: Tool execution result
        """
        try:
            if tool_name == "get_weather":
                return self.get_weather(**arguments)
            elif tool_name == "search_database":
                return self.search_database(**arguments)
            else:
                return f"Unknown tool: {tool_name}"
        except Exception as e:
            return f"Tool execution error: {str(e)}"

    def process_user_input(self, user_input: str) -> str:
        """Process user input, possibly involving tool calls.

        Args:
            user_input: User input text

        Returns:
            str: Final reply
        """
        # Add the user message to the history
        self.add_message("user", user_input)

        # Prepare the available tools
        tools = [get_weather_tool(), search_database_tool()]

        # First call: let the model decide whether a tool is needed
        first_response = self.client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=self.conversation_history,
            tools=tools,
            tool_choice="auto"
        )

        response_message = first_response.choices[0].message
        tool_calls = response_message.tool_calls

        # Record the assistant's first-round reply in the history
        self.conversation_history.append({
            "role": "assistant",
            "content": response_message.content,
            "tool_calls": [{
                "id": tool_call.id,
                "type": "function",
                "function": {
                    "name": tool_call.function.name,
                    "arguments": tool_call.function.arguments
                }
            } for tool_call in tool_calls] if tool_calls else None
        })

        # If there are tool calls, execute them and collect the results
        if tool_calls:
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_args = json.loads(tool_call.function.arguments)

                # Execute the tool call
                tool_result = self.execute_tool_call(function_name, function_args)

                # Add the tool result to the history
                self.conversation_history.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": tool_result
                })

            # Second call: generate the final reply from the tool results
            second_response = self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=self.conversation_history
            )
            final_message = second_response.choices[0].message.content
            self.add_message("assistant", final_message)
            return final_message
        else:
            # No tool call; return the reply directly
            final_message = response_message.content
            self.add_message("assistant", final_message)
            return final_message

# Usage example
def main():
    # Initialize the agent (replace with a real API key in actual use)
    agent = FunctionCallingAgent(api_key="your-api-key-here")

    # Test cases
    test_cases = [
        "What's the weather like in Beijing today?",
        "List all the users in the users table",
        "Look up the temperature in Shanghai, in Fahrenheit",
        "What products are in the products table?"
    ]

    for case in test_cases:
        print(f"User: {case}")
        response = agent.process_user_input(case)
        print(f"Assistant: {response}")
        print("-" * 50)

if __name__ == "__main__":
    main()

Advanced: Handling Multiple Tool Calls in One Request

In real applications, a single user request may require several tools. Below is an enhanced version that supports multiple tool calls in one turn; a sketch for running those calls concurrently follows the code.

class AdvancedFunctionCallingAgent(FunctionCallingAgent):
    def __init__(self, api_key: str):
        super().__init__(api_key)
        # Register additional tools
        self.available_tools = {
            "get_weather": {
                "function": self.get_weather,
                "schema": get_weather_tool()
            },
            "search_database": {
                "function": self.search_database,
                "schema": search_database_tool()
            },
            "calculate": {
                "function": self.calculate,
                "schema": self.get_calculate_tool_schema()
            }
        }

    def get_calculate_tool_schema(self):
        """Define the parameter schema for the calculator tool."""
        return {
            "type": "function",
            "function": {
                "name": "calculate",
                "description": "Perform a mathematical calculation",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "expression": {
                            "type": "string",
                            "description": "A math expression, e.g. 2 + 3 * 4"
                        }
                    },
                    "required": ["expression"],
                    "additionalProperties": False
                }
            }
        }

    def calculate(self, expression: str) -> str:
        """Perform a mathematical calculation.

        Args:
            expression: Math expression

        Returns:
            str: Calculation result
        """
        try:
            # Evaluate the expression. Note: eval is NOT safe for untrusted
            # input; see the AST-based alternative later in this article.
            result = eval(expression)
            return f"Result: {expression} = {result}"
        except Exception as e:
            return f"Calculation error: {str(e)}"

    def process_complex_request(self, user_input: str) -> str:
        """Process a complex request that may involve multiple tool calls.

        Args:
            user_input: User input

        Returns:
            str: Final reply
        """
        self.add_message("user", user_input)

        # Collect the schemas of all available tools
        tools = [tool["schema"] for tool in self.available_tools.values()]

        # First call
        first_response = self.client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=self.conversation_history,
            tools=tools,
            tool_choice="auto"
        )

        response_message = first_response.choices[0].message
        tool_calls = response_message.tool_calls

        # Record the assistant reply
        assistant_message = {
            "role": "assistant",
            "content": response_message.content
        }
        if tool_calls:
            assistant_message["tool_calls"] = []
            for tool_call in tool_calls:
                assistant_message["tool_calls"].append({
                    "id": tool_call.id,
                    "type": "function",
                    "function": {
                        "name": tool_call.function.name,
                        "arguments": tool_call.function.arguments
                    }
                })
        self.conversation_history.append(assistant_message)

        # Execute all tool calls
        if tool_calls:
            tool_results = []
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_args = json.loads(tool_call.function.arguments)
                if function_name in self.available_tools:
                    tool_function = self.available_tools[function_name]["function"]
                    try:
                        result = tool_function(**function_args)
                        tool_results.append({
                            "tool_call_id": tool_call.id,
                            "result": result
                        })
                    except Exception as e:
                        tool_results.append({
                            "tool_call_id": tool_call.id,
                            "result": f"Error: {str(e)}"
                        })
                else:
                    tool_results.append({
                        "tool_call_id": tool_call.id,
                        "result": f"Unknown tool: {function_name}"
                    })

            # Add the tool results to the history
            for result in tool_results:
                self.conversation_history.append({
                    "role": "tool",
                    "tool_call_id": result["tool_call_id"],
                    "content": result["result"]
                })

            # Second call to generate the final reply
            second_response = self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=self.conversation_history
            )
            final_message = second_response.choices[0].message.content
            self.add_message("assistant", final_message)
            return final_message
        else:
            final_message = response_message.content
            self.add_message("assistant", final_message)
            return final_message

# Test complex requests
def test_advanced_agent():
    agent = AdvancedFunctionCallingAgent(api_key="your-api-key-here")

    complex_requests = [
        "Check the weather in Beijing, then calculate (25 + 15) * 2",
        "Query the first 2 users in the users table, then tell me today's weather in Shanghai"
    ]

    for request in complex_requests:
        print(f"User: {request}")
        response = agent.process_complex_request(request)
        print(f"Assistant: {response}")
        print("=" * 80)

if __name__ == "__main__":
    test_advanced_agent()

Local and Private Deployment Options Compared

Beyond OpenAI's hosted API, enterprise applications often need to run LLMs locally or in a private environment. Here is a comparison of the mainstream options:

HuggingFace Transformers

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

class HuggingFaceLocalModel:
    def __init__(self, model_name: str):
        """Initialize a local HuggingFace model.

        Args:
            model_name: Model name or path
        """
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        print(f"Using device: {self.device}")

        # Load the tokenizer and model
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(
            model_name,
            torch_dtype=torch.float16 if self.device == "cuda" else torch.float32,
            device_map="auto"
        )

        # If the tokenizer has no pad_token, fall back to the eos_token
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token

    def generate_response(self, prompt: str, max_length: int = 512) -> str:
        """Generate a reply.

        Args:
            prompt: Input prompt
            max_length: Maximum generation length

        Returns:
            str: Generated reply
        """
        # Encode the input
        inputs = self.tokenizer.encode(prompt, return_tensors="pt").to(self.device)

        # Generate
        with torch.no_grad():
            outputs = self.model.generate(
                inputs,
                max_length=max_length,
                num_return_sequences=1,
                temperature=0.7,
                do_sample=True,
                pad_token_id=self.tokenizer.eos_token_id
            )

        # Decode and return only the newly generated part
        response = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
        return response[len(prompt):]

# Usage example
def test_huggingface_model():
    # Use a relatively small model for testing
    model = HuggingFaceLocalModel("microsoft/DialoGPT-medium")

    prompt = "Hello, how is the weather today?"
    response = model.generate_response(prompt)
    print(f"Prompt: {prompt}")
    print(f"Response: {response}")

if __name__ == "__main__":
    test_huggingface_model()
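DialoGPT predates today's instruct-tuned models, which expect their own chat template rather than a raw concatenated string. With a model whose tokenizer ships a chat template (an assumption about the model you load), the usual pattern looks like this:

def generate_chat_response(model, tokenizer, user_message: str, device: str) -> str:
    """Format a conversation with the model's own chat template, then generate."""
    messages = [{"role": "user", "content": user_message}]
    # apply_chat_template renders the messages into the prompt format the
    # model was fine-tuned on (special tokens, role markers, etc.)
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)
    outputs = model.generate(
        input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, not the prompt
    return tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)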

vLLM: High-Performance Inference Engine

vLLM is an inference engine built specifically for LLM serving, and it is particularly well suited to production deployments:

from vllm import LLM, SamplingParams

class VLLMEngine:
    def __init__(self, model_name: str):
        """Initialize the vLLM engine.

        Args:
            model_name: Model name or path
        """
        self.llm = LLM(
            model=model_name,
            tensor_parallel_size=1,  # number of GPUs
            gpu_memory_utilization=0.9,
            max_model_len=2048
        )

    def batch_generate(self, prompts: list, max_tokens: int = 256) -> list:
        """Generate replies in batch.

        Args:
            prompts: List of prompts
            max_tokens: Maximum number of tokens

        Returns:
            list: List of replies
        """
        sampling_params = SamplingParams(
            temperature=0.7,
            top_p=0.9,
            max_tokens=max_tokens,
            stop_token_ids=[],  # stop tokens can be configured here
        )

        outputs = self.llm.generate(prompts, sampling_params)

        responses = []
        for output in outputs:
            generated_text = output.outputs[0].text
            responses.append(generated_text)
        return responses

# Usage example
def test_vllm_engine():
    # Note: install vLLM first and make sure a suitable GPU is available
    # pip install vllm
    engine = VLLMEngine("facebook/opt-125m")  # small model for testing

    prompts = [
        "Explain the concept of artificial intelligence:",
        "How do I learn programming?",
        "The weather is lovely today,"
    ]

    responses = engine.batch_generate(prompts)
    for prompt, response in zip(prompts, responses):
        print(f"Prompt: {prompt}")
        print(f"Response: {response}")
        print("-" * 50)

if __name__ == "__main__":
    test_vllm_engine()
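vLLM can also run as a standalone server exposing an OpenAI-compatible API (started with `vllm serve <model>`, or `python -m vllm.entrypoints.openai.api_server --model <model>` on older versions; it listens on port 8000 by default). This lets the earlier client code target a local model with little more than a base-URL change. A sketch, assuming such a server is already running locally:

import openai

# Point the regular OpenAI client at a local vLLM server.
# The URL and placeholder key reflect vLLM's defaults for a local setup.
client = openai.OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed-locally",  # vLLM does not verify the key by default
)

# opt-125m is a plain completion model, so we use the completions endpoint
response = client.completions.create(
    model="facebook/opt-125m",  # must match the model the server was launched with
    prompt="Explain the concept of artificial intelligence:",
    max_tokens=128,
)
print(response.choices[0].text)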

Deploying Local Models with Ollama

Ollama is a simple, easy-to-use tool for running LLMs locally, with support for a wide range of open-source models:

import requests

class OllamaClient:
    def __init__(self, base_url: str = "http://localhost:11434"):
        """Initialize the Ollama client.

        Args:
            base_url: Ollama service address
        """
        self.base_url = base_url
        self.session = requests.Session()

    def list_models(self) -> list:
        """Return the list of available models."""
        try:
            response = self.session.get(f"{self.base_url}/api/tags")
            if response.status_code == 200:
                data = response.json()
                return data.get("models", [])
            else:
                print(f"Failed to fetch the model list: {response.status_code}")
                return []
        except Exception as e:
            print(f"Failed to connect to the Ollama service: {e}")
            return []

    def generate_response(self, model: str, prompt: str,
                          system_prompt: str = None, **kwargs) -> str:
        """Generate a reply.

        Args:
            model: Model name
            prompt: User prompt
            system_prompt: System prompt
            **kwargs: Additional parameters

        Returns:
            str: Generated reply
        """
        data = {
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {
                "temperature": kwargs.get("temperature", 0.7),
                "top_p": kwargs.get("top_p", 0.9),
                "top_k": kwargs.get("top_k", 40),
                "num_predict": kwargs.get("max_tokens", 512)
            }
        }
        if system_prompt:
            data["system"] = system_prompt

        try:
            response = self.session.post(
                f"{self.base_url}/api/generate",
                json=data,
                timeout=60
            )
            if response.status_code == 200:
                result = response.json()
                return result.get("response", "")
            else:
                return f"Request failed: {response.status_code}"
        except Exception as e:
            return f"Error while generating a reply: {str(e)}"

    def chat_completion(self, model: str, messages: list, **kwargs) -> str:
        """Chat-completion interface (OpenAI-like message format).

        Args:
            model: Model name
            messages: Message list
            **kwargs: Additional parameters

        Returns:
            str: Assistant reply
        """
        # Flatten the message list into a single prompt
        prompt = ""
        for msg in messages:
            role = msg["role"]
            content = msg["content"]
            if role == "system":
                prompt = f"System: {content}\n\n" + prompt
            elif role == "user":
                prompt += f"User: {content}\n"
            elif role == "assistant":
                prompt += f"Assistant: {content}\n"
        prompt += "Assistant: "

        return self.generate_response(model, prompt, **kwargs)

# Usage example
def test_ollama_client():
    client = OllamaClient()

    # Check the available models
    models = client.list_models()
    print("Available models:", [model["name"] for model in models])

    if models:
        # Test with the first available model
        model_name = models[0]["name"]

        # Simple generation
        response = client.generate_response(
            model_name,
            "Explain the concept of machine learning"
        )
        print(f"Response: {response}")

        # Chat format
        messages = [
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": "Hello, please introduce yourself"}
        ]
        chat_response = client.chat_completion(model_name, messages)
        print(f"Chat response: {chat_response}")

if __name__ == "__main__":
    test_ollama_client()
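Recent Ollama versions also expose a native /api/chat endpoint that accepts an OpenAI-style message list directly, which makes the manual prompt stitching in chat_completion above unnecessary. A sketch (the model name is whatever you have pulled locally):

import requests

def ollama_chat(model: str, messages: list,
                base_url: str = "http://localhost:11434") -> str:
    """Call Ollama's native chat endpoint with a structured message list."""
    response = requests.post(
        f"{base_url}/api/chat",
        json={"model": model, "messages": messages, "stream": False},
        timeout=60,
    )
    response.raise_for_status()
    # Non-streaming responses carry the reply under message.content
    return response.json()["message"]["content"]

print(ollama_chat("llama3", [{"role": "user", "content": "Introduce yourself briefly."}]))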

Deployment Option Comparison

To help you choose the right option, the following table compares them side by side:

| Feature | OpenAI API | HuggingFace Transformers | vLLM | Ollama | TGI |
|---|---|---|---|---|---|
| Deployment complexity | None (hosted) | Medium | Medium | Simple | Medium |
| Inference speed | Fast (cloud) | Medium | Very fast | — | Very fast |
| Memory efficiency | N/A (managed) | Low | — | Medium | — |
| Supported models | Limited | Extensive | Mainstream models | Curated models | Mainstream models |
| Cost | Pay per use | Free (own hardware) | Free (own hardware) | Free (own hardware) | Free (own hardware) |
| Production readiness | — | Needs optimization | — | Suited to development | — |
| Function calling | Native support | Custom implementation | Custom implementation | Limited support | Custom implementation |
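As the last row of the table indicates, most self-hosted stacks leave function calling up to you. The common workaround is to instruct the model to answer in JSON and to parse its output defensively. Below is a minimal, model-agnostic sketch; the prompt wording and the single hard-coded tool are illustrative assumptions:

import json
import re

# Illustrative system prompt asking the model to emit a JSON tool call
TOOL_PROMPT = (
    'You can call the tool get_weather(location: str). If the request needs it, '
    'reply ONLY with JSON like {"tool": "get_weather", "arguments": {"location": "<city>"}}. '
    'Otherwise answer in plain text.\n\nUser: '
)

def parse_tool_call(model_output: str):
    """Extract a JSON tool call from free-form model output, if one is present."""
    match = re.search(r"\{.*\}", model_output, re.DOTALL)  # outermost {...} span
    if not match:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    # Require both keys so stray JSON in prose is not mistaken for a call
    if isinstance(call, dict) and "tool" in call and "arguments" in call:
        return call
    return None

# Example: feed TOOL_PROMPT + user text to any local model, then:
print(parse_tool_call('{"tool": "get_weather", "arguments": {"location": "Beijing"}}'))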

Hands-On: Building a Complete Function-Calling System

Let's now build a complete function-calling system that brings together multiple tools, a tool registry, and error handling:

import asyncio
import json
import logging
from datetime import datetime
from typing import Dict, List, Any, Callable

import openai

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class ToolRegistry:
    """Tool registry that manages all available tools."""

    def __init__(self):
        self.tools: Dict[str, Dict] = {}

    def register_tool(self, name: str, description: str,
                      parameters: Dict, function: Callable) -> None:
        """Register a tool.

        Args:
            name: Tool name
            description: Tool description
            parameters: Parameter definitions
            function: Tool function
        """
        self.tools[name] = {
            "description": description,
            "parameters": parameters,
            "function": function
        }

    def get_tool_schema(self, name: str) -> Dict:
        """Return the schema definition for a tool."""
        if name not in self.tools:
            raise ValueError(f"Tool not registered: {name}")
        tool = self.tools[name]
        return {
            "type": "function",
            "function": {
                "name": name,
                "description": tool["description"],
                "parameters": {
                    "type": "object",
                    "properties": tool["parameters"],
                    "required": list(tool["parameters"].keys()),
                    "additionalProperties": False
                }
            }
        }

    def get_all_schemas(self) -> List[Dict]:
        """Return the schemas of all registered tools."""
        return [self.get_tool_schema(name) for name in self.tools.keys()]

    def execute_tool(self, name: str, arguments: Dict) -> Any:
        """Execute a tool."""
        if name not in self.tools:
            raise ValueError(f"Tool not registered: {name}")
        tool = self.tools[name]
        return tool["function"](**arguments)

class RobustFunctionCallingAgent:
    """A robust function-calling agent."""

    def __init__(self, api_key: str, tool_registry: ToolRegistry):
        self.client = openai.OpenAI(api_key=api_key)
        self.tool_registry = tool_registry
        self.conversation_history = []
        self.max_retries = 3

    async def process_with_retry(self, user_input: str) -> str:
        """Process a request with retries.

        Args:
            user_input: User input

        Returns:
            str: Final reply
        """
        for attempt in range(self.max_retries):
            try:
                return await self._process_single_attempt(user_input)
            except Exception as e:
                logger.error(f"Attempt {attempt + 1} failed: {str(e)}")
                if attempt == self.max_retries - 1:
                    return "Sorry, something went wrong while handling your request. Please try again later."
                await asyncio.sleep(1)  # wait before retrying

    async def _process_single_attempt(self, user_input: str) -> str:
        """A single processing attempt."""
        # Add the user message
        self.conversation_history.append({"role": "user", "content": user_input})

        # Get the tool schemas
        tools = self.tool_registry.get_all_schemas()

        # First LLM call
        first_response = await asyncio.get_event_loop().run_in_executor(
            None,
            lambda: self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=self.conversation_history,
                tools=tools,
                tool_choice="auto"
            )
        )

        response_message = first_response.choices[0].message
        tool_calls = response_message.tool_calls

        # Record the assistant reply
        assistant_msg = {
            "role": "assistant",
            "content": response_message.content or ""
        }
        if tool_calls:
            assistant_msg["tool_calls"] = []
            for tool_call in tool_calls:
                assistant_msg["tool_calls"].append({
                    "id": tool_call.id,
                    "type": "function",
                    "function": {
                        "name": tool_call.function.name,
                        "arguments": tool_call.function.arguments
                    }
                })
        self.conversation_history.append(assistant_msg)

        # Handle tool calls
        if tool_calls:
            tool_results = []
            for tool_call in tool_calls:
                try:
                    function_name = tool_call.function.name
                    function_args = json.loads(tool_call.function.arguments)

                    # Execute the tool
                    result = await asyncio.get_event_loop().run_in_executor(
                        None,
                        lambda: self.tool_registry.execute_tool(function_name, function_args)
                    )
                    tool_results.append({
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })
                except Exception as e:
                    logger.error(f"Tool execution failed: {str(e)}")
                    tool_results.append({
                        "tool_call_id": tool_call.id,
                        "content": f"Tool execution error: {str(e)}"
                    })

            # Add the tool results
            for result in tool_results:
                self.conversation_history.append({
                    "role": "tool",
                    "tool_call_id": result["tool_call_id"],
                    "content": result["content"]
                })

            # Second LLM call
            second_response = await asyncio.get_event_loop().run_in_executor(
                None,
                lambda: self.client.chat.completions.create(
                    model="gpt-3.5-turbo",
                    messages=self.conversation_history
                )
            )
            final_message = second_response.choices[0].message.content
            self.conversation_history.append({"role": "assistant", "content": final_message})
            return final_message
        else:
            final_message = response_message.content or "Sorry, I did not understand your request."
            self.conversation_history.append({"role": "assistant", "content": final_message})
            return final_message

# Define and register the tools
def setup_tool_registry() -> ToolRegistry:
    """Set up the tool registry."""
    registry = ToolRegistry()

    # Register the weather-lookup tool
    registry.register_tool(
        name="get_weather",
        description="Get weather information for a given city",
        parameters={
            "location": {
                "type": "string",
                "description": "City name"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature unit",
                "default": "celsius"
            }
        },
        function=lambda location, unit="celsius": f"Weather in {location}: 25°{unit[0].upper()}, sunny"  # mock implementation
    )

    # Register the calculator tool
    registry.register_tool(
        name="calculate",
        description="Perform a mathematical calculation",
        parameters={
            "expression": {
                "type": "string",
                "description": "Math expression"
            }
        },
        function=lambda expression: f"Result: {eval(expression)}"  # note: production code needs a safer implementation (see the AST sketch below)
    )

    # Register the current-time tool
    registry.register_tool(
        name="get_current_time",
        description="Get the current time",
        parameters={},
        function=lambda: f"Current time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}"
    )

    return registry

# Async test function
async def test_robust_agent():
    """Test the robust agent."""
    registry = setup_tool_registry()
    agent = RobustFunctionCallingAgent(api_key="your-api-key-here", tool_registry=registry)

    test_cases = [
        "What's the weather like in Beijing today?",
        "Calculate (15 + 25) * 3",
        "What time is it now?",
        "Check the weather first, then tell me the current time"  # complex request
    ]

    for case in test_cases:
        print(f"User: {case}")
        response = await agent.process_with_retry(case)
        print(f"Assistant: {response}")
        print("=" * 60)

if __name__ == "__main__":
    # Run the async test
    asyncio.run(test_robust_agent())
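A final caution: the calculate tool above delegates to eval, which will execute arbitrary code the model places in expression. Here is a safer sketch that walks the AST and only permits plain arithmetic (one common hardening pattern, not the only one):

import ast
import operator

# Whitelisted operators; anything else (names, calls, attributes) is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate a pure arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Disallowed expression element: {ast.dump(node)}")

    return walk(ast.parse(expression, mode="eval"))

assert safe_eval("(15 + 25) * 3") == 120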

Summary

This article has walked through the principles and practice of LLM function calling, from OpenAI's native Function Calling to a range of local deployment options. With the complete, commented code examples, you should be able to:

  1. Understand the core concepts of function calling: JSON Schema definitions, the tool-call/response flow, and the other key mechanics
  2. Choose among deployment options: the strengths and weaknesses of the OpenAI API, HuggingFace Transformers, vLLM, Ollama, and similar stacks
  3. Build robust systems: error handling, retry logic, tool registration, and other engineering practices
  4. Meet real business needs: use the complete examples as a starting point for rapid development

Function calling turns an LLM from a plain text generator into an agent that can carry out complex tasks, making it a key building block for practical AI applications. As the technology continues to mature, it will find its way into ever more scenarios.

In the next article, we will take a deep dive into the LangChain framework and learn how to use its components to build more sophisticated AI applications.

