
LangGraph -- Structured Output (the .with_structured_output() method)

I have not studied LangChain itself in any depth, but while learning LangGraph I kept running into the "augmented" structured-output pattern (the first pattern summarized earlier, an LLM augmented with structured output) without ever really understanding it. Having dug into it recently, I finally feel I understand what is going on, so here is a summary. The scenario looks like this:
 

from typing import List

from pydantic import BaseModel, Field


class Section(BaseModel):
    name: str = Field(
        description="Name for this section of the report.",
    )
    description: str = Field(
        description="Brief overview of the main topics and concepts to be covered in this section.",
    )


class Sections(BaseModel):
    sections: List[Section] = Field(
        description="Sections of the report.",
    )


# Augment the LLM with schema for structured output
planner = llm.with_structured_output(Sections)
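As a quick sanity check (a minimal sketch, assuming llm is any chat model that supports tool calling; the topic string is just an illustration), invoking the planner returns a Sections instance rather than a plain message:

# Hypothetical usage of the planner defined above.
report_plan = planner.invoke("Create a plan for a report on LangGraph structured output")

print(type(report_plan))  # the Sections class, not a string or AIMessage
for section in report_plan.sections:
    print(section.name, "-", section.description)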

The code above is a standard example of augmenting an LLM with structured output. So how does llm.with_structured_output(Sections) actually work? Why are there several kinds of schema, and how do they differ? Let's work through it.

I mainly followed the LangChain documentation: How to return structured data from a model | 🦜️🔗 LangChain

It is often useful to have a model return output that matches a specific schema; this is exactly what with_structured_output is for. A common use case is extracting data from text to insert into a database or use with other downstream systems. This guide covers a few strategies for getting structured output from a model.

This is the easiest and most reliable way to get structured output. with_structured_output() is implemented for models that provide native APIs for structuring outputs, such as tool/function calling or JSON mode, and it makes use of these capabilities under the hood.

The method takes a schema as input which specifies the names, types, and descriptions of the desired output attributes. It returns a model-like Runnable, except that instead of outputting strings or messages it outputs objects corresponding to the given schema. The schema can be specified as a TypedDict class, a JSON Schema, or a Pydantic class. If a TypedDict or JSON Schema is used, the Runnable returns a dict; if a Pydantic class is used, it returns a Pydantic object.

As an example, let's have a model generate a joke and separate the setup from the punchline:

Pydantic class

If we want the model to return a Pydantic object, we just pass in the desired Pydantic class. The key advantage of using Pydantic is that the model-generated output will be validated: Pydantic raises an error if any required fields are missing or any field has the wrong type.

from typing import Optional

from pydantic import BaseModel, Field


# Pydantic
class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(
        default=None, description="How funny the joke is, from 1 to 10"
    )


structured_llm = llm.with_structured_output(Joke)

structured_llm.invoke("Tell me a joke about cats")
Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=7)

Beyond the structure of the Pydantic class itself, the name of the class, its docstring, and the names and descriptions of its fields all matter. Most of the time with_structured_output uses a model's function/tool-calling API, so you can effectively think of all of this information as being appended to the model prompt.
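To see what metadata actually gets passed along, you can dump the JSON schema that Pydantic derives from the class (a minimal sketch; model_json_schema() is the Pydantic v2 spelling, use .schema() on v1):

import json

# The class name, docstring, and field descriptions all show up in this schema,
# which is essentially what the tool-calling API ends up seeing.
print(json.dumps(Joke.model_json_schema(), indent=2, ensure_ascii=False))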

TypedDict or JSON Schema

If you don't want to use Pydantic, explicitly don't want validation of the arguments, or want to be able to stream the model outputs, you can define your schema with a TypedDict class. We can optionally use a special Annotated syntax supported by LangChain that lets you specify a field's default value and description. Note that the default value is not filled in automatically if the model doesn't generate it; it is only used when defining the schema that is passed to the model.

from typing import Optional

from typing_extensions import Annotated, TypedDict


# TypedDict
class Joke(TypedDict):
    """Joke to tell user."""

    setup: Annotated[str, ..., "The setup of the joke"]

    # Alternatively, we could have specified setup as:
    # setup: str                    # no default, no description
    # setup: Annotated[str, ...]    # no default, no description
    # setup: Annotated[str, "foo"]  # default, no description

    punchline: Annotated[str, ..., "The punchline of the joke"]
    rating: Annotated[Optional[int], None, "How funny the joke is, from 1 to 10"]


structured_llm = llm.with_structured_output(Joke)

structured_llm.invoke("Tell me a joke about cats")
{'setup': 'Why was the cat sitting on the computer?',
 'punchline': 'Because it wanted to keep an eye on the mouse!',
 'rating': 7}

Equivalently, we can pass in a JSON Schema dict. This requires no imports or classes and makes it very clear exactly how each parameter is documented, at the cost of being a bit more verbose.

json_schema = {
    "title": "joke",
    "description": "Joke to tell user.",
    "type": "object",
    "properties": {
        "setup": {
            "type": "string",
            "description": "The setup of the joke",
        },
        "punchline": {
            "type": "string",
            "description": "The punchline to the joke",
        },
        "rating": {
            "type": "integer",
            "description": "How funny the joke is, from 1 to 10",
            "default": None,
        },
    },
    "required": ["setup", "punchline"],
}

structured_llm = llm.with_structured_output(json_schema)

structured_llm.invoke("Tell me a joke about cats")
{'setup': 'Why was the cat sitting on the computer?',
 'punchline': 'Because it wanted to keep an eye on the mouse!',
 'rating': 7}

Choosing between multiple schemas

The simplest way to let the model choose from multiple schemas is to create a parent schema with a Union-typed attribute.

Using Pydantic

from typing import Optional, Union

from pydantic import BaseModel, Field


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(
        default=None, description="How funny the joke is, from 1 to 10"
    )


class ConversationalResponse(BaseModel):
    """Respond in a conversational manner. Be kind and helpful."""

    response: str = Field(description="A conversational response to the user's query")


class FinalResponse(BaseModel):
    final_output: Union[Joke, ConversationalResponse]


structured_llm = llm.with_structured_output(FinalResponse)

structured_llm.invoke("Tell me a joke about cats")
FinalResponse(final_output=Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=7))

structured_llm.invoke("How are you today?")

FinalResponse(final_output=ConversationalResponse(response="I'm just a computer program, so I don't have feelings, but I'm here and ready to help you with whatever you need!"))
Using TypedDict
from typing import Optional, Union

from typing_extensions import Annotated, TypedDict


class Joke(TypedDict):
    """Joke to tell user."""

    setup: Annotated[str, ..., "The setup of the joke"]
    punchline: Annotated[str, ..., "The punchline of the joke"]
    rating: Annotated[Optional[int], None, "How funny the joke is, from 1 to 10"]


class ConversationalResponse(TypedDict):
    """Respond in a conversational manner. Be kind and helpful."""

    response: Annotated[str, ..., "A conversational response to the user's query"]


class FinalResponse(TypedDict):
    final_output: Union[Joke, ConversationalResponse]


structured_llm = llm.with_structured_output(FinalResponse)

structured_llm.invoke("Tell me a joke about cats")
{'final_output': {'setup': 'Why was the cat sitting on the computer?',
  'punchline': 'Because it wanted to keep an eye on the mouse!',
  'rating': 7}}

structured_llm.invoke("How are you today?")

{'final_output': {'response': "I'm just a computer program, so I don't have feelings, but I'm here and ready to help you with whatever you need!"}}

The responses are the same as in the Pydantic example. Alternatively, you can use tool calling directly to let the model choose between options, if your chosen model supports it. This involves a bit more parsing and setup, but in some cases it leads to better performance because you don't have to use nested schemas. See the corresponding LangChain how-to guide for details.
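As a rough illustration of that alternative (a minimal sketch, not taken from the guide: it assumes llm supports tool calling and reuses the Pydantic Joke and ConversationalResponse classes from above), you bind both schemas as tools and read the model's tool call yourself:

# Bind both schemas as tools; the model picks one by emitting a tool call.
llm_with_tools = llm.bind_tools([Joke, ConversationalResponse])

ai_msg = llm_with_tools.invoke("Tell me a joke about cats")

# Each tool call carries the chosen schema's name plus its parsed arguments.
for call in ai_msg.tool_calls:
    print(call["name"], call["args"])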

Streaming

We can stream outputs from our structured model when the output type is a dict (i.e., when the schema is specified as a TypedDict class or a JSON Schema dict).

from typing import Optional

from typing_extensions import Annotated, TypedDict


# TypedDict
class Joke(TypedDict):
    """Joke to tell user."""

    setup: Annotated[str, ..., "The setup of the joke"]
    punchline: Annotated[str, ..., "The punchline of the joke"]
    rating: Annotated[Optional[int], None, "How funny the joke is, from 1 to 10"]


structured_llm = llm.with_structured_output(Joke)

for chunk in structured_llm.stream("Tell me a joke about cats"):
    print(chunk)
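What is streamed here are aggregated partial dicts rather than deltas, so the last chunk is the complete result. A minimal sketch of keeping it (assuming the same structured_llm as above):

final_joke = None
for chunk in structured_llm.stream("Tell me a joke about cats"):
    final_joke = chunk  # each chunk supersedes the previous partial dict

print(final_joke)  # the fully assembled dict with setup/punchline/rating keys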

Few-shot prompting 

For more complex schemas it's very useful to add few-shot examples to the prompt. This can be done in a few ways.

The simplest and most universal way is to add examples to a system message in the prompt:

from langchain_core.prompts import ChatPromptTemplate

system = """You are a hilarious comedian. Your specialty is knock-knock jokes. \
Return a joke which has the setup (the response to "Who's there?") and the final punchline (the response to "<setup> who?").

Here are some examples of jokes:

example_user: Tell me a joke about planes
example_assistant: {{"setup": "Why don't planes ever get tired?", "punchline": "Because they have rest wings!", "rating": 2}}

example_user: Tell me another joke about planes
example_assistant: {{"setup": "Cargo", "punchline": "Cargo 'vroom vroom', but planes go 'zoom zoom'!", "rating": 10}}

example_user: Now about caterpillars
example_assistant: {{"setup": "Caterpillar", "punchline": "Caterpillar really slow, but watch me turn into a butterfly and steal the show!", "rating": 5}}"""

prompt = ChatPromptTemplate.from_messages([("system", system), ("human", "{input}")])

few_shot_structured_llm = prompt | structured_llm
few_shot_structured_llm.invoke("what's something funny about woodpeckers")
{'setup': 'Woodpecker',
 'punchline': "Woodpecker you a joke, but I'm afraid it might be too 'hole-some'!",
 'rating': 7}

When the underlying method for structuring outputs is tool calling, we can pass in our examples as explicit tool calls. You can check whether the model you're using relies on tool calling in its API reference.

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

examples = [
    HumanMessage("Tell me a joke about planes", name="example_user"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[
            {
                "name": "joke",
                "args": {
                    "setup": "Why don't planes ever get tired?",
                    "punchline": "Because they have rest wings!",
                    "rating": 2,
                },
                "id": "1",
            }
        ],
    ),
    # Most tool-calling models expect a ToolMessage(s) to follow an AIMessage with tool calls.
    ToolMessage("", tool_call_id="1"),
    # Some models also expect an AIMessage to follow any ToolMessages,
    # so you may need to add an AIMessage here.
    HumanMessage("Tell me another joke about planes", name="example_user"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[
            {
                "name": "joke",
                "args": {
                    "setup": "Cargo",
                    "punchline": "Cargo 'vroom vroom', but planes go 'zoom zoom'!",
                    "rating": 10,
                },
                "id": "2",
            }
        ],
    ),
    ToolMessage("", tool_call_id="2"),
    HumanMessage("Now about caterpillars", name="example_user"),
    AIMessage(
        "",
        tool_calls=[
            {
                "name": "joke",
                "args": {
                    "setup": "Caterpillar",
                    "punchline": "Caterpillar really slow, but watch me turn into a butterfly and steal the show!",
                    "rating": 5,
                },
                "id": "3",
            }
        ],
    ),
    ToolMessage("", tool_call_id="3"),
]
system = """You are a hilarious comedian. Your specialty is knock-knock jokes. \
Return a joke which has the setup (the response to "Who's there?") \
and the final punchline (the response to "<setup> who?")."""prompt = ChatPromptTemplate.from_messages([("system", system), ("placeholder", "{examples}"), ("human", "{input}")]
)
few_shot_structured_llm = prompt | structured_llm
few_shot_structured_llm.invoke({"input": "crocodiles", "examples": examples})
{'setup': 'Crocodile',
 'punchline': 'Crocodile be seeing you later, alligator!',
 'rating': 6}

(Advanced) Specifying the method for structuring outputs

For models that support more than one way of structuring outputs (i.e., they support both tool calling and JSON mode), you can specify which method to use via the method= argument.

structured_llm = llm.with_structured_output(None, method="json_mode")

structured_llm.invoke(
    "Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys"
)
{'setup': 'Why was the cat sitting on the computer?',
 'punchline': 'Because it wanted to keep an eye on the mouse!'}

(Advanced) Raw outputs

LLMs aren't perfect at generating structured output, especially as schemas become complex. You can avoid raising exceptions and handle the raw output yourself by passing include_raw=True. This changes the output format to contain the raw message output, the parsed value (if successful), and any resulting errors:

structured_llm = llm.with_structured_output(Joke, include_raw=True)

structured_llm.invoke("Tell me a joke about cats")
{'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_f25ZRmh8u5vHlOWfTUw8sJFZ', 'function': {'arguments': '{"setup":"Why was the cat sitting on the computer?","punchline":"Because it wanted to keep an eye on the mouse!","rating":7}', 'name': 'Joke'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 33, 'prompt_tokens': 93, 'total_tokens': 126}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'stop', 'logprobs': None}, id='run-d880d7e2-df08-4e9e-ad92-dfc29f2fd52f-0', tool_calls=[{'name': 'Joke', 'args': {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 7}, 'id': 'call_f25ZRmh8u5vHlOWfTUw8sJFZ', 'type': 'tool_call'}], usage_metadata={'input_tokens': 93, 'output_tokens': 33, 'total_tokens': 126}),
 'parsed': {'setup': 'Why was the cat sitting on the computer?',
  'punchline': 'Because it wanted to keep an eye on the mouse!',
  'rating': 7},
 'parsing_error': None}

Prompting and parsing model outputs directly

Not all models support .with_structured_output(), since not all models support tool calling or JSON mode. For such models you'll need to prompt the model directly to use a specific format, and use an output parser to extract the structured response from the raw model output.

Using PydanticOutputParser

from typing import List

from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(..., description="The name of the person")
    height_in_meters: float = Field(
        ..., description="The height of the person expressed in meters."
    )


class People(BaseModel):
    """Identifying information about all people in a text."""

    people: List[Person]


# Set up a parser
parser = PydanticOutputParser(pydantic_object=People)

# Prompt
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Answer the user query. Wrap the output in `json` tags\n{format_instructions}",
        ),
        ("human", "{query}"),
    ]
).partial(format_instructions=parser.get_format_instructions())

query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.invoke({"query": query}).to_string())
System: Answer the user query. Wrap the output in `json` tags
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}
the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.

Here is the output schema:
```
{"description": "Identifying information about all people in a text.", "properties": {"people": {"title": "People", "type": "array", "items": {"$ref": "#/definitions/Person"}}}, "required": ["people"], "definitions": {"Person": {"title": "Person", "description": "Information about a person.", "type": "object", "properties": {"name": {"title": "Name", "description": "The name of the person", "type": "string"}, "height_in_meters": {"title": "Height In Meters", "description": "The height of the person expressed in meters.", "type": "number"}}, "required": ["name", "height_in_meters"]}}}
```
Human: Anna is 23 years old and she is 6 feet tall
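The guide then chains the prompt, model, and parser together; a minimal sketch of that step (assuming the llm, prompt, and parser defined above):

# Pipe the formatted prompt into the model, then parse the raw text into a People object.
chain = prompt | llm | parser

result = chain.invoke({"query": query})
print(result)  # roughly People(people=[Person(name='Anna', height_in_meters=...)])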

Custom parsing

You can also create a custom prompt and parser with the LangChain Expression Language (LCEL), using a plain function to parse the output from the model:

import json
import re
from typing import List

from langchain_core.messages import AIMessage
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(..., description="The name of the person")
    height_in_meters: float = Field(
        ..., description="The height of the person expressed in meters."
    )


class People(BaseModel):
    """Identifying information about all people in a text."""

    people: List[Person]


# Prompt
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Answer the user query. Output your answer as JSON that  "
            "matches the given schema: ```json\n{schema}\n```. "
            "Make sure to wrap the answer in ```json and ``` tags",
        ),
        ("human", "{query}"),
    ]
).partial(schema=People.schema())


# Custom parser
def extract_json(message: AIMessage) -> List[dict]:
    """Extracts JSON content from a string where JSON is embedded between ```json and ``` tags.

    Parameters:
        text (str): The text containing the JSON content.

    Returns:
        list: A list of extracted JSON strings.
    """
    text = message.content
    # Define the regular expression pattern to match JSON blocks
    pattern = r"```json(.*?)```"
    # Find all non-overlapping matches of the pattern in the string
    matches = re.findall(pattern, text, re.DOTALL)
    # Return the list of matched JSON strings, stripping any leading or trailing whitespace
    try:
        return [json.loads(match.strip()) for match in matches]
    except Exception:
        raise ValueError(f"Failed to parse: {message}")


query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.format_prompt(query=query).to_string())
System: Answer the user query. Output your answer as JSON that  matches the given schema: ```json
{'title': 'People', 'description': 'Identifying information about all people in a text.', 'type': 'object', 'properties': {'people': {'title': 'People', 'type': 'array', 'items': {'$ref': '#/definitions/Person'}}}, 'required': ['people'], 'definitions': {'Person': {'title': 'Person', 'description': 'Information about a person.', 'type': 'object', 'properties': {'name': {'title': 'Name', 'description': 'The name of the person', 'type': 'string'}, 'height_in_meters': {'title': 'Height In Meters', 'description': 'The height of the person expressed in meters.', 'type': 'number'}}, 'required': ['name', 'height_in_meters']}}}
```. Make sure to wrap the answer in ```json and ``` tags
Human: Anna is 23 years old and she is 6 feet tall

When we invoke it, the result looks like this:

chain = prompt | llm | extract_json

chain.invoke({"query": query})
[{'people': [{'name': 'Anna', 'height_in_meters': 1.8288}]}]

Summary:

It's worth getting familiar with the handful of classes used to define these schemas. I asked DeepSeek about them and got the following overview:

In Pydantic, BaseModel and Field are the two core components for data validation and settings management. Their main roles are:


1. BaseModel

BaseModel is Pydantic's base class for defining data models (similar to a dataclass or struct). Its main features include:

  • Data validation: automatically validates that input data matches the type annotations (such as str, int, List).

  • Type coercion: attempts to convert input data to the field types defined on the model (e.g., the numeric string "123" becomes the integer 123).

  • Serialization/deserialization: provides the .dict() and .json() methods for easy conversion to and from dicts and JSON.

  • IDE friendly: because it is built on type annotations, editors can provide autocompletion and type checking.

Example:

from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


# Automatic validation and coercion
user = User(name="Alice", age="25")  # the string "25" is coerced to the integer 25
print(user.age)  # Output: 25

# Invalid data raises an error
User(name="Bob", age="not_a_number")  # raises ValidationError

2. Field

Field attaches extra constraints or metadata to model fields, for example:

  • Validation rules: such as a minimum value (ge=1) or a regular expression (regex=r"^[A-Z]").

  • Default values: including dynamic defaults (e.g., default_factory=lambda: datetime.now()).

  • Documentation: add a field description via description (it shows up in the generated JSON Schema).

Example:

from pydantic import BaseModel, Field


class Product(BaseModel):
    name: str = Field(..., min_length=1, max_length=50, example="Laptop")
    price: float = Field(gt=0, description="Price must be greater than 0")
    tags: list[str] = Field(default_factory=list)


# Usage
product = Product(name="Phone", price=999.99)
print(product.model_json_schema())
# The output includes the field constraints and descriptions

Key differences

Component    Role
BaseModel    Defines the data model; provides validation, coercion, serialization, and the other core functionality.
Field        Refines individual fields with rules (validation, defaults) and metadata (descriptions, examples).

Common uses

  • API development: in FastAPI, BaseModel defines request/response models and Field adds parameter validation.

  • Configuration loading: validating the structure and content of config files (YAML/JSON); see the sketch after this section.

  • Data preprocessing: making sure input data matches the expected format.

By combining the two you can build strongly typed, safe, self-documenting data models.
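As a small illustration of the configuration-loading use case (a minimal sketch; the AppConfig fields and the sample dict are made up for this example):

from pydantic import BaseModel, Field


class AppConfig(BaseModel):
    host: str = Field(default="127.0.0.1", description="Bind address")
    port: int = Field(default=8000, ge=1, le=65535)
    debug: bool = False


# In a real app this dict would come from e.g. yaml.safe_load(open("config.yaml"))
raw_config = {"host": "0.0.0.0", "port": "9090"}  # the port string is coerced to int

config = AppConfig(**raw_config)
print(config.port, config.debug)  # 9090 False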

Annotated and TypedDict

In Python's typing_extensions module, Annotated and TypedDict are tools for enriching type hints. Their main roles are:


1. Annotated

Annotated attaches extra metadata to a type without changing the type itself. It is commonly combined with Pydantic's Field or other validation libraries to express richer constraints or documentation.

Core uses:

  • Attaching metadata: e.g., validation rules, documentation, example values.

  • Working with Pydantic: in Pydantic models, Annotated can replace direct use of Field, making the code more concise.

  • Friendly to static type checking: the metadata is transparent to type checkers (such as mypy) and does not affect type inference.

Example:

from typing_extensions import Annotated
from pydantic import BaseModel, Field

# Use Annotated to define a field type that carries constraints
NameType = Annotated[str, Field(min_length=1, max_length=50, example="Alice")]


class User(BaseModel):
    name: NameType
    age: Annotated[int, Field(gt=0)]


# Equivalent to using Field directly
class UserTraditional(BaseModel):
    name: str = Field(..., min_length=1, max_length=50)
    age: int = Field(gt=0)

2. TypedDict

TypedDict defines a dictionary structure with fixed keys and specific value types (similar to a TypeScript interface). It lets static type checkers understand the "shape" of a dict and catch misspelled keys or mismatched value types.

Core uses:

  • Structured dicts: stating explicitly which keys exist and what type each key's value has.

  • Replacing Dict[str, Any]: more precise type hints and safer code.

  • JSON data handling: ensuring data structures match expectations when parsing JSON or handling API responses.
  • JSON 数据处理:在解析 JSON 或处理 API 响应时,确保数据结构符合预期。

Example:

from typing_extensions import TypedDict


# Define a TypedDict
class UserProfile(TypedDict):
    name: str
    age: int
    is_active: bool


# Valid data
user1: UserProfile = {"name": "Alice", "age": 25, "is_active": True}

# The type checker flags errors here
user2: UserProfile = {"name": "Bob", "age": "30"}  # error: age should be int, and is_active is missing
 

Key differences

Component    Role
Annotated    Attaches metadata (validation rules, documentation) to a type without changing the type itself.
TypedDict    Defines a fixed dictionary structure: which keys exist and what type each value has.

Common usage scenarios

  • Annotated

    • Replacing Field in Pydantic models to simplify definitions.

    • Adding custom markers to a type (e.g., a unit, meters: Annotated[float, "meters"]; see the sketch after this list).

  • TypedDict

    • Handling dict-shaped data from dynamic sources (JSON, API responses).

    • A lighter alternative to dataclass or NamedTuple when you want the flexibility of a dict.
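A minimal sketch of the custom-marker idea above (the "meters" tag and the describe function are just illustrative; plain Annotated metadata has no runtime effect unless something reads it):

from typing import get_type_hints

from typing_extensions import Annotated

Meters = Annotated[float, "meters"]


def describe(height: Meters) -> str:
    return f"{height} m"


# The metadata can be recovered via introspection, e.g. by a validation layer.
hints = get_type_hints(describe, include_extras=True)
print(hints["height"].__metadata__)  # ('meters',)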


Working with Pydantic

  • Annotated + Field: inline field constraints inside Pydantic models.

  • TypedDict: can serve as an alternative to Pydantic models (but with no runtime validation; pair it with mypy).

Example of using them together:

from pydantic import BaseModel, Field
from typing_extensions import Annotated, TypedDict


# Option 1: a Pydantic model (with runtime validation)
class UserModel(BaseModel):
    name: Annotated[str, Field(max_length=10)]
    age: int


# Option 2: a TypedDict (static type checking only)
class UserDict(TypedDict):
    name: str
    age: int

Summary

  • Need runtime validation → use Pydantic's BaseModel plus Field/Annotated

  • Only need static type checking → use TypedDict

  • Need to attach metadata to a type → use Annotated
