
Qwen3-Reranker-0.6B Model Structure

Model Loading

import torch
from modelscope import AutoModel, AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-0.6B", padding_side='left')
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-0.6B").eval()
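
Note the padding_side='left' when loading the tokenizer: the reranker reads the logits at the last position of each sequence (see compute_logits below), and left padding keeps that position on the final real token of every prompt in a batch rather than on a pad token. A small illustration (the two example strings are made up):

batch = tokenizer(["a short pair", "a noticeably longer query and document pair"],
                  padding=True, return_tensors="pt")
# Pad tokens are prepended, so column -1 of input_ids always holds a real token,
# which is exactly the position whose logits are scored later.
print(batch["input_ids"][:, -1])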

Model Structure and Configuration

model
Qwen3ForCausalLM(
  (model): Qwen3Model(
    (embed_tokens): Embedding(151669, 1024)
    (layers): ModuleList(
      (0-27): 28 x Qwen3DecoderLayer(
        (self_attn): Qwen3Attention(
          (q_proj): Linear(in_features=1024, out_features=2048, bias=False)
          (k_proj): Linear(in_features=1024, out_features=1024, bias=False)
          (v_proj): Linear(in_features=1024, out_features=1024, bias=False)
          (o_proj): Linear(in_features=2048, out_features=1024, bias=False)
          (q_norm): Qwen3RMSNorm((128,), eps=1e-06)
          (k_norm): Qwen3RMSNorm((128,), eps=1e-06)
        )
        (mlp): Qwen3MLP(
          (gate_proj): Linear(in_features=1024, out_features=3072, bias=False)
          (up_proj): Linear(in_features=1024, out_features=3072, bias=False)
          (down_proj): Linear(in_features=3072, out_features=1024, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): Qwen3RMSNorm((1024,), eps=1e-06)
        (post_attention_layernorm): Qwen3RMSNorm((1024,), eps=1e-06)
      )
    )
    (norm): Qwen3RMSNorm((1024,), eps=1e-06)
    (rotary_emb): Qwen3RotaryEmbedding()
  )
  (lm_head): Linear(in_features=1024, out_features=151669, bias=False)
)
model.config
Qwen3Config {
  "architectures": ["Qwen3ForCausalLM"],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_types": [
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention"
  ],
  "max_position_embeddings": 40960,
  "max_window_layers": 28,
  "model_type": "qwen3",
  "num_attention_heads": 16,
  "num_hidden_layers": 28,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "sliding_window": null,
  "tie_word_embeddings": true,
  "torch_dtype": "float32",
  "transformers_version": "4.55.2",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151669
}
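
The module shapes above follow directly from these config values: q_proj projects hidden_size (1024) to num_attention_heads * head_dim = 16 * 128 = 2048, while k_proj and v_proj project to num_key_value_heads * head_dim = 8 * 128 = 1024, i.e. grouped-query attention with two query heads per KV head. Since tie_word_embeddings is true, the lm_head reuses the embedding matrix, which is how a 28-layer model with a 151669-token vocabulary stays at roughly 0.6B parameters. A quick sanity check (a minimal sketch, assuming the model loaded above):

cfg = model.config
print(cfg.num_attention_heads * cfg.head_dim)   # 2048 -> q_proj out_features
print(cfg.num_key_value_heads * cfg.head_dim)   # 1024 -> k_proj / v_proj out_features
# parameters() deduplicates the tied lm_head/embedding weight, so this lands near 0.6B
print(sum(p.numel() for p in model.parameters()) / 1e9)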

Model Usage

def format_instruction(instruction, query, doc):
    if instruction is None:
        instruction = 'Given a web search query, retrieve relevant passages that answer the query'
    output = "<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}".format(
        instruction=instruction, query=query, doc=doc)
    return output

def process_inputs(pairs):
    inputs = tokenizer(
        pairs, padding=False, truncation='longest_first',
        return_attention_mask=False,
        max_length=max_length - len(prefix_tokens) - len(suffix_tokens)
    )
    for i, ele in enumerate(inputs['input_ids']):
        inputs['input_ids'][i] = prefix_tokens + ele + suffix_tokens
    inputs = tokenizer.pad(inputs, padding=True, return_tensors="pt", max_length=max_length)
    for key in inputs:
        inputs[key] = inputs[key].to(model.device)
    return inputs

@torch.no_grad()
def compute_logits(inputs, **kwargs):
    batch_scores = model(**inputs).logits[:, -1, :]
    true_vector = batch_scores[:, token_true_id]
    false_vector = batch_scores[:, token_false_id]
    batch_scores = torch.stack([false_vector, true_vector], dim=1)
    batch_scores = torch.nn.functional.log_softmax(batch_scores, dim=1)
    scores = batch_scores[:, 1].exp().tolist()
    return scores

# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-0.6B", torch_dtype=torch.float16, attn_implementation="flash_attention_2").cuda().eval()

token_false_id = tokenizer.convert_tokens_to_ids("no")
token_true_id = tokenizer.convert_tokens_to_ids("yes")
max_length = 8192

prefix = "<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\".<|im_end|>\n<|im_start|>user\n"
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
prefix_tokens = tokenizer.encode(prefix, add_special_tokens=False)
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)

task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = [
    "What is the capital of China?",
    "Explain gravity",
]
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
pairs = [format_instruction(task, query, doc) for query, doc in zip(queries, documents)]

# Tokenize the input texts
inputs = process_inputs(pairs)
scores = compute_logits(inputs)
print("scores: ", scores)
scores:  [0.9994982481002808, 0.9993619322776794]
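
Each score is the model's probability for the "yes" token, i.e. an independent relevance estimate in [0, 1] for that query/document pair. To actually rerank candidates for a single query, score the query against every candidate and sort by the result. A minimal sketch reusing the helpers above (the candidate passages are made up for illustration):

query = "What is the capital of China?"
candidates = [
    "Beijing has served as the seat of government for several Chinese dynasties.",
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other.",
]
pairs = [format_instruction(task, query, doc) for doc in candidates]
# Higher score = more relevant to the query under the given instruction.
ranked = sorted(zip(candidates, compute_logits(process_inputs(pairs))),
                key=lambda x: x[1], reverse=True)
for doc, score in ranked:
    print(f"{score:.4f}  {doc}")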