
A Multi-Head Self-Attention (MHSA) Enhanced YOLOv11 Backbone: Structural Innovation and Performance Optimization for High-Accuracy Object Detection

The rapid progress of deep learning in computer vision has driven continuous improvement in object detection algorithms. As a representative real-time detection framework, the YOLO family has attracted wide attention for its efficiency and accuracy. This article proposes a YOLOv11 backbone enhanced with Multi-Head Self-Attention (MHSA), aiming to improve the model's feature representation and global perception in complex scenes. By inserting an MHSA module at a key stage of the backbone, long-range dependencies are modeled explicitly and semantic information is fused more effectively.

Object detection is a core task in computer vision with broad applications in intelligent surveillance, autonomous driving, and image retrieval. Thanks to their end-to-end design and efficient inference, YOLO models have become a research focus in both industry and academia. YOLOv11, the latest version of the series, further improves overall performance through an optimized detection head and feature extraction pipeline. However, in complex scenes involving occlusion, scale variation, and dense objects, the limitations of conventional convolutional networks, namely local receptive fields and fixed weight allocation, become increasingly apparent. In recent years, attention mechanisms have been widely adopted for object detection; among them, Multi-Head Self-Attention (MHSA) excels at capturing long-range dependencies and has performed strongly in image classification and segmentation. Motivated by this, this article integrates an MHSA module into a key stage of the YOLOv11 backbone to build a backbone with stronger semantic representation and further improve performance on high-accuracy detection tasks.

1. Multi-Head Self-Attention (MHSA)

Paper: https://arxiv.org/pdf/1706.03762
Official code: https://github.com/tensorflow/tensor2tensor
(Figure: Transformer model architecture)
As the core component of the Transformer model, Multi-Head Self-Attention processes several attention heads in parallel, allowing the model to attend simultaneously to information at different positions and in different representation subspaces of a sequence. The mechanism extends self-attention (also called intra-attention): when encoding each position of a sequence, it weighs that position against every other position to capture their pairwise relevance.
(Figure: Multi-Head Attention)

The MHSA module follows the Transformer's attention formulation (later adopted for images by the Vision Transformer, ViT); its core idea is to model the input feature map globally through multi-head attention:
$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$

where $Q$, $K$, and $V$ denote the query, key, and value matrices, and the dot products are scaled by $\sqrt{d_k}$, the square root of the key dimension. The input features are mapped into multiple heads, attention weights are computed for each head in parallel, and the per-head outputs are concatenated, which strengthens the model's representational capacity.
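To make the formula concrete, the following minimal PyTorch sketch (an illustration only, not the module added to YOLOv11 below; the learned query/key/value projections are omitted for brevity) flattens a feature map into tokens, splits the channels into heads, applies scaled dot-product attention per head, and concatenates the heads back:

import torch
import torch.nn.functional as F

def multi_head_self_attention(x, heads=4):
    """Scaled dot-product self-attention over a feature map, split into several heads."""
    B, C, H, W = x.shape
    d_k = C // heads                                               # per-head dimension
    tokens = x.flatten(2).transpose(1, 2)                          # (B, H*W, C): one token per spatial position
    q = k = v = tokens.reshape(B, H * W, heads, d_k).transpose(1, 2)  # (B, heads, H*W, d_k)
    scores = q @ k.transpose(-2, -1) / (d_k ** 0.5)                # (B, heads, H*W, H*W)
    out = F.softmax(scores, dim=-1) @ v                            # (B, heads, H*W, d_k)
    out = out.transpose(1, 2).reshape(B, H * W, C)                 # concatenate the heads
    return out.transpose(1, 2).reshape(B, C, H, W)                 # back to feature-map layout

# Example: a 256-channel P5-level feature map (20x20 for a 640x640 input)
y = multi_head_self_attention(torch.randn(2, 256, 20, 20), heads=4)
print(y.shape)  # torch.Size([2, 256, 20, 20])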

2. Integrating MHSA into YOLOv11

1. Paste the following code into /ultralytics/ultralytics/nn/modules/block.py

import torch
import torch.nn as nn  # block.py normally imports these already; keep them only if pasting elsewhere


class MHSA(nn.Module):
    """Multi-Head Self-Attention over a 2D feature map, with optional relative position embedding."""

    def __init__(self, n_dims, width=14, height=14, heads=4, pos_emb=False):
        super(MHSA, self).__init__()
        self.heads = heads
        # 1x1 convolutions act as the query / key / value projections
        self.query = nn.Conv2d(n_dims, n_dims, kernel_size=1)
        self.key = nn.Conv2d(n_dims, n_dims, kernel_size=1)
        self.value = nn.Conv2d(n_dims, n_dims, kernel_size=1)
        self.pos = pos_emb
        if self.pos:
            # Learnable relative position embeddings along height and width (only used when pos_emb=True)
            self.rel_h_weight = nn.Parameter(torch.randn([1, heads, n_dims // heads, 1, int(height)]), requires_grad=True)
            self.rel_w_weight = nn.Parameter(torch.randn([1, heads, n_dims // heads, int(width), 1]), requires_grad=True)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        n_batch, C, width, height = x.size()
        # Project and reshape to (batch, heads, C // heads, H*W)
        q = self.query(x).view(n_batch, self.heads, C // self.heads, -1)
        k = self.key(x).view(n_batch, self.heads, C // self.heads, -1)
        v = self.value(x).view(n_batch, self.heads, C // self.heads, -1)

        # Content-content attention scores: (batch, heads, H*W, H*W)
        content_content = torch.matmul(q.permute(0, 1, 3, 2), k)
        c1, c2, c3, c4 = content_content.size()

        if self.pos:
            # Content-position term from the relative position embeddings
            content_position = (self.rel_h_weight + self.rel_w_weight).view(1, self.heads, C // self.heads, -1).permute(0, 1, 3, 2)
            content_position = torch.matmul(content_position, q)
            content_position = content_position if (content_content.shape == content_position.shape) else content_position[:, :, :c3, ]
            assert content_content.shape == content_position.shape
            energy = content_content + content_position
        else:
            energy = content_content

        attention = self.softmax(energy)
        # Weighted sum of the values, then restore the (batch, C, H, W) layout
        out = torch.matmul(v, attention.permute(0, 1, 3, 2))
        out = out.view(n_batch, C, width, height)
        return out
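Before touching the configs, it can be worth sanity-checking the module in isolation. The snippet below is a minimal check assuming the class is importable from ultralytics.nn.modules.block (i.e. after step 2): it verifies that MHSA preserves the input shape. Note that with the default pos_emb=False, which is what the yaml files below use, the width/height arguments are not used at runtime, so the spatial size of the input does not need to match them.

import torch
from ultralytics.nn.modules.block import MHSA   # importable after steps 1-2

x = torch.randn(1, 256, 20, 20)                  # e.g. the P5 feature map of a 640x640 input
m = MHSA(256, width=14, height=14, heads=4)      # same arguments the yaml resolves to
print(m(x).shape)                                # torch.Size([1, 256, 20, 20]) -- shape preserved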

2. Edit the __init__.py file in the modules folder: first import the class, then declare it in __all__. A sketch of both edits follows.
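The original screenshots are not reproduced here; the following is a minimal sketch of the two edits, assuming a recent ultralytics layout (the existing import list and __all__ tuple in your copy contain many more names, which should all be kept):

# ultralytics/nn/modules/__init__.py (sketch; keep all existing entries)
from .block import MHSA  # add MHSA to the existing "from .block import (...)" list

__all__ = (
    # ... existing names ...
    "MHSA",  # export the new module so it can be imported in tasks.py (step 4)
)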
3. Create a new file named yolo11_MHSA.yaml under /ultralytics/ultralytics/cfg/models/11 (task-specific variants are given below; see the note after them on how the MHSA arguments are resolved)

  • Object detection
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10
  - [-1, 1, MHSA, [14, 14, 4]] # 11

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 11], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
  - [[17, 20, 23], 1, Detect, [nc]] # Detect(P3, P4, P5)
  • Instance segmentation
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 instance segmentation model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/segment

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10
  - [-1, 1, MHSA, [14, 14, 4]] # 11

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 11], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
  - [[17, 20, 23], 1, Segment, [nc, 32, 256]] # Segment(P3, P4, P5)
  • Oriented object detection (OBB)

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 Oriented Bounding Boxes (OBB) model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/obb

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10
  - [-1, 1, MHSA, [14, 14, 4]] # 11

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 11], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
  - [[17, 20, 23], 1, OBB, [nc, 1]] # OBB(P3, P4, P5)
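In all three configurations above, the MHSA entry lists only [14, 14, 4], i.e. the width, height, and heads arguments; the input channel count is prepended automatically by the parse_model branch added in step 4. At the n scale the layer therefore resolves to the construction shown below, which also appears in the model summary printed before training:

# How the yaml entry  [-1, 1, MHSA, [14, 14, 4]]  is resolved at the 'n' scale:
MHSA(256, 14, 14, 4)   # n_dims=256 (channels at P5 after width scaling), width=14, height=14, heads=4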

This article only adds a module on top of the base YOLOv11 configuration. To apply it to yolo11n/s/m/l/x, simply set the corresponding depth_multiple, width_multiple, and max_channel values:

# YOLOv11n
depth_multiple: 0.50  # model depth multiple
width_multiple: 0.25  # layer channel multiple
max_channel: 1024

# YOLOv11s
depth_multiple: 0.50  # model depth multiple
width_multiple: 0.50  # layer channel multiple
max_channel: 1024

# YOLOv11m
depth_multiple: 0.50  # model depth multiple
width_multiple: 1.00  # layer channel multiple
max_channel: 512

# YOLOv11l
depth_multiple: 1.00  # model depth multiple
width_multiple: 1.00  # layer channel multiple
max_channel: 512

# YOLOv11x
depth_multiple: 1.00  # model depth multiple
width_multiple: 1.50  # layer channel multiple
max_channel: 512

4. Register the module in tasks.py: find the parse_model function and add MHSA

First import the class at the top of tasks.py, then locate the parse_model function and add a branch for MHSA; a sketch of both edits follows.
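The exact layout of parse_model differs between ultralytics versions, so treat the following as an illustrative sketch rather than the definitive code: import the module next to the existing module imports, then add an elif branch inside parse_model's module dispatch (indentation must match the surrounding if/elif chain). Since MHSA preserves the channel count, the branch reports the input channels as the output channels and prepends them to the yaml arguments:

# ultralytics/nn/tasks.py (sketch of the two edits)
from ultralytics.nn.modules import MHSA   # add next to the existing module imports

# inside parse_model(), in the per-layer loop that dispatches on the module type:
        elif m is MHSA:
            c2 = ch[f]            # output channels = input channels (MHSA preserves shape)
            args = [c2, *args]    # prepend n_dims -> MHSA(n_dims, width, height, heads)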

5. Run training with train.py

from ultralytics import YOLO
import warnings
warnings.filterwarnings('ignore')
from pathlib import Path

if __name__ == '__main__':
    # Load the model from the modified config created in step 3
    model = YOLO("ultralytics/cfg/models/11/yolo11_MHSA.yaml")  # path to the model yaml file

    # Train the model
    results = model.train(
        data=r"path/to/your/dataset.yaml",  # path to your dataset yaml
        epochs=100,
        batch=16,
        imgsz=640,
        workers=4,
        name=Path(model.cfg).stem,
    )
                   from  n    params  module                                       arguments
  0                  -1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  1                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]
  2                  -1  1      6640  ultralytics.nn.modules.block.C3k2            [32, 64, 1, False, 0.25]
  3                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
  4                  -1  1     26080  ultralytics.nn.modules.block.C3k2            [64, 128, 1, False, 0.25]
  5                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
  6                  -1  1     87040  ultralytics.nn.modules.block.C3k2            [128, 128, 1, True]
  7                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]
  8                  -1  1    346112  ultralytics.nn.modules.block.C3k2            [256, 256, 1, True]
  9                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]
 10                  -1  1    249728  ultralytics.nn.modules.block.C2PSA           [256, 256, 1]
 11                  -1  1    197376  ultralytics.nn.modules.block.MHSA            [256, 14, 14, 4]
 12                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 13             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 14                  -1  1    111296  ultralytics.nn.modules.block.C3k2            [384, 128, 1, False]
 15                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 16             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 17                  -1  1     32096  ultralytics.nn.modules.block.C3k2            [256, 64, 1, False]
 18                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
 19            [-1, 14]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 20                  -1  1     86720  ultralytics.nn.modules.block.C3k2            [192, 128, 1, False]
 21                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
 22            [-1, 11]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 23                  -1  1    378880  ultralytics.nn.modules.block.C3k2            [384, 256, 1, True]
 24        [17, 20, 23]  1    464912  ultralytics.nn.modules.head.Detect           [80, [64, 128, 256]]
YOLO11_MHSA summary: 324 layers, 2,821,456 parameters, 2,821,440 gradients, 6.8 GFLOPs
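After training finishes, the usual Ultralytics workflow applies to the new model. As a minimal example (the checkpoint path assumes the default runs/detect/<name> layout produced by the script above; adjust it to your own run and data):

from ultralytics import YOLO

# Load the best checkpoint saved by the training run above
model = YOLO("runs/detect/yolo11_MHSA/weights/best.pt")

metrics = model.val(data=r"path/to/your/dataset.yaml")       # evaluate on the validation split
results = model.predict("path/to/an/image.jpg", imgsz=640)   # run inference on a sample image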


3. Network Structure Diagram

(Figure: overall structure of YOLOv11 with the MHSA-enhanced backbone)

4. Conclusion and Outlook

This article presented a YOLOv11 backbone enhanced with Multi-Head Self-Attention (MHSA), which strengthens the detector's global modeling ability and detection accuracy and makes it more robust in complex scenes.
Future work will explore the following directions:

  • A dynamic MHSA mechanism that adaptively activates the attention module depending on the input content;
  • Cross-attention in the detection head to enable finer-grained feature interaction;
  • A lightweight MHSA design to further reduce the model's parameter count.
