
Fusing the Polarity-aware Linear Attention from PolaFormer [ICLR 2025] into YOLO


YOLOv11/v10/v8 usage tutorial: YOLOv11 from getting started to mastery

YOLOv11 improvements roundup: YOLOv11 and self-developed model update summary


《PolaFormer: Polarity-aware Linear Attention for Vision Transformers》

1. Module Introduction

        Paper link: https://arxiv.org/pdf/2501.15061

        Code link: https://github.com/ZacharyMeng/PolaFormer/tree/main

Paper overview:

        Linear attention has emerged as a promising alternative to softmax-based attention, using kernelized feature maps to reduce complexity from quadratic to linear in sequence length. However, the non-negativity constraint on the feature maps and the relaxed exponential function used in approximation lead to significant information loss compared with the original query-key dot products, producing less discriminative attention maps with higher entropy. To address the missing interactions driven by negative values in query-key pairs, the authors propose a polarity-aware linear attention mechanism that explicitly models both same-signed and opposite-signed query-key interactions, ensuring comprehensive coverage of relational information. Furthermore, to restore the spiky property of attention maps, they provide a theoretical analysis proving the existence of a class of element-wise functions (with positive first and second derivatives) that reduce the entropy of the attention distribution. For simplicity, and recognizing the distinct contribution of each dimension, they employ learnable power functions for rescaling, allowing strong and weak attention signals to be separated.

Summary: this post integrates the paper's polarity-aware linear attention mechanism.
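The core idea can be sketched in a few lines of PyTorch. This is a simplified illustration of the polarity decomposition only (no learnable power rescaling, positional encoding, or depthwise convolution from the full module below); function and variable names are my own, not from the paper's code:

```python
import torch

def polarity_linear_attention(q, k, v, eps=1e-6):
    # Split q and k into non-negative positive/negative parts so that
    # same-sign and opposite-sign query-key interactions are kept separate.
    q_pos, q_neg = torch.relu(q), torch.relu(-q)
    k_pos, k_neg = torch.relu(k), torch.relu(-k)

    q_sim = torch.cat([q_pos, q_neg], dim=-1)  # same-sign pairing
    q_opp = torch.cat([q_neg, q_pos], dim=-1)  # opposite-sign pairing (halves swapped)
    k_cat = torch.cat([k_pos, k_neg], dim=-1)

    # Linear attention: associate (k^T v) first, so cost is linear in sequence length.
    kv = k_cat.transpose(-2, -1) @ v
    k_sum = k_cat.sum(dim=-2, keepdim=True).transpose(-2, -1)
    out_sim = (q_sim @ kv) / (q_sim @ k_sum + eps)
    out_opp = (q_opp @ kv) / (q_opp @ k_sum + eps)
    return out_sim, out_opp

torch.manual_seed(0)
q = torch.randn(2, 5, 8)
k = torch.randn(2, 5, 8)
v = torch.randn(2, 5, 8)
out_sim, out_opp = polarity_linear_attention(q, k, v)
```

In the full module, the two branches attend to separate halves of the value vector and are concatenated, so no relational information from negative query-key components is discarded.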


⭐⭐The derived modules in this post are updated only in the paid group; past free tutorials are available at the link below⭐⭐

YOLOv11 and self-developed model update summary (with free tutorials): https://xy2668825911.blog.csdn.net/article/details/143633356

2. Derived Fusion Module

2.1 Module Code

# PolaFormer: Polarity-aware Linear Attention for Vision Transformers
# https://blog.csdn.net/StopAndGoyyy?spm=1011.2124.3001.5343
import torch
import torch.nn as nn


class PolaLinearAttention(nn.Module):
    def __init__(self, dim, num_patches=2, num_heads=8, qkv_bias=False, qk_scale=None,
                 attn_drop=0., proj_drop=0., sr_ratio=1, kernel_size=5, alpha=4):
        super().__init__()
        assert dim % num_heads == 0, f"dim {dim} should be divided by num_heads {num_heads}."

        self.dim = dim
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.head_dim = head_dim

        self.qg = nn.Linear(dim, 2 * dim, bias=qkv_bias)
        self.kv = nn.Linear(dim, dim * 2, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        self.sr_ratio = sr_ratio
        if sr_ratio > 1:
            # Spatial reduction of k/v, as in PVT-style attention
            self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
            self.norm = nn.LayerNorm(dim)

        # Depthwise convolution applied to the value path
        self.dwc = nn.Conv2d(in_channels=head_dim, out_channels=head_dim, kernel_size=kernel_size,
                             groups=head_dim, padding=kernel_size // 2)

        # Learnable per-dimension power for entropy reduction (spikiness)
        self.power = nn.Parameter(torch.zeros(size=(1, self.num_heads, 1, self.head_dim)))
        self.alpha = alpha

        self.scale = nn.Parameter(torch.zeros(size=(1, 1, dim)))
        # self.positional_encoding = nn.Parameter(torch.zeros(size=(1, num_patches // (sr_ratio * sr_ratio), dim)))
        self.positional_encoding = nn.Parameter(torch.zeros(size=(1, 1, dim)))
        print('Linear Attention sr_ratio{} f{} kernel{}'.format(sr_ratio, alpha, kernel_size))

    def forward(self, x):
        B, C, H, W = x.shape
        x = x.view(B, C, -1)
        x = x.transpose(2, 1)  # (B, N, C) with N = H * W
        B, N, C = x.shape
        q, g = self.qg(x).reshape(B, N, 2, C).unbind(2)

        if self.sr_ratio > 1:
            x_ = x.permute(0, 2, 1).reshape(B, C, H, W)
            x_ = self.sr(x_).reshape(B, C, -1).permute(0, 2, 1)
            x_ = self.norm(x_)
            kv = self.kv(x_).reshape(B, -1, 2, C).permute(2, 0, 1, 3)
        else:
            kv = self.kv(x).reshape(B, -1, 2, C).permute(2, 0, 1, 3)
        k, v = kv[0], kv[1]
        n = k.shape[1]

        k = k + self.positional_encoding
        kernel_function = nn.ReLU()

        scale = nn.Softplus()(self.scale)
        power = 1 + self.alpha * nn.functional.sigmoid(self.power)

        q = q / scale
        k = k / scale
        q = q.reshape(B, N, self.num_heads, -1).permute(0, 2, 1, 3).contiguous()
        k = k.reshape(B, n, self.num_heads, -1).permute(0, 2, 1, 3).contiguous()
        v = v.reshape(B, n, self.num_heads, -1).permute(0, 2, 1, 3).contiguous()

        # Polarity decomposition: positive/negative parts, each raised to a learnable power
        q_pos = kernel_function(q) ** power
        q_neg = kernel_function(-q) ** power
        k_pos = kernel_function(k) ** power
        k_neg = kernel_function(-k) ** power

        q_sim = torch.cat([q_pos, q_neg], dim=-1)  # same-sign interactions
        q_opp = torch.cat([q_neg, q_pos], dim=-1)  # opposite-sign interactions
        k = torch.cat([k_pos, k_neg], dim=-1)

        v1, v2 = torch.chunk(v, 2, dim=-1)

        # Linear attention, one branch per polarity pairing
        z = 1 / (q_sim @ k.mean(dim=-2, keepdim=True).transpose(-2, -1) + 1e-6)
        kv = (k.transpose(-2, -1) * (n ** -0.5)) @ (v1 * (n ** -0.5))
        x_sim = q_sim @ kv * z
        z = 1 / (q_opp @ k.mean(dim=-2, keepdim=True).transpose(-2, -1) + 1e-6)
        kv = (k.transpose(-2, -1) * (n ** -0.5)) @ (v2 * (n ** -0.5))
        x_opp = q_opp @ kv * z

        x = torch.cat([x_sim, x_opp], dim=-1)
        x = x.transpose(1, 2).reshape(B, N, C)

        if self.sr_ratio > 1:
            v = nn.functional.interpolate(v.transpose(-2, -1).reshape(B * self.num_heads, -1, n), size=N,
                                          mode='linear').reshape(B, self.num_heads, -1, N).transpose(-2, -1)
        v = v.reshape(B * self.num_heads, H, W, -1).permute(0, 3, 1, 2)
        v = self.dwc(v).reshape(B, C, N).permute(0, 2, 1)

        x = x + v
        x = x * g
        x = self.proj(x)
        x = self.proj_drop(x)
        # (B, N, C) -> (B, C, H, W); the transpose is needed so channels come first
        return x.transpose(1, 2).reshape(B, C, H, W)
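The learnable power rescaling in the module above (`power = 1 + self.alpha * sigmoid(self.power)`, always greater than 1) reflects the paper's entropy analysis: raising non-negative attention weights to a power above 1 and renormalizing sharpens the distribution, i.e. lowers its entropy. A small self-contained check of this property (illustrative only, not part of the original post):

```python
import torch

def entropy(w, eps=1e-12):
    # Shannon entropy of a probability distribution along the last axis
    return -(w * (w + eps).log()).sum(dim=-1)

def sharpen(w, p):
    # Element-wise power p > 1 followed by renormalization; for softmax
    # weights this is equivalent to lowering the temperature
    wp = w ** p
    return wp / wp.sum(dim=-1, keepdim=True)

torch.manual_seed(0)
attn = torch.softmax(torch.randn(4, 16), dim=-1)
sharpened = sharpen(attn, 2.0)
```

Every row of `sharpened` has entropy no greater than the corresponding row of `attn`, which is the "spikiness restoration" effect the paper targets.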

2.2 Modify the yaml File (using the self-developed model as an example)

yaml file walkthrough: YOLO series ".yaml" file explained - CSDN blog

       Open and modify the YOLOv11.yaml file under the ultralytics/cfg/models/11 path, replacing the original module.
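Before the yaml can reference `PolaLinearAttention`, the class must also be visible to ultralytics' model parser, since `parse_model` resolves non-`nn.` module names from the namespace of `ultralytics/nn/tasks.py`. A common approach (file locations here are assumptions, adjust to your setup) is to save the code from section 2.1 as a module inside the ultralytics tree and import it in that file:

```python
# In ultralytics/nn/tasks.py, near the other module imports
# (the file name pola.py is hypothetical):
from ultralytics.nn.modules.pola import PolaLinearAttention
```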

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect
# ⭐⭐Powered by https://blog.csdn.net/StopAndGoyyy, technical support QQ: 2668825911⭐⭐

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 377 layers, 2,249,525 parameters, 2,249,509 gradients, 8.7 GFLOPs / 258 layers, 2,219,405 parameters, 0 gradients, 8.5 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 377 layers, 8,082,389 parameters, 8,082,373 gradients, 29.8 GFLOPs / 258 layers, 7,972,885 parameters, 0 gradients, 29.2 GFLOPs
  m: [0.50, 1.00, 512] # summary: 377 layers, 20,370,221 parameters, 20,370,205 gradients, 103.0 GFLOPs / 258 layers, 20,153,773 parameters, 0 gradients, 101.2 GFLOPs
  l: [1.00, 1.00, 512] # summary: 521 layers, 23,648,717 parameters, 23,648,701 gradients, 124.5 GFLOPs / 330 layers, 23,226,989 parameters, 0 gradients, 121.2 GFLOPs
  x: [1.00, 1.50, 512] # summary: 521 layers, 53,125,237 parameters, 53,125,221 gradients, 278.9 GFLOPs / 330 layers, 52,191,589 parameters, 0 gradients, 272.1 GFLOPs
#  n: [0.33, 0.25, 1024]
#  s: [0.50, 0.50, 1024]
#  m: [0.67, 0.75, 768]
#  l: [1.00, 1.00, 512]
#  x: [1.00, 1.25, 512]
# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, RCRep2A, [128, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 1, PolaLinearAttention, []]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 4, RCRep2A, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, RCRep2A, [1024, True]]
  - [-1, 1, SPPF_WD, [1024, 7]] # 9

# YOLO11n head
head:
  - [[3, 5, 7], 1, align_3In, [256, 1]] # 10
  - [[4, 6, 9], 1, align_3In, [256, 1]] # 11
  - [[-1, -2], 1, Concat, [1]] # 12 cat
  - [-1, 1, RepVGGBlocks, []] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]] # 14
  - [[-1, 4], 1, Concat, [1]] # 15 cat
  - [-1, 1, Conv, [256, 3]] # 16
  - [13, 1, Conv, [512, 3]] # 17
  - [13, 1, Conv, [1024, 3, 2]] # 18
  - [[16, 17, 18], 1, Detect, [nc]] # Detect(P3, P4, P5)

2.3 Modify the train.py File

       Create a Train script for training.

from ultralytics.models import YOLO
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'

if __name__ == '__main__':
    model = YOLO(model='ultralytics/cfg/models/xy_YOLO/xy_yolov1.yaml')
    # model = YOLO(model='ultralytics/cfg/models/11/yolo11l.yaml')
    model.train(data='./datasets/data.yaml', epochs=1, batch=1, device='0', imgsz=320,
                workers=1, cache=False, amp=True, mosaic=False,
                project='run/train', name='exp')

         Fill in the path of the modified yaml in the train.py script and run it to start training; see the link below for a tutorial on creating the dataset.

YOLOv11 from getting started to mastery (with structure diagrams) - CSDN blog
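The `data.yaml` referenced in the training script follows the standard ultralytics dataset format; a minimal example (paths and class names below are placeholders, not from the original post):

```yaml
# ./datasets/data.yaml — minimal dataset config
path: ./datasets/my_dataset   # dataset root
train: images/train           # train images, relative to path
val: images/val               # val images, relative to path
nc: 2                         # number of classes (must match usage)
names: ["class0", "class1"]   # class names, index order matters
```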

