

Paper: https://arxiv.org/abs/2502.12524

Code: https://github.com/sunsmarterjie/yolov12


Abstract:

Enhancing the network architecture of the YOLO framework has long been crucial, yet improvements have focused on CNN-based designs, even though attention mechanisms have proven superior in modeling capability. The reason is that attention-based models have been unable to match the speed of CNN-based models. This paper proposes an attention-centric YOLO framework, YOLOv12, that matches the speed of previous CNN-based YOLO frameworks while exploiting the performance benefits of attention. YOLOv12 surpasses all popular real-time object detectors in both accuracy and speed. For example, YOLOv12-N achieves 40.6% mAP with an inference latency of 1.64 ms on a T4 GPU, outperforming the advanced YOLOv10-N / YOLOv11-N by 2.1% / 1.2% mAP at comparable speed. This advantage extends to other model scales. YOLOv12 also surpasses end-to-end real-time detectors that improve on DETR, such as RT-DETR / RT-DETRv2: YOLOv12-S runs 42% faster than RT-DETR-R18 / RT-DETRv2-R18 while using only 36% of the computation and 45% of the parameters. See Figure 1 for more comparisons.

Summary: the authors propose the YOLOv12 object detection model, which tests out faster and more accurate than its predecessors, with its innovations centered on the attention mechanism.


I. Summary of Innovations

The authors build YOLOv12, a detection model with attention at its core. The main innovations are as follows:

1. A simple and effective area-attention mechanism (a short sketch of the idea follows this list).

2. An efficient aggregation network structure, R-ELAN (residual efficient layer aggregation networks).
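The core idea of area-attention, as implemented in the code further below, is to split the flattened H*W token sequence into `area` equal segments and run self-attention inside each segment, which cuts the attention cost from O(N^2) to roughly N^2/area. A minimal sketch of this split (my illustration; the shapes and variable names are arbitrary, not taken from the repository):

import torch

B, C, H, W, area = 2, 64, 16, 16, 4
tokens = torch.randn(B, H * W, C)                       # flattened feature map, N = H * W tokens
grouped = tokens.reshape(B * area, (H * W) // area, C)  # attention runs only inside each group
print(grouped.shape)                                    # torch.Size([8, 64, 64])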

The authors' area-attention code is as follows:

class AAttn(nn.Module):
    """Area-attention module with the requirement of flash attention.

    Attributes:
        dim (int): Number of hidden channels;
        num_heads (int): Number of heads into which the attention mechanism is divided;
        area (int, optional): Number of areas the feature map is divided. Defaults to 1.

    Methods:
        forward: Performs a forward process of input tensor and outputs a tensor after the execution of the area attention mechanism.

    Examples:
        >>> import torch
        >>> from ultralytics.nn.modules import AAttn
        >>> model = AAttn(dim=64, num_heads=2, area=4)
        >>> x = torch.randn(2, 64, 128, 128)
        >>> output = model(x)
        >>> print(output.shape)

    Notes: recommend that dim//num_heads be a multiple of 32 or 64.
    """

    def __init__(self, dim, num_heads, area=1):
        """Initializes the area-attention module, a simple yet efficient attention module for YOLO."""
        super().__init__()
        self.area = area
        self.num_heads = num_heads
        self.head_dim = head_dim = dim // num_heads
        all_head_dim = head_dim * self.num_heads

        self.qkv = Conv(dim, all_head_dim * 3, 1, act=False)
        self.proj = Conv(all_head_dim, dim, 1, act=False)
        self.pe = Conv(all_head_dim, dim, 7, 1, 3, g=dim, act=False)

    def forward(self, x):
        """Processes the input tensor 'x' through the area-attention"""
        B, C, H, W = x.shape
        N = H * W

        qkv = self.qkv(x).flatten(2).transpose(1, 2)
        if self.area > 1:
            qkv = qkv.reshape(B * self.area, N // self.area, C * 3)
            B, N, _ = qkv.shape
        q, k, v = qkv.view(B, N, self.num_heads, self.head_dim * 3).split(
            [self.head_dim, self.head_dim, self.head_dim], dim=3
        )

        # if x.is_cuda:
        #     x = flash_attn_func(
        #         q.contiguous().half(),
        #         k.contiguous().half(),
        #         v.contiguous().half()
        #     ).to(q.dtype)
        # else:
        q = q.permute(0, 2, 3, 1)
        k = k.permute(0, 2, 3, 1)
        v = v.permute(0, 2, 3, 1)
        attn = (q.transpose(-2, -1) @ k) * (self.head_dim ** -0.5)
        max_attn = attn.max(dim=-1, keepdim=True).values
        exp_attn = torch.exp(attn - max_attn)
        attn = exp_attn / exp_attn.sum(dim=-1, keepdim=True)
        x = (v @ attn.transpose(-2, -1))
        x = x.permute(0, 3, 1, 2)
        v = v.permute(0, 3, 1, 2)

        if self.area > 1:
            x = x.reshape(B // self.area, N * self.area, C)
            v = v.reshape(B // self.area, N * self.area, C)
            B, N, _ = x.shape

        x = x.reshape(B, H, W, C).permute(0, 3, 1, 2)
        v = v.reshape(B, H, W, C).permute(0, 3, 1, 2)

        x = x + self.pe(v)
        x = self.proj(x)
        return x

Structurally this follows a pattern similar to the C2PSA module in YOLOv11 and uses flash-attn to accelerate the attention computation. Installing flash-attn requires a build that matches your CUDA, PyTorch, and Python versions; Windows users can replace the official AAttn code with the code above, which comments out the flash-attn branch and falls back to the plain softmax attention path, so flash-attn does not need to be installed.
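As a reference, a minimal sketch (my illustration, not code from the repository; the flag name HAS_FLASH_ATTN is an assumption) of making the backend choice explicit, so the same AAttn definition works whether or not a matching flash-attn wheel is installed:

# Sketch: try to import flash-attn; if no matching wheel is installed
# (e.g. on Windows), fall back to the plain softmax attention path in AAttn above.
try:
    from flash_attn import flash_attn_func  # needs a build matching CUDA, torch and Python
    HAS_FLASH_ATTN = True
except ImportError:
    flash_attn_func = None
    HAS_FLASH_ATTN = False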

The R-ELAN structure is shown in the figure below:

Based on this structure, the authors build the A2C2f module, which is similar in structure to the C2f/C3k2 modules; the code is as follows:


class ABlock(nn.Module):
    """ABlock class implementing a Area-Attention block with effective feature extraction.

    This class encapsulates the functionality for applying multi-head attention with feature map are dividing into areas
    and feed-forward neural network layers.

    Attributes:
        dim (int): Number of hidden channels;
        num_heads (int): Number of heads into which the attention mechanism is divided;
        mlp_ratio (float, optional): MLP expansion ratio (or MLP hidden dimension ratio). Defaults to 1.2;
        area (int, optional): Number of areas the feature map is divided. Defaults to 1.

    Methods:
        forward: Performs a forward pass through the ABlock, applying area-attention and feed-forward layers.

    Examples:
        Create a ABlock and perform a forward pass
        >>> model = ABlock(dim=64, num_heads=2, mlp_ratio=1.2, area=4)
        >>> x = torch.randn(2, 64, 128, 128)
        >>> output = model(x)
        >>> print(output.shape)

    Notes: recommend that dim//num_heads be a multiple of 32 or 64.
    """

    def __init__(self, dim, num_heads, mlp_ratio=1.2, area=1):
        """Initializes the ABlock with area-attention and feed-forward layers for faster feature extraction."""
        super().__init__()

        self.attn = AAttn(dim, num_heads=num_heads, area=area)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = nn.Sequential(Conv(dim, mlp_hidden_dim, 1), Conv(mlp_hidden_dim, dim, 1, act=False))

        self.apply(self._init_weights)

    def _init_weights(self, m):
        """Initialize weights using a truncated normal distribution."""
        if isinstance(m, nn.Conv2d):
            nn.init.trunc_normal_(m.weight, std=0.02)
        if isinstance(m, nn.Conv2d) and m.bias is not None:
            nn.init.constant_(m.bias, 0)

    def forward(self, x):
        """Executes a forward pass through ABlock, applying area-attention and feed-forward layers to the input tensor."""
        x = x + self.attn(x)
        x = x + self.mlp(x)
        return x


class A2C2f(nn.Module):
    """A2C2f module with residual enhanced feature extraction using ABlock blocks with area-attention. Also known as R-ELAN

    This class extends the C2f module by incorporating ABlock blocks for fast attention mechanisms and feature extraction.

    Attributes:
        c1 (int): Number of input channels;
        c2 (int): Number of output channels;
        n (int, optional): Number of 2xABlock modules to stack. Defaults to 1;
        a2 (bool, optional): Whether use area-attention. Defaults to True;
        area (int, optional): Number of areas the feature map is divided. Defaults to 1;
        residual (bool, optional): Whether use the residual (with layer scale). Defaults to False;
        mlp_ratio (float, optional): MLP expansion ratio (or MLP hidden dimension ratio). Defaults to 1.2;
        e (float, optional): Expansion ratio for R-ELAN modules. Defaults to 0.5.
        g (int, optional): Number of groups for grouped convolution. Defaults to 1;
        shortcut (bool, optional): Whether to use shortcut connection. Defaults to True;

    Methods:
        forward: Performs a forward pass through the A2C2f module.

    Examples:
        >>> import torch
        >>> from ultralytics.nn.modules import A2C2f
        >>> model = A2C2f(c1=64, c2=64, n=2, a2=True, area=4, residual=True, e=0.5)
        >>> x = torch.randn(2, 64, 128, 128)
        >>> output = model(x)
        >>> print(output.shape)
    """

    def __init__(self, c1, c2, n=1, a2=True, area=1, residual=False, mlp_ratio=2.0, e=0.5, g=1, shortcut=True):
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        assert c_ % 32 == 0, "Dimension of ABlock be a multiple of 32."

        # num_heads = c_ // 64 if c_ // 64 >= 2 else c_ // 32
        num_heads = c_ // 32

        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv((1 + n) * c_, c2, 1)  # optional act=FReLU(c2)

        init_values = 0.01  # or smaller
        self.gamma = nn.Parameter(init_values * torch.ones((c2)), requires_grad=True) if a2 and residual else None

        self.m = nn.ModuleList(
            nn.Sequential(*(ABlock(c_, num_heads, mlp_ratio, area) for _ in range(2))) if a2 else C3k(c_, c_, 2, shortcut, g)
            for _ in range(n)
        )

    def forward(self, x):
        """Forward pass through R-ELAN layer."""
        y = [self.cv1(x)]
        y.extend(m(y[-1]) for m in self.m)
        if self.gamma is not None:
            return x + (self.gamma * self.cv2(torch.cat(y, 1)).permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        return self.cv2(torch.cat(y, 1))
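A minimal usage sketch of the block above (my example, assuming the AAttn/ABlock/A2C2f definitions in this post together with ultralytics' Conv and C3k imports). With a2=True and residual=True the module returns x + gamma * f(x), where gamma is a learnable per-channel layer scale initialized to 0.01, so the block starts out close to an identity mapping:

import torch

# c2 * e must be a multiple of 32 (num_heads = c2 * e // 32), H * W must be
# divisible by the area setting, and the residual path also needs c1 == c2.
m = A2C2f(c1=64, c2=64, n=2, a2=True, area=4, residual=True, e=0.5)
x = torch.randn(2, 64, 32, 32)
print(m(x).shape)  # expected: torch.Size([2, 64, 32, 32])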

The model structure diagram is as follows:


To be continued tomorrow.


