
Deep Learning Project: An Exploratory Experiment on an SE-Enhanced ResNet50V2

  • 🍨 This post is a study-log entry for the 🔗 365-day deep learning training camp
  • 🍖 Original author: K同学啊

Preface

  • Attention mechanisms are now used very widely; this is my first time studying them, so these are my study notes.
  • Experiment: I combine the SE attention mechanism with ResNet50V2; after 10 epochs of training, it performs slightly better than plain ResNet50V2.
  • Feel free to bookmark and follow; I will keep updating.

References

  • https://blog.csdn.net/m0_37605642/article/details/135958384
  • https://zhuanlan.zhihu.com/p/631398525

Table of Contents

  • 1. Background Knowledge
    • 1.1 Introduction to Attention Mechanisms
    • 1.2 How Attention Works
    • 1.3 SENet
  • 2. SE + ResNet Experiment
    • 1. Loading the Data
      • 1. Importing libraries
      • 2. Inspecting and loading the data
      • 3. Visualizing samples
      • 4. Loading images
      • 5. Splitting the data
      • 6. Building data loaders
    • 2. Building the ResNet-50V2 Network
      • 1. The SE module
      • 2. ResNetV2
      • 3. SE + ResNetV2
    • 3. Training Setup
      • 1. Training loop
      • 2. Test loop
      • 3. Hyperparameters
    • 4. Model Training
      • 1. ResNet50V2
      • 2. SE + ResNet50V2
    • 5. Visualizing Results

1. Background Knowledge

1.1 Introduction to Attention Mechanisms

Let's first introduce two terms, volitional cues and nonvolitional cues, as shown in the figure:

(figure)

  • Nonvolitional cue: not driven by intent; as shown on the left of the figure, attention is drawn to the red cup.
  • Volitional cue: driven by intent, goal-directed; as shown on the right, attention is focused on the book.

1.2 How Attention Works

🔥 This section only covers the most basic form of attention, also known as attention scoring.

📝 In computer vision, convolutional, fully connected, and pooling layers only consider nonvolitional cues: convolutions extract features with fixed kernels, pooling reduces dimensionality, and every step is a fixed, regular computation.

The attention mechanism felt fuzzy to me when I first read the definitions, but it becomes much clearer with a concrete example. First, a few terms:

  • Source: a set of <Key, Value> pairs;
  • Query: the volitional cue; can also be understood as the target;
  • Key, Value: the Key and Value entries in the Source;
  • Attention weights: the relevance between the Query and each Key, computed with a scoring function;
  • Attention value: the weighted sum over the Values.

This is a bit abstract, so let's look at a figure and walk through an example:

(figure)

Goal: translate the Chinese word "我" into "me".

Here, Query, Key, and Value can be understood as follows:

  • Query: the volitional cue; here it is "我", since the goal is to translate "我" into "me";
  • Key: the salient features; computing the relevance between the Query and each Key tells us which Value the Query matches best, because Keys and Values are paired one-to-one;
  • Value: paired with a Key; whichever Key the Query is most relevant to, the corresponding Value is the result.

Concrete steps

  1. First compute the relevance between the Query and each Key, then pass the scores through a softmax function to obtain the attention scores; softmax maps the outputs into [0, 1].
    • The Query-Key relevance is computed with a scoring function:

    • (figure)

    • $\alpha(q, k_i)$ has many variants, such as additive attention and scaled dot-product attention.

    • Additive attention: $\alpha(q, k_i) = w_v^T \tanh(W_q q + W_k k_i)$, where $W_q$ and $W_k$ are learnable matrices that project the query $q$ and the key $k_i$ into a shared space to capture their relationship. A learnable matrix is just a set of weights, much like the slope $k$ in a linear function $y = kx$; the computation is illustrated below:

    • (figure)

    • Scaled dot-product attention: $\alpha(q, k_i) = \frac{q^T k_i}{\sqrt{d}}$, i.e., the dot product of the query and key scaled by $\sqrt{d}$ (in matrix form, $\frac{Q K^T}{\sqrt{d}}$).

  2. Weight the Values by the attention scores and sum them, producing an attention-weighted output for the downstream task. In step 1 we obtained the Query-Key relevance: the higher the relevance, the higher the attention score, and vice versa. Each Value is then multiplied by its attention score and the results are summed. For example, since "我" and "me" are highly relevant, the attention score is high, which tells the downstream task that "我" matches "me" and it should output "me".
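To make steps 1 and 2 concrete, here is a minimal PyTorch sketch of scaled dot-product attention; the tensor shapes and random inputs are illustrative assumptions, not part of the original experiment:

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q: (n_queries, d), k: (n_keys, d), v: (n_keys, d_v)
    d = q.shape[-1]
    scores = q @ k.T / d ** 0.5          # step 1a: Query-Key relevance, scaled by sqrt(d)
    weights = F.softmax(scores, dim=-1)  # step 1b: attention scores in [0, 1], each row sums to 1
    return weights @ v                   # step 2: weighted sum over the Values

q = torch.randn(1, 8)   # one query
k = torch.randn(5, 8)   # five keys
v = torch.randn(5, 16)  # five values, paired one-to-one with the keys
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 16])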

1.3 SENet

Channel attention, as the name suggests, computes an importance score for each channel; SENet is a classic network built on this idea. By learning the relationships between channels (the importance of each one), SENet improves the network's representational power and thus the model's performance.

(figure)

The three core operations of SENet


1. Squeeze

Goal: compress the information along the spatial dimensions into global features, extracting per-channel global statistics.
Procedure:

  • Global average pooling: apply global average pooling to the input feature map $x$, compressing each channel's spatial information (height and width) into a single scalar.
  • Output shape: for example, an input feature map of shape (1, 32, 32, 10) becomes (1, 1, 1, 10) after squeezing (channels-last notation; PyTorch uses channels-first).
  • Effect: preserves each channel's overall spatial statistics, forming a global feature vector $z$ used later to analyze inter-channel dependencies.
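A quick sanity check of the squeeze step in PyTorch, which is channels-first, so the equivalent shapes are (1, 10, 32, 32) → (1, 10, 1, 1):

import torch
import torch.nn.functional as F

x = torch.randn(1, 10, 32, 32)   # NCHW: 10 channels of 32x32
z = F.adaptive_avg_pool2d(x, 1)  # global average pooling per channel
print(z.shape)                   # torch.Size([1, 10, 1, 1])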

2. Excitation

Goal: learn the dependencies between channels and generate a weight for each channel that quantifies its importance.
Procedure:

  • Fully connected structure:
    1. Dimensionality reduction: the first fully connected layer, followed by ReLU, reduces the dimensionality of the global feature vector $z$.
    2. Dimensionality restoration: the second fully connected layer restores the original channel count (e.g., from 5 back to 10 dimensions) and applies a Sigmoid, producing channel weights $s$ with values in [0, 1]; this also shows that the input and output channel counts are the same.
  • Mathematically:
    $s = F_{ex}(z) = \sigma\left( W_2 \cdot \text{ReLU}\left( W_1 \cdot z \right) \right)$
    where $W_1$ and $W_2$ are learnable weight matrices and $\sigma$ is the Sigmoid function.
  • Effect: the weights $s$ reflect each channel's contribution to the task at hand (important channels get weights close to 1, minor ones close to 0).
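A minimal sketch of the excitation step using fully connected layers, matching the formula above; note the SE module implemented later in this post uses equivalent 1×1 convolutions instead, and the channel count and ratio here are made-up examples:

import torch
import torch.nn as nn

channels, ratio = 10, 2
excite = nn.Sequential(
    nn.Linear(channels, channels // ratio),  # W1: reduce dimensionality
    nn.ReLU(),
    nn.Linear(channels // ratio, channels),  # W2: restore the channel count
    nn.Sigmoid(),                            # weights in [0, 1]
)

z = torch.randn(1, channels)  # squeezed global feature vector
s = excite(z)                 # per-channel importance weights
print(s.shape)                # torch.Size([1, 10])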

3. Scale

Goal: recalibrate the original features with the channel weights, amplifying important channels and suppressing minor ones, so the network attends to the key features; this is exactly the volitional cue discussed above.
Procedure:

  • Channel-wise weighting: multiply the weights $s$ produced by the excitation step with the original feature map $x$, channel by channel.
  • Formula: $y = x \odot s$, where $\odot$ denotes element-wise multiplication and $y$ is the recalibrated feature map.
  • Effect: features in important channels are amplified and those in minor channels are suppressed, making the model more sensitive to key features.
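In PyTorch the channel-wise product is just broadcasting: a (N, C, 1, 1) weight tensor scales a (N, C, H, W) feature map:

import torch

x = torch.randn(1, 10, 32, 32)  # original feature map
s = torch.rand(1, 10, 1, 1)     # channel weights from the excitation step
y = x * s                       # broadcast: each channel is scaled by its weight
print(y.shape)                  # torch.Size([1, 10, 32, 32])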

How the three steps work together

  1. Squeeze extracts global features, providing the basis for learning inter-channel relationships.
  2. Excitation learns the dependencies between channels through fully connected layers, generating dynamic weights.
  3. Scale feeds the weights back into the original features, performing channel-wise feature recalibration.

Network structure

SENet is very flexible: simply put, an SE block can be attached after any convolutional layer, though adding too many may also cause overfitting 😃.

The figure below shows SE blocks grafted onto Inception and Residual modules:

(figure)

Code reproduction:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SE(nn.Module):
    def __init__(self, in_channels, ratio):
        super().__init__()
        # Squeeze: global average pooling
        self.squeeze = nn.AdaptiveAvgPool2d((1,1))
        # Excitation: generate per-channel weights
        self.compress = nn.Conv2d(in_channels, in_channels // ratio, 1, 1, 0)  # reduce channels
        self.excitation = nn.Conv2d(in_channels // ratio, in_channels, 1, 1, 0)  # restore channels
        
    def forward(self, x):
        t = x
        x = self.squeeze(x)
        x = self.compress(x)
        x = F.relu(x)
        x = self.excitation(x)
        return t * torch.sigmoid(x)  # Scale: channel-wise recalibration (F.sigmoid is deprecated)

Next comes the SE + ResNet experiment; the data are monkeypox images.

2. SE + ResNet Experiment

1. Loading the Data

1. Importing libraries

import torch  
import torch.nn as nn
import torchvision 
import numpy as np 
import os, PIL, pathlib 

# select device
device = "cuda" if torch.cuda.is_available() else "cpu"

device 
'cuda'

2. Inspecting and loading the data

The data directory contains two class folders: Monkeypox and Others.

data_dir = "./data/"

data_dir = pathlib.Path(data_dir)

# class names (one folder per class)
classnames = [str(path).split("\\")[0] for path in os.listdir(data_dir)]

classnames
['Monkeypox', 'Others']

3. Visualizing samples

import matplotlib.pyplot as plt  
from PIL import Image 

# collect image file names
data_path_name = "./data/Monkeypox/"
data_path_list = [f for f in os.listdir(data_path_name) if f.endswith(('jpg', 'png'))]

# create the figure
fig, axes = plt.subplots(2, 8, figsize=(16, 6))

for ax, img_file in zip(axes.flat, data_path_list):
    path_name = os.path.join(data_path_name, img_file)
    img = Image.open(path_name)  # open the image
    # display it
    ax.imshow(img)
    ax.axis('off')
    
plt.show()


(figure)

4. Loading images

from torchvision import transforms, datasets 

# unify image size
img_height = 224
img_width = 224 

data_transforms = transforms.Compose([
    transforms.Resize([img_height, img_width]),
    transforms.ToTensor(),
    transforms.Normalize(   # normalize with ImageNet statistics
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225] 
    )
])

# load the full dataset
total_data = datasets.ImageFolder(root="./data/", transform=data_transforms)
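ImageFolder assigns integer labels from the alphabetically sorted folder names; the mapping can be checked directly:

print(total_data.class_to_idx)  # {'Monkeypox': 0, 'Others': 1}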

5. Splitting the data

# 80/20 train/test split
train_size = int(len(total_data) * 0.8)
test_size = len(total_data) - train_size 

train_data, test_data = torch.utils.data.random_split(total_data, [train_size, test_size])
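As a side note, random_split accepts an optional generator; seeding it makes the split (and thus the comparison between the two models below) reproducible. This is an optional tweak, not part of the original run:

g = torch.Generator().manual_seed(42)  # fixed seed for a reproducible split
train_data, test_data = torch.utils.data.random_split(total_data, [train_size, test_size], generator=g)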

6. Building data loaders

batch_size = 32 

train_dl = torch.utils.data.DataLoader(
    train_data,
    batch_size=batch_size,
    shuffle=True
)

test_dl = torch.utils.data.DataLoader(
    test_data,
    batch_size=batch_size,
    shuffle=False
)
# inspect one batch
for data, labels in train_dl:
    print("data shape[N, C, H, W]: ", data.shape)
    print("labels: ", labels)
    break
data shape[N, C, H, W]:  torch.Size([32, 3, 224, 224])
labels:  tensor([0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1,
        0, 1, 1, 1, 1, 0, 1, 0])

2. Building the ResNet-50V2 Network

1. The SE module

import torch.nn.functional as F

class SE(nn.Module):
    def __init__(self, in_channels, ratio):
        super().__init__()
        # Squeeze: global average pooling
        self.squeeze = nn.AdaptiveAvgPool2d((1,1))
        # Excitation: generate per-channel weights
        self.compress = nn.Conv2d(in_channels, in_channels // ratio, 1, 1, 0)  # reduce channels
        self.excitation = nn.Conv2d(in_channels // ratio, in_channels, 1, 1, 0)  # restore channels
        
    def forward(self, x):
        t = x
        x = self.squeeze(x)
        x = self.compress(x)
        x = F.relu(x)
        x = self.excitation(x)
        return t * torch.sigmoid(x)  # Scale (F.sigmoid is deprecated)
# quick test
se = SE(in_channels=64, ratio=16)
output = se(torch.randn(32, 64, 32, 32))
print(output.shape)  
torch.Size([32, 64, 32, 32])

(figure)

2. ResNetV2

'''  
conv_shortcut: which kind of shortcut connection to use, corresponding to block types 1 and 3 in the figure above
filters: number of output channels
kernel size: defaults to 3
'''
class Block2(nn.Module):
    def __init__(self, in_channel, filters, kernel_size=3, stride=1, conv_shortcut=False):
        super().__init__()
        
        # preact: the first two layers in the figure, BN + ReLU
        self.preact = nn.Sequential(
            nn.BatchNorm2d(in_channel),
            nn.ReLU(True)
        )
        
        # Choose the shortcut. Of the three block types in the figure, two use a
        # convolutional shortcut (1x1 kernel) and one does not.
        self.shortcut = conv_shortcut
        if self.shortcut:   # block type 1: convolutional shortcut
            self.short = nn.Conv2d(in_channel, 4 * filters, kernel_size=1, stride=stride, padding=0, bias=False)  # padding defaults to 0; the 4 * filters output width follows the reference implementation
        else:
            self.short = nn.MaxPool2d(kernel_size=1, stride=stride, padding=0) if stride > 1 else nn.Identity()  # nn.Identity() passes the input through unchanged
        
        # The main path is the same for all three block types; it is split into three stages
        # stage one (see the reference implementation)
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channel, filters, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(True)
        )
        
        # stage two
        self.conv2 = nn.Sequential(
            nn.Conv2d(filters, filters, kernel_size=kernel_size, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(True)
        )
        
        # stage three
        self.conv3 = nn.Conv2d(filters, 4 * filters, kernel_size=1, stride=1)
        
    def forward(self, x):
        x1 = self.preact(x)
        if self.shortcut:  # block type 1: the shortcut takes the pre-activated x1
            x2 = self.short(x1)
        else:
            x2 = self.short(x)  # block type 3 in the figure: the shortcut takes the raw input x
            
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x1 = self.conv3(x1)
        
        x = x1 + x2  # merge the two paths
        return x
    
# stack of residual blocks
class Stack2(nn.Module):
    def __init__(self, in_channel, filters, blocks, stride=2):  # blocks: number of residual blocks in the stack (leftmost column of the figure)
        super().__init__()
        self.conv = nn.Sequential()
        # the residual blocks within a stack are very similar
        self.conv.add_module(str(0), Block2(in_channel, filters, conv_shortcut=True))   # name + module
        # middle blocks; the first and last are handled separately, leaving blocks - 2 here
        for i in range(1, blocks - 1):
            self.conv.add_module(str(i), Block2(4 * filters, filters))  # previous output: 4 * filters; note: add to self.conv, otherwise the block is skipped in forward
        self.conv.add_module(str(blocks - 1), Block2(4 * filters, filters, stride=stride))  # the last block uses a different stride
        
    def forward(self, x):
        x = self.conv(x)
        
        return x
    
    
class ResNet50V2(nn.Module):
    def __init__(self,
                 include_top=True, # whether to include the classification head
                 preact=True,  # whether to use pre-activation
                 use_bias=True,  # whether the stem convolution uses a bias
                 input_shape=[224, 224, 3],
                 classes=1000,  # number of classes
                 pooling=None
                 ):
        super().__init__()
        
        # the stem at the top of the network (leftmost column of the figure)
        self.conv1 = nn.Sequential() 
        self.conv1.add_module('conv', nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=use_bias))  
        # BN and activation here are optional
        if not preact:
            self.conv1.add_module('bn', nn.BatchNorm2d(64))
            self.conv1.add_module('relu', nn.ReLU())
        self.conv1.add_module('max_pool', nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
        
        # the four stages in the middle of the network
        self.conv2 = Stack2(64, 64, 3)
        self.conv3 = Stack2(256, 128, 4)
        self.conv4 = Stack2(512, 256, 6)
        self.conv5 = Stack2(1024, 512, 3, stride=1)  # the channel progression across stages is interesting
        
        self.last = nn.Sequential()
        if preact:
            self.last.add_module('bn', nn.BatchNorm2d(2048))
            self.last.add_module('relu', nn.ReLU(True))
        if include_top:
            self.last.add_module('avg_pool', nn.AdaptiveAvgPool2d((1, 1)))
            self.last.add_module('flatten', nn.Flatten())
            self.last.add_module('fc', nn.Linear(2048, classes))
        else:
            if pooling=='avg':
                self.last.add_module('avg_pool', nn.AdaptiveAvgPool2d((1, 1)))
            elif pooling=='max':
                self.last.add_module('max_pool', nn.AdaptiveMaxPool2d((1, 1)))
        
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.last(x)
        return x
    
model1 = ResNet50V2(classes=len(classnames)).to(device)

model1
ResNet50V2(
  (conv1): Sequential(
    (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
    (max_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (conv2): Stack2(
    (conv): Sequential(
      (0): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (conv1): Sequential(
          (0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
      )
      (2): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
        (conv1): Sequential(
          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (1): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (conv3): Stack2(
    (conv): Sequential(
      (0): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (conv1): Sequential(
          (0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
        (conv1): Sequential(
          (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (1): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
    )
    (2): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (conv4): Stack2(
    (conv): Sequential(
      (0): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (conv1): Sequential(
          (0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
      )
      (5): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
        (conv1): Sequential(
          (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (1): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    )
    (2): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    )
    (3): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    )
    (4): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (conv5): Stack2(
    (conv): Sequential(
      (0): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (conv1): Sequential(
          (0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
      )
      (2): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Identity()
        (conv1): Sequential(
          (0): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (1): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (last): Sequential(
    (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (avg_pool): AdaptiveAvgPool2d(output_size=(1, 1))
    (flatten): Flatten(start_dim=1, end_dim=-1)
    (fc): Linear(in_features=2048, out_features=2, bias=True)
  )
)

3. SE + ResNetV2

Here only a single SE block is added, between conv4 and conv5, before the features are pooled and flattened for the fully connected layer; see the code. A proper study of where best to insert SE would require consulting the relevant papers and running further experiments.

'''  
conv_shortcut: which kind of shortcut connection to use, corresponding to block types 1 and 3 in the figure above
filters: number of output channels
kernel size: defaults to 3
'''
class Block2(nn.Module):
    def __init__(self, in_channel, filters, kernel_size=3, stride=1, conv_shortcut=False):
        super().__init__()
        
        # preact: the first two layers in the figure, BN + ReLU
        self.preact = nn.Sequential(
            nn.BatchNorm2d(in_channel),
            nn.ReLU(True)
        )
        
        # Choose the shortcut. Of the three block types in the figure, two use a
        # convolutional shortcut (1x1 kernel) and one does not.
        self.shortcut = conv_shortcut
        if self.shortcut:   # block type 1: convolutional shortcut
            self.short = nn.Conv2d(in_channel, 4 * filters, kernel_size=1, stride=stride, padding=0, bias=False)  # padding defaults to 0; the 4 * filters output width follows the reference implementation
        else:
            self.short = nn.MaxPool2d(kernel_size=1, stride=stride, padding=0) if stride > 1 else nn.Identity()  # nn.Identity() passes the input through unchanged
        
        # The main path is the same for all three block types; it is split into three stages
        # stage one (see the reference implementation)
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channel, filters, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(True)
        )
        
        # stage two
        self.conv2 = nn.Sequential(
            nn.Conv2d(filters, filters, kernel_size=kernel_size, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(True)
        )
        
        # stage three
        self.conv3 = nn.Conv2d(filters, 4 * filters, kernel_size=1, stride=1)
        
    def forward(self, x):
        x1 = self.preact(x)
        if self.shortcut:  # block type 1: the shortcut takes the pre-activated x1
            x2 = self.short(x1)
        else:
            x2 = self.short(x)  # block type 3 in the figure: the shortcut takes the raw input x
            
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x1 = self.conv3(x1)
        
        x = x1 + x2  # merge the two paths
        return x
    
# stack of residual blocks
class Stack2(nn.Module):
    def __init__(self, in_channel, filters, blocks, stride=2):  # blocks: number of residual blocks in the stack (leftmost column of the figure)
        super().__init__()
        self.conv = nn.Sequential()
        # the residual blocks within a stack are very similar
        self.conv.add_module(str(0), Block2(in_channel, filters, conv_shortcut=True))   # name + module
        # middle blocks; the first and last are handled separately, leaving blocks - 2 here
        for i in range(1, blocks - 1):
            self.conv.add_module(str(i), Block2(4 * filters, filters))  # previous output: 4 * filters; note: add to self.conv, otherwise the block is skipped in forward
        self.conv.add_module(str(blocks - 1), Block2(4 * filters, filters, stride=stride))  # the last block uses a different stride
        
    def forward(self, x):
        x = self.conv(x)
        
        return x
    
    
class ResNet50V2(nn.Module):
    def __init__(self,
                 include_top=True, # whether to include the classification head
                 preact=True,  # whether to use pre-activation
                 use_bias=True,  # whether the stem convolution uses a bias
                 input_shape=[224, 224, 3],
                 classes=1000,  # number of classes
                 pooling=None
                 ):
        super().__init__()
        
        # the stem at the top of the network
        self.conv1 = nn.Sequential() 
        self.conv1.add_module('conv', nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=use_bias))  
        # BN and activation here are optional
        if not preact:
            self.conv1.add_module('bn', nn.BatchNorm2d(64))
            self.conv1.add_module('relu', nn.ReLU())
        self.conv1.add_module('max_pool', nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
        
        # the four stages in the middle of the network
        self.conv2 = Stack2(64, 64, 3)
        self.conv3 = Stack2(256, 128, 4)
        self.conv4 = Stack2(512, 256, 6)
        
        # ------------------ add the SE attention module
        self.se = SE(1024, 16)
        
        self.conv5 = Stack2(1024, 512, 3, stride=1)
        
        
        self.last = nn.Sequential()
        if preact:
            self.last.add_module('bn', nn.BatchNorm2d(2048))
            self.last.add_module('relu', nn.ReLU(True))
        if include_top:
            self.last.add_module('avg_pool', nn.AdaptiveAvgPool2d((1, 1)))
            self.last.add_module('flatten', nn.Flatten())
            self.last.add_module('fc', nn.Linear(2048, classes))
        else:
            if pooling=='avg':
                self.last.add_module('avg_pool', nn.AdaptiveAvgPool2d((1, 1)))
            elif pooling=='max':
                self.last.add_module('max_pool', nn.AdaptiveMaxPool2d((1, 1)))
        
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        
        # apply channel attention
        x = self.se(x)
        
        x = self.conv5(x)
        x = self.last(x)
        return x
    
model2 = ResNet50V2(classes=len(classnames)).to(device)

model2
ResNet50V2(
  (conv1): Sequential(
    (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
    (max_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (conv2): Stack2(
    (conv): Sequential(
      (0): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (conv1): Sequential(
          (0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
      )
      (2): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
        (conv1): Sequential(
          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (1): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (conv3): Stack2(
    (conv): Sequential(
      (0): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (conv1): Sequential(
          (0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
        (conv1): Sequential(
          (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (1): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
    )
    (2): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (conv4): Stack2(
    (conv): Sequential(
      (0): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (conv1): Sequential(
          (0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
      )
      (5): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
        (conv1): Sequential(
          (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (1): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    )
    (2): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    )
    (3): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    )
    (4): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (se): SE(
    (squeeze): AdaptiveAvgPool2d(output_size=(1, 1))
    (compress): Conv2d(1024, 64, kernel_size=(1, 1), stride=(1, 1))
    (excitation): Conv2d(64, 1024, kernel_size=(1, 1), stride=(1, 1))
  )
  (conv5): Stack2(
    (conv): Sequential(
      (0): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (conv1): Sequential(
          (0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
      )
      (2): Block2(
        (preact): Sequential(
          (0): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): ReLU(inplace=True)
        )
        (short): Identity()
        (conv1): Sequential(
          (0): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv2): Sequential(
          (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (1): Block2(
      (preact): Sequential(
        (0): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
      )
      (short): Identity()
      (conv1): Sequential(
        (0): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (last): Sequential(
    (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (avg_pool): AdaptiveAvgPool2d(output_size=(1, 1))
    (flatten): Flatten(start_dim=1, end_dim=-1)
    (fc): Linear(in_features=2048, out_features=2, bias=True)
  )
)

model2(torch.randn(32, 3, 224, 224).to(device)).shape

torch.Size([32, 2])
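Since the two models differ only by the SE block, its overhead can be estimated by comparing parameter counts; a quick sanity check, not from the original post:

def n_params(model):
    return sum(p.numel() for p in model.parameters())

print(n_params(model2) - n_params(model1))  # ~132k extra parameters from SE(1024, 16)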

3. Training Setup

1. Training loop

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)  # number of batches, not the batch size
    
    train_acc, train_loss = 0, 0 
    
    for X, y in dataloader:
        X, y = X.to(device), y.to(device)
        
        # forward pass
        pred = model(X)
        loss = loss_fn(pred, y)
        
        # backpropagation and parameter update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
        # accumulate metrics
        train_loss += loss.item()
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        
    train_acc /= size
    train_loss /= num_batches
    
    return train_acc, train_loss

2. Test loop

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)  # number of batches, not the batch size
    
    test_acc, test_loss = 0, 0 
    
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
        
            pred = model(X)
            loss = loss_fn(pred, y)
        
            test_loss += loss.item()
            test_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        
    test_acc /= size
    test_loss /= num_batches
    
    return test_acc, test_loss

3. Hyperparameters

loss_fn = nn.CrossEntropyLoss()  # loss function     
learn_lr = 1e-4             # learning rate
optimizer1 = torch.optim.Adam(model1.parameters(), lr=learn_lr)   # optimizer for the baseline
optimizer2 = torch.optim.Adam(model2.parameters(), lr=learn_lr)   # optimizer for the SE variant

4. Model Training

1. ResNet50V2

train_acc1 = []
train_loss1 = []
test_acc1 = []
test_loss1 = []

epochs = 10

for i in range(epochs):
    model1.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model1, loss_fn, optimizer1)
    
    model1.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model1, loss_fn)
    
    train_acc1.append(epoch_train_acc)
    train_loss1.append(epoch_train_loss)
    test_acc1.append(epoch_test_acc)
    test_loss1.append(epoch_test_loss)
    
    # log progress
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}')
    print(template.format(i + 1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))
    
print("Done")

Epoch: 1, Train_acc:65.7%, Train_loss:0.634, Test_acc:67.4%, Test_loss:0.631
Epoch: 2, Train_acc:71.6%, Train_loss:0.546, Test_acc:73.4%, Test_loss:0.552
Epoch: 3, Train_acc:76.2%, Train_loss:0.480, Test_acc:72.7%, Test_loss:0.589
Epoch: 4, Train_acc:81.0%, Train_loss:0.434, Test_acc:76.9%, Test_loss:0.499
Epoch: 5, Train_acc:84.7%, Train_loss:0.342, Test_acc:82.1%, Test_loss:0.550
Epoch: 6, Train_acc:87.6%, Train_loss:0.306, Test_acc:80.2%, Test_loss:0.430
Epoch: 7, Train_acc:89.7%, Train_loss:0.232, Test_acc:83.0%, Test_loss:0.425
Epoch: 8, Train_acc:92.0%, Train_loss:0.189, Test_acc:81.1%, Test_loss:0.489
Epoch: 9, Train_acc:93.2%, Train_loss:0.181, Test_acc:80.7%, Test_loss:0.494
Epoch:10, Train_acc:95.6%, Train_loss:0.124, Test_acc:83.4%, Test_loss:0.550
Done

2. SE + ResNet50V2

train_acc2 = []
train_loss2 = []
test_acc2 = []
test_loss2 = []

epochs = 10

for i in range(epochs):
    model2.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model2, loss_fn, optimizer2)
    
    model2.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model2, loss_fn)
    
    train_acc2.append(epoch_train_acc)
    train_loss2.append(epoch_train_loss)
    test_acc2.append(epoch_test_acc)
    test_loss2.append(epoch_test_loss)
    
    # log progress
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}')
    print(template.format(i + 1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))
    
print("Done")

Epoch: 1, Train_acc:65.1%, Train_loss:0.641, Test_acc:66.2%, Test_loss:0.677
Epoch: 2, Train_acc:71.3%, Train_loss:0.563, Test_acc:73.2%, Test_loss:0.558
Epoch: 3, Train_acc:76.2%, Train_loss:0.469, Test_acc:76.5%, Test_loss:0.532
Epoch: 4, Train_acc:80.7%, Train_loss:0.418, Test_acc:76.7%, Test_loss:0.634
Epoch: 5, Train_acc:84.1%, Train_loss:0.355, Test_acc:83.0%, Test_loss:0.444
Epoch: 6, Train_acc:87.7%, Train_loss:0.299, Test_acc:78.6%, Test_loss:0.614
Epoch: 7, Train_acc:92.6%, Train_loss:0.195, Test_acc:81.6%, Test_loss:0.578
Epoch: 8, Train_acc:92.9%, Train_loss:0.174, Test_acc:81.1%, Test_loss:0.610
Epoch: 9, Train_acc:93.5%, Train_loss:0.156, Test_acc:83.2%, Test_loss:0.468
Epoch:10, Train_acc:95.7%, Train_loss:0.121, Test_acc:84.1%, Test_loss:0.509
Done

5. Visualizing Results

import matplotlib.pyplot as plt
# suppress warnings
import warnings
warnings.filterwarnings("ignore")

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, train_acc1, label='Training Accuracy')
plt.plot(epochs_range, test_acc1, label='Test Accuracy')
plt.plot(epochs_range, train_acc2, label='Training_SE Accuracy')
plt.plot(epochs_range, test_acc2, label='Test_SE Accuracy')
plt.legend(loc='lower right')
plt.title('Training Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss1, label='Training Loss')
plt.plot(epochs_range, test_loss1, label='Test Loss')
plt.plot(epochs_range, train_loss2, label='Training_SE Loss')
plt.plot(epochs_range, test_loss2, label='Test_SE Loss')
plt.legend(loc='upper right')
plt.title('Training Loss')
plt.show()


(figure)

Judging by the accuracy curves, the model with the attention mechanism does slightly better. For a more conclusive comparison, the experiment should be repeated several more times and the results averaged.
