
A Survey and Resource Roundup of Modular Plug-and-Play Architectures for AI Research

Against the backdrop of increasingly competitive deep learning research, efficiently improving the novelty of papers and the speed of experimental iteration has become a central concern for researchers. In recent years, "plug-and-play modules" have become common performance-boosting components in papers, prized for their seamless integration and quick adoption. Such modules typically feature clean modular design and standardized interfaces, adapt flexibly to a wide range of network architectures and task types, and significantly lower the technical barrier and development cost of model improvement. This article systematically surveys the mainstream plug-and-play modules in current use, including the latest 2025 results from top conferences and journals, across computer vision (CV), image processing, and other AI tasks. Every module comes with reproducible code and is organized by function, so you can drop one straight into a project for rapid validation and performance gains.

Plug-and-play modules: https://github.com/ai-dawang/PlugNPlay-Modules?tab=readme-ov-file
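As a concrete illustration of what "plug-and-play" means in practice, here is a minimal PyTorch sketch of the classic squeeze-and-excitation (SE) channel attention being dropped into an existing convolutional stack. The class name, reduction ratio, and surrounding layers are illustrative, not taken from any one paper's code.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (Hu et al., 2017), minimal sketch."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        # squeeze: global average pool -> excite: per-channel weights in (0, 1)
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * w  # reweight channels, shape is unchanged

# "plugging in": append the module to an existing conv stage
conv = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), SEBlock(32))
y = conv(torch.rand(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])
```

Because the block maps (B, C, H, W) to (B, C, H, W), it can be inserted after any convolutional stage without touching the surrounding architecture, which is exactly the property the modules collected below share.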

1. Attention Mechanisms

(1) MCA: Multidimensional Collaborative Attention in Deep Convolutional Neural Networks for Image Recognition——multidimensional collaborative attention

Paper: https://www.sciencedirect.com/science/article/abs/pii/S0952197623012630

(2) MCANet: Medical Image Segmentation with Multi-Scale Cross-Axis Attention——applicable to essentially every CV domain

Paper: https://arxiv.org/pdf/2312.08866v1
Code: https://github.com/haoshao-nku/medical_seg

(3) Recursive Generalization Transformer for Image Super-Resolution——RG-SA (recursive generalized self-attention), for 2D CV tasks

Paper: https://arxiv.org/abs/2303.06373
Code: https://github.com/zhengchen1999/RGT

(4) Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models——from diffusion-based generation

Paper: https://arxiv.org/abs/2306.09869
Code: https://github.com/EnergyAttention/Energy-Based-CrossAttention

(5) Fast Vision Transformers with HiLo Attention——plug-and-play attention that separates high- and low-frequency image content, for 2D CV tasks

Paper: https://arxiv.org/abs/2205.13213
Code: https://github.com/ziplab/LITv2

(6) HCF-Net: Hierarchical Context Fusion Network for Infrared Small Object Detection——PPA (parallelized patch-aware attention), general for 2D CV tasks

Paper: https://arxiv.org/abs/2403.10778
Code: https://github.com/zhengshuchen/HCFNet

(7) AGCA: An Adaptive Graph Channel Attention Module for Steel Surface Defect Detection——usable with 2D images and with graph convolutional networks

Paper: https://ieeexplore.ieee.org/document/10050536
Code: https://github.com/C1nDeRainBo0M/AGCA

(8) Relation-Aware Global Attention for Person Re-identification——RGA (relation-aware global attention), from person re-identification

Paper: https://arxiv.org/abs/1904.02998
Code: https://github.com/microsoft/Relation-Aware-Global-Attention-Networks

(9) Edge-Enhanced GCIFFNet: A Multiclass Semantic Segmentation Network Based on Edge Enhancement and Multiscale Attention Mechanism——EGA (edge-guided attention), from edge detection; plug-and-play for 2D CV tasks

Paper: https://ieeexplore.ieee.org/document/10412635

(10) Agent Attention: On the Integration of Softmax and Linear Attention——Agent Attention (a new attention paradigm), ECCV 2024; general for 2D CV tasks

Paper: https://arxiv.org/abs/2312.08874
Code: https://github.com/LeapLabTHU/Agent-Attention

(11) Squeeze-and-Excitation Networks——3D variant of SENet plus the PE module; plug-and-play 3D attention

Paper: https://arxiv.org/abs/1709.01507
Code: https://github.com/miraclewkf/SENet-PyTorch

(12) ULSAM: Ultra-Lightweight Subspace Attention Module for Compact Convolutional Neural Networks——ULSAM (WACV 2020), a plug-and-play spatial attention module

Paper: https://arxiv.org/abs/2006.15102
Code: https://github.com/Nandan91/ULSAM

(13) Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks——scSE: extends channel-only SENet with a concurrent spatial branch; plug-and-play

Paper: https://arxiv.org/abs/1803.02579

(14) DICAM promotes TH17 lymphocyte trafficking across the blood-brain barrier during autoimmune neuroinflammation——DICAM: an attention module suited to underwater image enhancement

Paper: https://pubmed.ncbi.nlm.nih.gov/34985970/

(15) AAU-net: An Adaptive Attention U-net for Breast Lesions Segmentation in Ultrasound Images——HAAM: hybrid adaptive attention module, for image segmentation tasks

Paper: https://arxiv.org/abs/2204.12077
Code: https://github.com/CGPxy/AAU-net

(16) Half Wavelet Attention on M-Net+ for Low-Light Image Enhancement——for low-light image enhancement

Paper: https://arxiv.org/abs/2203.01296
Code: https://github.com/FanChiMao/HWMNet

(17) Unsupervised Bidirectional Contrastive Reconstruction and Adaptive Fine-Grained Channel Attention Networks for Image Dehazing——an improved SE channel attention

Paper: https://www.sciencedirect.com/science/article/abs/pii/S0893608024002387
Code: https://github.com/Lose-Code/UBRFC-Net

(18) SCSA: Exploring the Synergistic Effects Between Spatial and Channel Attention

Paper: https://arxiv.org/abs/2407.05128
Code: https://github.com/HZAI-ZJNU/SCSA

(19) Perspective+ Unet: Enhancing Segmentation with Bi-Path Fusion and Efficient Non-Local Attention for Superior Receptive Fields——MICCAI 2024 | ENLTB, an efficient non-local attention block

Paper: https://arxiv.org/abs/2406.14052
Code: https://github.com/tljxyys/Perspective-Unet

(20) Demystify Mamba in Vision: A Linear Attention Perspective——linear attention module that inherits Mamba's strengths; general for computer vision tasks

Paper: https://arxiv.org/abs/2405.16605
Code: https://github.com/LeapLabTHU/MLLA

(21) LDConv: Linear Deformable Convolution for Improving Convolutional Neural Networks——LDConv, linear deformable convolution; general for vision tasks

Paper: https://arxiv.org/abs/2311.11587
Code: https://github.com/CV-ZhangXin/LDConv

(22) SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications

Paper: https://arxiv.org/abs/2303.15446
Code: https://github.com/Amshaker/SwiftFormer

(23) A Dual Encoder Crack Segmentation Network with Haar-Wavelet-Based High–Low Frequency Attention——deep-learning crack detection | Haar-wavelet-based attention

Paper: https://www.sciencedirect.com/science/article/abs/pii/S0957417424018177

(24) LGAG-Net: Lesion-Guided Adaptive Graph Network for Bone Abnormality Detection From Musculoskeletal Radiographs——LGAG, a large-kernel grouped attention gate; plug-and-play for medical image segmentation

Paper: https://ieeexplore.ieee.org/abstract/document/10371282

(25) CSAM: A 2.5D Cross-Slice Attention Module for Anisotropic Volumetric Medical Image Segmentation——medical image segmentation | WACV 2024 | cross-slice attention

Paper: https://arxiv.org/abs/2311.04942
Code: https://github.com/aL3x-O-o-Hung/CSAM

(26) TransNeXt: Robust Foveal Visual Perception for Vision Transformers——CVPR 2024 | CGLU, a convolutional gated unit; plug-and-play attention module usable in both CV and NLP models

Paper: https://arxiv.org/abs/2311.17132
Code: https://github.com/DaiShiResearch/TransNeXt

(27) Multi-scale Attention Network for Single Image Super-Resolution——CVPR 2024 | MLKA, multi-scale large-kernel attention; for all 2D CV tasks

Paper: https://arxiv.org/abs/2209.14145
Code: https://github.com/icandle/MAN

(28) Vision Transformer with Deformable Attention——deformable attention module (DAT) for vision transformers

Paper: https://arxiv.org/abs/2201.00520
Code: https://github.com/LeapLabTHU/DAT

(29) FECAM: Frequency Enhanced Channel Attention Mechanism for Time Series Forecasting——frequency-enhanced channel attention (dct_channel_block) for time-series forecasting

Paper: https://arxiv.org/abs/2212.01209
Code: https://github.com/Zero-coder/FECAM

(30) DSANet: Dual Self-Attention Network for Multivariate Time Series Forecasting——dual self-attention, plug-and-play for multivariate time-series forecasting

Paper: https://dl.acm.org/doi/10.1145/3357384.3358132
Code: https://github.com/bighuang624/DSANet

(31) Interpretable Local Flow Attention for Multi-Step Traffic Flow Prediction——local flow attention, a plug-and-play module for traffic-flow prediction

Paper: https://www.sciencedirect.com/science/article/abs/pii/S0893608023000230?via%3Dihub
Code: https://github.com/hub5/LFAConvLSTM

(32) DCT-Former: Efficient Self-Attention with Discrete Cosine Transform——plug-and-play efficient attention for NLP and time-series forecasting

Paper: https://arxiv.org/abs/2203.01178
Code: https://github.com/cscribano/DCT-Former-Public

(33) MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network——WACV 2024 | AGF attention, plug-and-play for 3D human pose estimation and keypoint detection

Paper: https://arxiv.org/abs/2310.16288
Code: https://github.com/TaatiTeam/MotionAGFormer

(34) Rethinking Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising——AAAI 2025

Paper: https://arxiv.org/abs/2404.07846
Code: https://github.com/nagejacob/TBSN

(35) Unsupervised Bidirectional Contrastive Reconstruction and Adaptive Fine-Grained Channel Attention Networks for Image Dehazing

Paper: https://www.sciencedirect.com/science/article/abs/pii/S0893608024002387?via%3Dihub
Code: https://github.com/Lose-Code/UBRFC-Net

(36) RMT: Retentive Networks Meet Vision Transformers——CVPR 2024

Paper: https://arxiv.org/abs/2309.11523
Code: https://github.com/qhfan/RMT

(37) CATANet: Efficient Content-Aware Token Aggregation for Lightweight Image Super-Resolution——CVPR 2025

Paper: https://arxiv.org/abs/2503.06896
Code: https://github.com/EquationWalker/CATANet/tree/main

(38) FSTA-SNN: Frequency-based Spatial-Temporal Attention Module for Spiking Neural Networks——AAAI 2025

Paper: https://arxiv.org/abs/2501.14744
Code: https://github.com/yukairong/FSTA-SNN

(39) High-Similarity-Pass Attention for Single Image Super-Resolution——TIP 2024

Paper: https://arxiv.org/abs/2305.15768
Code: https://github.com/laoyangui/HSPAN

2. Normalization

(1) BCN: Batch Channel Normalization for Image Classification

Paper: https://arxiv.org/abs/2312.00596
Code: https://github.com/AfifaKhaled/Batch-Channel-Normalization
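The idea behind batch-channel normalization, combining batch-wise and channel-wise statistics, can be sketched as a learnable blend of two standard normalizers. This is only an approximation of the concept, not the paper's exact formulation; the class name, the single scalar gate, and the use of `GroupNorm(1, C)` as the channel-wise half are my assumptions.

```python
import torch
import torch.nn as nn

class BatchChannelNorm(nn.Module):
    # Learnable blend of batch-wise and channel-wise normalization
    # (a sketch of the idea; not the paper's exact formulation).
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=False)    # batch statistics
        self.ln = nn.GroupNorm(1, channels, affine=False)   # per-sample statistics over C,H,W
        self.gate = nn.Parameter(torch.tensor(0.5))         # learns how to mix the two
        self.weight = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        g = torch.sigmoid(self.gate)
        return (g * self.bn(x) + (1 - g) * self.ln(x)) * self.weight + self.bias

y = BatchChannelNorm(8)(torch.rand(4, 8, 16, 16))
print(y.shape)  # torch.Size([4, 8, 16, 16])
```

Like any normalization layer, it is shape-preserving, so it can replace a `BatchNorm2d` in an existing network one-for-one.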

(2) Lipschitz Normalization for Self-Attention Layers with Application to Graph Neural Networks——LipschitzNorm, a plug-and-play module for GAT and Graph Transformers

Paper: https://arxiv.org/abs/2103.04886
Code: https://github.com/gdasoulas/LipschitzNorm

(3) SelfNorm and CrossNorm for Out-of-Distribution Robustness——CrossNorm and SelfNorm, two complementary normalization schemes

Paper: https://arxiv.org/abs/2102.02811v1

(4) CrossNorm and SelfNorm for Generalization under Distribution Shifts

Paper: https://arxiv.org/abs/2102.02811
Code: https://github.com/amazon-science/crossnorm-selfnorm

(5) ContraNorm: A Contrastive Learning Perspective on Oversmoothing and Beyond——ContraNorm (contrastive normalization layer); integrates easily into GNNs and Transformers

Paper: https://arxiv.org/abs/2303.06562
Code: https://github.com/PKU-ML/ContraNorm

3. Time Series

(1) FITS: Modeling Time Series with 10k Parameters——FITS (a frequency-domain approach), plug-and-play for time-series tasks

Paper: https://arxiv.org/abs/2307.03756
Code: https://github.com/VEWOXIC/FITS
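FITS works in the frequency domain: the series is transformed with an rFFT, high-frequency bins are discarded, and a small complex-valued linear layer interpolates the retained spectrum. The sketch below shows only the low-pass step that gives the module its tiny footprint; `lowpass_reconstruct` and the cutoff value are illustrative names and numbers, not from the official repo.

```python
import torch

def lowpass_reconstruct(x, keep=16):
    # Keep only the lowest `keep` frequency bins and transform back,
    # a sketch of the frequency-domain core idea behind FITS.
    spec = torch.fft.rfft(x, dim=-1)      # (..., L) -> (..., L//2 + 1) complex bins
    spec[..., keep:] = 0                  # discard high frequencies
    return torch.fft.irfft(spec, n=x.size(-1), dim=-1)

x = torch.sin(torch.linspace(0, 12.56, 96)).unsqueeze(0)  # (1, 96) toy series
y = lowpass_reconstruct(x)
print(y.shape)  # torch.Size([1, 96])
```

In the full method a learned complex linear layer replaces the hard zeroing, so the module can also upsample the spectrum for forecasting.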

(2) TSLANet: Rethinking Transformers for Time Series Representation Learning

Paper: https://arxiv.org/abs/2404.08472
Code: https://github.com/emadeldeen24/TSLANet

(3) MSGNet: Learning Multi-Scale Inter-Series Correlations for Multivariate Time Series Forecasting——time-series forecasting | AAAI 2024

Paper: https://arxiv.org/abs/2401.00423
Code: https://github.com/YoZhibo/MSGNet?tab=readme-ov-file

(4) A Time Series is Worth 64 Words: Long-term Forecasting with Transformers——ICLR 2023 | PatchTST, patch-based time-series forecasting, plug-and-play

Paper: https://arxiv.org/abs/2211.14730
Code: https://github.com/yuqinie98/patchtst
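PatchTST's core move is to split each univariate series into overlapping patches that play the role of tokens for the Transformer. The patching step is essentially one call to `Tensor.unfold`; the sketch below is a minimal illustration with made-up patch sizes, not the repo's implementation.

```python
import torch

def patchify(series, patch_len=16, stride=8):
    # series: (batch, seq_len) -> (batch, num_patches, patch_len)
    # num_patches = (seq_len - patch_len) // stride + 1
    return series.unfold(dimension=1, size=patch_len, step=stride)

x = torch.rand(4, 96)          # 4 series of length 96
patches = patchify(x)
print(patches.shape)           # torch.Size([4, 11, 16])
```

Each patch is then linearly embedded and fed to a standard Transformer encoder, which shrinks the attention sequence length by roughly the stride factor.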

4. General CV Tasks

(1) RFAConv: Innovating Spatial Attention and Standard Convolutional Operation——applicable to classification, object detection, segmentation, and essentially all CV tasks

Paper: https://arxiv.org/abs/2304.03198
Code: https://github.com/Liuchen1997/RFAConv

(2) Salient Positions based Attention Network for Image Classification——SPABlock, a salient-position selection module; plug-and-play for CV, neither a convolution nor a standard attention block

Paper: https://arxiv.org/abs/2106.04996
Code: https://github.com/likyoo/SPANet

(3) SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications——ICCV 2023 | lightweight, efficient encoder | general for vision tasks

Paper: https://arxiv.org/abs/2303.15446
Code: https://github.com/Amshaker/SwiftFormer

5. 2D CV

(1) CoordGate: Efficiently Computing Spatially-Varying Convolutions in Convolutional Neural Networks——CoordGate, a plug-and-play module that is neither a convolution nor an attention block; it dynamically adjusts weights according to position in the input image. General for 2D CV tasks.

Paper: https://arxiv.org/abs/2401.04680

import torch
import torch.nn as nn

# Paper: CoordGate: Efficiently Computing Spatially-Varying Convolutions in Convolutional Neural Networks
# Paper: https://arxiv.org/pdf/2401.04680v1

class CoordGate(nn.Module):
    def __init__(self, enc_channels, out_channels, size: list = [256, 256], enctype='pos', **kwargs):
        super(CoordGate, self).__init__()
        '''
        type can be:
        'pos' - position encoding
        'regularised'
        '''
        self.enctype = enctype
        self.enc_channels = enc_channels

        if enctype == 'pos':
            encoding_layers = kwargs['encoding_layers']
            x_coord, y_coord = torch.linspace(-1, 1, int(size[0])), torch.linspace(-1, 1, int(size[1]))
            self.register_buffer('pos', torch.stack(torch.meshgrid((x_coord, y_coord), indexing='ij'), dim=-1).view(-1, 2))
            self.encoder = nn.Sequential()
            for i in range(encoding_layers):
                if i == 0:
                    self.encoder.add_module('linear' + str(i), nn.Linear(2, enc_channels))
                else:
                    self.encoder.add_module('linear' + str(i), nn.Linear(enc_channels, enc_channels))
        elif (enctype == 'map') or (enctype == 'bilinear'):
            initialiser = kwargs['initialiser']
            if 'downsample' in kwargs.keys():
                self.sample = kwargs['downsample']
            else:
                self.sample = [1, 1]
            self.map = nn.Parameter(initialiser)

        self.conv = nn.Conv2d(enc_channels, out_channels, 1, padding='same')
        self.relu = nn.ReLU()

    def forward(self, x):
        '''
        x is (bs, nc, nx, ny)
        '''
        if self.enctype == 'pos':
            gate = self.encoder(self.pos).view(1, x.shape[2], x.shape[3], x.shape[1]).permute(0, 3, 1, 2)
            gate = torch.nn.functional.relu(gate)
            x = self.conv(x * gate)
            return x
        elif self.enctype == 'map':
            map = self.relu(self.map).repeat_interleave(self.sample[0], dim=2).repeat_interleave(self.sample[1], dim=3)
            x = self.conv(x * map)
            return x
        elif self.enctype == 'bilinear':
            # if self.enc_channels == 9:
            map = create_bilinear_coeff_map_cart_3x3(self.map[:, 0:1], self.map[:, 1:2])
            # else:
            #     map = create_bilinear_coeff_map_cart_5x5(angles, distances)
            map = self.relu(map).repeat_interleave(self.sample[0], dim=2).repeat_interleave(self.sample[1], dim=3)
            x = self.conv(x * map)
            return x


def create_bilinear_coeff_map_cart_3x3(x_disp, y_disp):
    shape = x_disp.shape
    x_disp = x_disp.reshape(-1)
    y_disp = y_disp.reshape(-1)

    # Determine the quadrant based on the signs of the displacements
    primary_indices = torch.zeros_like(x_disp, dtype=torch.long)
    primary_indices[(x_disp >= 0) & (y_disp >= 0)] = 0  # Quadrant 1
    primary_indices[(x_disp < 0) & (y_disp >= 0)] = 2   # Quadrant 2
    primary_indices[(x_disp < 0) & (y_disp < 0)] = 4    # Quadrant 3
    primary_indices[(x_disp >= 0) & (y_disp < 0)] = 6   # Quadrant 4

    # Define the number of directions
    num_directions = 8

    # Compute the indices for the primary and secondary directions
    secondary_indices = ((primary_indices + 1) % num_directions).long()
    tertiary_indices = (primary_indices - 1).long()
    tertiary_indices[tertiary_indices < 0] = num_directions - 1

    x_disp = x_disp.abs()
    y_disp = y_disp.abs()

    coeffs = torch.zeros((x_disp.size(0), num_directions + 1), device=x_disp.device)
    batch_indices = torch.arange(x_disp.size(0), device=x_disp.device)
    coeffs[batch_indices, primary_indices] = (x_disp * y_disp)
    coeffs[batch_indices, secondary_indices] = x_disp * (1 - y_disp)
    coeffs[batch_indices, tertiary_indices] = (1 - x_disp) * y_disp
    coeffs[batch_indices, -1] = (1 - x_disp) * (1 - y_disp)

    swappers = (primary_indices == 0) | (primary_indices == 4)
    coeffs[batch_indices[swappers], secondary_indices[swappers]] = (1 - x_disp[swappers]) * y_disp[swappers]
    coeffs[batch_indices[swappers], tertiary_indices[swappers]] = x_disp[swappers] * (1 - y_disp[swappers])

    coeffs = coeffs.view(shape[0], shape[2], shape[3], num_directions + 1).permute(0, 3, 1, 2)
    reorderer = [0, 1, 2, 7, 8, 3, 6, 5, 4]
    return coeffs[:, reorderer, :, :]


if __name__ == '__main__':
    # Instantiate the CoordGate module
    enc_channels = 32
    out_channels = 32
    size = [256, 256]
    enctype = 'pos'
    encoding_layers = 2
    initialiser = torch.rand((out_channels, 2))
    kwargs = {'encoding_layers': encoding_layers, 'initialiser': initialiser}
    block = CoordGate(enc_channels, out_channels, size, enctype, **kwargs)

    # Generate random input data
    input_size = (1, enc_channels, size[0], size[1])
    input_data = torch.rand(input_size)

    # Forward pass
    output = block(input_data)

    # Print the input and output shapes
    print("Input size:", input_data.size())
    print("Output size:", output.size())

(2) Efficient Multi-Scale Attention Module with Cross-Spatial Learning——EMA, efficient multi-scale attention with cross-spatial learning; general for 2D CV tasks

Paper: https://arxiv.org/abs/2305.13563
Code: https://github.com/YOLOonMe/EMA-attention-module

(3) Context-Aware Crowd Counting——CAN (context-aware module), from crowd counting; general for 2D images

Paper: https://arxiv.org/abs/1811.10452
Code: https://github.com/weizheliu/Context-Aware-Crowd-Counting?tab=readme-ov-file

(4) Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection——SSPCAB, a plug-and-play module from image and video anomaly detection; general for 2D CV tasks

Paper: https://arxiv.org/abs/2111.09099
Code: https://github.com/ristea/sspcab

(5) Dynamic Filter Networks——processes images with dynamically generated, input-conditioned filter weights; general for 2D images

Paper: https://arxiv.org/abs/1605.09673
Code: https://github.com/dbbert/dfn?tab=readme-ov-file

6. Point Clouds

(1) Adaptive Graph Convolution for Point Cloud Analysis——for point-cloud classification and segmentation

Paper: https://arxiv.org/abs/2108.08035
Code: https://github.com/hrzhou2/AdaptConv-master

(2) GeoConv: Geodesic Guided Convolution for Facial Action Unit Recognition——GeoConv, a plug-and-play convolution module for point clouds

Paper: https://arxiv.org/abs/2003.03055

(3) PnP-3D: A Plug-and-Play for 3D Point Clouds——PnP-3D, a plug-and-play module that boosts point-cloud network performance

Paper: https://arxiv.org/abs/2108.07378
Code: https://github.com/ShiQiu0419/pnp-3d

(4) Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis——Point-NN, plug-and-play for point-cloud tasks

Paper: https://arxiv.org/abs/2303.08134
Code: https://github.com/ZrrSkywalker/Point-NN

(5) PF-Net: Point Fractal Network for 3D Point Cloud Completion——PF-Net, a plug-and-play module from point-cloud completion

Paper: https://arxiv.org/abs/2003.00410
Code: https://github.com/zztianzz/PF-Net-Point-Fractal-Network?tab=readme-ov-file

(6) PRA-Net: Point Relation-Aware Network for 3D Point Cloud Analysis——ISL (intra-region structure learning), plug-and-play for point-cloud tasks

Paper: https://arxiv.org/abs/2112.04903
Code: https://github.com/XiwuChen/PRA-Net

(7) KPConv: Flexible and Deformable Convolution for Point Clouds——KPConv encoder, point-based feature extraction for point clouds

Paper: https://arxiv.org/abs/1904.08889
Code: https://github.com/HuguesTHOMAS/KPConv

7. Convolution Modules

(1) DEA-Net: Single Image Dehazing Based on Detail-Enhanced Convolution and Content-Guided Attention——applicable to essentially all 2D CV tasks

Paper: https://arxiv.org/abs/2301.04805
Code: https://github.com/cecret3350/DEA-Net

(2) Channel-wise Topology Refinement Graph Convolution for Skeleton-Based Action Recognition——CTR-GC, plug-and-play channel-wise topology refinement graph convolution for skeleton-based action recognition

Paper: https://arxiv.org/abs/2107.12213
Code: https://github.com/Uason-Chen/CTR-GCN

(3) Wavelet Convolutions for Large Receptive Fields——wavelet-transform convolution, ECCV 2024; general for 2D CV tasks

Paper: https://arxiv.org/abs/2407.05848
Code: https://github.com/BGU-CS-VIL/WTConv

(4) TVConv: Efficient Translation Variant Convolution for Layout-aware Visual Processing——TVConv (CVPR 2022): efficient translation-variant convolution for layout-aware visual processing; usable for medical image segmentation and face recognition

Paper: https://arxiv.org/abs/2203.10489
Code: https://github.com/JierunChen/TVConv

(5) Dynamic Convolution: Attention over Convolution Kernels——CVPR 2020: dynamic convolution in 1D, 2D, and 3D variants, plug-and-play

Paper: https://arxiv.org/abs/1912.03458
Code: https://github.com/kaijieshi7/Dynamic-convolution-Pytorch?tab=readme-ov-file
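Dynamic convolution keeps K parallel kernels and mixes them with an input-dependent softmax attention before a single convolution is applied. The sketch below implements that idea with the per-sample grouped-convolution trick; it is a simplified reading of the approach, not the authors' code (their version, for instance, also anneals the softmax temperature during training).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    # K parallel kernels mixed by input-dependent attention
    # (a sketch of the idea, not the paper's exact implementation).
    def __init__(self, in_ch, out_ch, k=3, K=4):
        super().__init__()
        self.K, self.in_ch, self.out_ch, self.k = K, in_ch, out_ch, k
        self.weight = nn.Parameter(torch.randn(K, out_ch, in_ch, k, k) * 0.02)
        self.attn = nn.Linear(in_ch, K)

    def forward(self, x):
        b = x.size(0)
        # one attention weight per kernel, computed from global average pooling
        a = torch.softmax(self.attn(x.mean(dim=(2, 3))), dim=1)        # (b, K)
        w = torch.einsum('bk,koihw->boihw', a, self.weight)            # per-sample kernel
        w = w.reshape(b * self.out_ch, self.in_ch, self.k, self.k)
        x = x.reshape(1, b * self.in_ch, *x.shape[2:])
        y = F.conv2d(x, w, padding=self.k // 2, groups=b)              # grouped-conv trick
        return y.reshape(b, self.out_ch, *y.shape[2:])

y = DynamicConv2d(8, 16)(torch.rand(2, 8, 32, 32))
print(y.shape)  # torch.Size([2, 16, 32, 32])
```

The grouped-convolution reshape lets every sample in the batch use its own aggregated kernel in one `conv2d` call, which is why the module stays cheap at inference time.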

(6) Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition——pyramidal convolution, plug-and-play for nearly all computer vision tasks

Paper: https://arxiv.org/abs/2006.11538
Code: https://github.com/iduta/pyconv

(7) HCF-Net: Hierarchical Context Fusion Network for Infrared Small Object Detection——MDCR, a multi-dilation-rate channel convolution module; plug-and-play for object detection and other computer vision tasks

Paper: https://arxiv.org/abs/2403.10778
Code: https://github.com/zhengshuchen/HCFNet

(8) CondConv: Conditionally Parameterized Convolutions for Efficient Inference——classic dynamic convolution; suitable for nearly all CV image tasks

Paper: https://arxiv.org/abs/1904.04971
Code: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/condconv

(9) DO-Conv: Depthwise Over-parameterized Convolutional Layer——a drop-in replacement for standard convolution in image-processing tasks

Paper: https://arxiv.org/abs/2006.12030
Code: https://github.com/yangyanli/DO-Conv

(10) Run, Don't Walk: Chasing Higher FLOPS for Faster Neural Networks

Paper: https://arxiv.org/abs/2303.03667
Code: https://github.com/JierunChen/FasterNet

(11) Complex Matrix Inversion via Real Matrix Inversions——large-kernel convolutional downsampling module

Paper: https://arxiv.org/abs/2208.01239
Code: https://github.com/zhen06/Complex-Matrix-Inversion

(12) Wavelet Convolutions for Large Receptive Fields——ECCV 2024 | plug-and-play wavelet-transform convolution | general for computer vision tasks

Paper: https://arxiv.org/abs/2407.05848
Code: https://github.com/BGU-CS-VIL/WTConv

(13) Dynamic Convolution: Attention over Convolution Kernels——CVPR 2020 | applicable to all CV tasks

Paper: https://arxiv.org/abs/1912.03458
Code: https://github.com/kaijieshi7/Dynamic-convolution-Pytorch

(14) LDConv: Linear Deformable Convolution for Improving Convolutional Neural Networks——object detection | 2024 SCI journal | linear deformable plug-and-play convolution, general across CV tasks

Paper: https://arxiv.org/abs/2311.11587v3
Code: https://github.com/CV-ZhangXin/LDConv

(15) AKConv: Convolutional Kernel with Arbitrary Sampled Shapes and Arbitrary Number of Parameters

Paper: https://arxiv.org/abs/2311.11587v1
Code: https://github.com/DL-CNN/AKConv/blob/main/README.md

(16) Adaptive Rectangular Convolution for Remote Sensing Pansharpening——CVPR 2025

Paper: https://arxiv.org/abs/2503.00467
Code: https://github.com/WangXueyang-uestc/ARConv

(17) Pinwheel-shaped Convolution and Scale-based Dynamic Loss for Infrared Small Target Detection——AAAI 2025

Paper: https://arxiv.org/abs/2412.16986
Code: https://github.com/JN-Yang/PConv-SDloss-Data

(18) BHViT: Binarized Hybrid Vision Transformer——CVPR 2025

Paper: https://arxiv.org/abs/2503.02394
Code: https://github.com/IMRL/BHViT

(19) Efficient Frequency-Domain Image Deraining with Contrastive Regularization

Paper: https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/05751.pdf
Code: https://github.com/deng-ai-lab/FADformer

8. Video Prediction

(1) SimVP: Simpler yet Better Video Prediction

Paper: https://arxiv.org/abs/2206.05099
Code: https://github.com/ryok/SimVP-Simpler-yet-Better-Video-Prediction

9. 3D Tasks

(1) PoseBERT: A Generic Transformer Module for Temporal 3D Human Modeling——general for 3D tasks

Paper: https://arxiv.org/abs/2208.10211
Code: https://github.com/naver/posebert

(2) A Geometric Knowledge Oriented Single-Frame 2D-to-3D Human Absolute Pose Estimation Method——injects high-dimensional geometric priors to improve efficiency and interpretability in 3D human pose estimation

Paper: https://ieeexplore.ieee.org/document/10131895
Code: https://github.com/Humengxian/GKONet

(3) Beyond Self-Attention: Deformable Large Kernel Attention for Medical Image Segmentation——WACV 2024; plug-and-play for 3D vision tasks

Paper: https://arxiv.org/abs/2309.00121
Code: https://github.com/xmindflow/deformableLKA

10. Mamba

(1) Convolutional State Space Models for Long-Range Spatiotemporal Modeling——Mamba strikes back! General for 2D CV tasks

Paper: https://arxiv.org/abs/2310.19694
Code: https://github.com/NVlabs/ConvSSM

(2) nnMamba: 3D Biomedical Image Segmentation, Classification and Landmark Detection with State Space Model——for 3D CV tasks

Paper: https://arxiv.org/abs/2402.03526
Code: https://github.com/lhaof/nnMamba

(3) TimeMachine: A Time Series is Worth 4 Mambas for Long-term Forecasting——uses Mamba state-space models to capture long-range dependencies in multivariate time series; for time-series tasks

Paper: https://arxiv.org/abs/2403.09898
Code: https://github.com/Atik-Ahamed/TimeMachine

(4) MambaIR: A Simple Baseline for Image Restoration with State-Space Model——plug-and-play Mamba module with channel attention and local enhancement

Paper: https://arxiv.org/abs/2402.15648
Code: https://github.com/csguoh/MambaIR

(5) RSCaMa: Remote Sensing Image Change Captioning with State Space Model——RSCaMa, a joint spatio-temporal Mamba module for data with spatio-temporal structure, e.g. remote-sensing change detection, video understanding, and spatio-temporal forecasting

Paper: https://arxiv.org/abs/2404.18895
Code: https://github.com/Chen-Yang-Liu/RSCaMa

(6) MambaIR: A Simple Baseline for Image Restoration with State-Space Model——ECCV 2024 | the RSSG Mamba module

Paper: https://arxiv.org/abs/2402.15648
Code: https://github.com/csguoh/MambaIR

(7) Jamba: A Hybrid Transformer-Mamba Language Model——suited to both CV and NLP tasks

Paper: https://arxiv.org/abs/2403.19887
Code: https://huggingface.co/ai21labs/Jamba-v0.1

(8) VMamba: Visual State Space Model——PVMamba, a parallelized visual Mamba plug-and-play module; medical image segmentation and general CV tasks

Paper: https://arxiv.org/abs/2401.10166
Code: https://github.com/MzeroMiko/VMamba

(9) CM-UNet: Hybrid CNN-Mamba UNet for Remote Sensing Image Semantic Segmentation——CSMamba decoder, plug-and-play; remote-sensing semantic segmentation, and general for segmentation, detection, and other CV tasks

Paper: https://arxiv.org/abs/2405.10530
Code: https://github.com/XiaoBuL/CM-UNet

(10) SegMamba: Long-range Sequential Modeling Mamba for 3D Medical Image Segmentation——convolution modules usable inside Mamba, in 2D and 3D versions; plug-and-play

Paper: https://arxiv.org/html/2401.13560v3
Code: https://github.com/ge-xing/SegMamba

(11) MobileMamba: Lightweight Multi-Receptive Visual Mamba Network

Paper: https://arxiv.org/abs/2411.15941
Code: https://github.com/lewandofskee/MobileMamba

(12) Wavelet-based Mamba with Fourier Adjustment for Low-Light Image Enhancement

Paper: https://arxiv.org/abs/2410.20314
Code: https://github.com/mcpaulgeorge/WalMaFa

(13) MambaOut: Do We Really Need Mamba for Vision?——CVPR 2025

Paper: https://arxiv.org/abs/2405.07992
Code: https://github.com/yuweihao/MambaOut

(14) EfficientViM: Efficient Vision Mamba with Hidden State Mixer based State Space Duality

Paper: https://arxiv.org/abs/2411.15241
Code: https://github.com/mlvlab/EfficientViM

11. Diffusion Models

(1) FreeU: Free Lunch in Diffusion U-Net——training-free, plug-and-play improvement to the U-Net in diffusion models

Paper: https://arxiv.org/abs/2309.11497
Code: https://chenyangsi.top/FreeU/
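FreeU's training-free trick is to rescale the two feature streams that meet at each U-Net skip connection: amplify the backbone features (a factor b > 1) and attenuate the skip features (a factor s < 1). The sketch below shows only that reweighting; the actual method additionally filters the skip features in the Fourier domain and applies b to only part of the backbone channels. `freeu_fuse` and the constants are illustrative.

```python
import torch

def freeu_fuse(backbone_feat, skip_feat, b=1.2, s=0.9):
    # FreeU-style reweighting at a U-Net skip connection (simplified sketch):
    # boost the backbone stream, dampen the skip stream, then concatenate
    # as a plain U-Net decoder block would.
    return torch.cat([backbone_feat * b, skip_feat * s], dim=1)

out = freeu_fuse(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 128, 32, 32])
```

Since the change is a pair of scalar multiplications at inference time, it can be applied to a pretrained diffusion U-Net without any retraining, which is what makes it a "free lunch".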

12. Multimodal

(1) PS-Mixer: A Polar-Vector and Strength-Vector Mixer Model for Multimodal Sentiment Analysis——lets features from different modalities interact fully in both horizontal and vertical directions

Paper: https://www.sciencedirect.com/science/article/abs/pii/S0306457322003302
Code: https://github.com/metaphysicser/PS-Mixer

(2) Bi-directional Adapter for Multi-modal Tracking——Bi_direct_adapter (generic bidirectional adapter), AAAI 2024; for multimodal tasks

Paper: https://arxiv.org/abs/2312.10611
Code: https://github.com/SparkTempest/BAT

13. KAN

(1) KAN: Kolmogorov-Arnold Networks——KAN, and how to stitch it into a model in practice

Paper: https://arxiv.org/abs/2404.19756
Code: https://github.com/KindXiaoming/pykan

(2) SCKansformer: Fine-Grained Classification of Bone Marrow Cells via Kansformer Backbone and Hierarchical Attention Mechanisms——KAN + SCConv | SCKansformer

Paper: https://arxiv.org/abs/2406.09931
Code: https://github.com/JustlfC03/SCKansformer

14. Upsampling

(1) Learning to Upsample by Learning to Sample——ICCV 2023 | general for 2D images

Paper: https://arxiv.org/abs/2308.15085
Code: https://github.com/tiny-smart/dysample

15. NLP

(1) CorNET: Deep Learning Framework for PPG-Based Heart Rate Estimation and Biometric Identification in Ambulant Environment——CorNet (plug-and-play NLP module): learns label correlations and uses that knowledge to enhance label predictions

Paper: https://ieeexplore.ieee.org/document/8607019

16. Speech Recognition

(1) FAdam: Adam is a natural gradient optimizer using diagonal empirical Fisher information——FAdam, a plug-and-play optimizer for speech recognition, NLP, and CV

Paper: https://arxiv.org/abs/2405.12807
Code: https://github.com/lessw2020/FAdam_PyTorch

17. Human Pose Estimation

(1) SmoothNet: A Plug-and-Play Network for Refining Human Poses in Videos——SmoothNet (ECCV 2022): plug-and-play pose refinement that can be combined with any 2D or 3D pose estimation network

Paper: https://arxiv.org/abs/2112.13715
Code: https://github.com/cure-lab/SmoothNet

18. Feature Fusion

(1) Dynamic Feature Fusion for Semantic Edge Detection——DFF: a dynamic feature fusion module that can be stitched into Transformers; suited to 2D and 3D segmentation

Paper: https://arxiv.org/abs/1902.09104
Code: https://github.com/Lavender105/DFF

(2) HCF-Net: Hierarchical Context Fusion Network for Infrared Small Object Detection——DASI: plug-and-play feature fusion for object detection and other computer vision tasks

Paper: https://arxiv.org/abs/2403.10778
Code: https://github.com/zhengshuchen/HCFNet

(3) PnPNet: Pull-and-Push Networks for Volumetric Segmentation with Boundary Confusion——SDM: plug-and-play feature fusion, works in both 2D and 3D

Paper: https://arxiv.org/html/2312.08323v1
Code: https://github.com/AlexYouXin/PnPNet

(4) DS-TransUNet: Dual Swin Transformer U-Net for Medical Image Segmentation——TIF: feature fusion module usable on skip connections

Paper: https://arxiv.org/abs/2106.06716
Code: https://github.com/TianBaoGe/DS-TransUNet

(5) Image Fusion in the Loop of High-Level Vision Tasks: A Semantic-Aware Real-Time Infrared and Visible Image Fusion Network——SFFusion: feature fusion module in 2D and 3D variants; extracts and fuses shallow features, a useful source of ideas for novelty

Paper: https://www.sciencedirect.com/science/article/abs/pii/S1566253521002542?via%3Dihub
Code: https://github.com/Linfeng-Tang/SeAFusion

(6) DEA-Net: Single Image Dehazing Based on Detail-Enhanced Convolution and Content-Guided Attention——image dehazing | TIP 2024 | CGAFusion: plug-and-play fusion of low-level and high-level features

Paper: https://ieeexplore.ieee.org/document/10411857
Code: https://github.com/cecret3350/DEA-Net

(7) DuAT: Dual-Aggregation Transformer Network for Medical Image Segmentation——PRCV 2023 | GLSA: global-local spatial feature fusion, plug-and-play; suited to medical image segmentation, small object detection, low-light enhancement, and CV tasks in general

Paper: https://arxiv.org/abs/2212.11677
Code: https://github.com/Barrett-python/DuAT

(8) ConDSeg: A General Medical Image Segmentation Framework via Contrast-Driven Feature Enhancement——AAAI 2025

Paper: https://arxiv.org/abs/2412.08345
Code: https://github.com/Mengqi-Lei/ConDSeg

19. AI + Medicine

(1) CLEEGN: A Convolutional Neural Network for Plug-and-Play Automatic EEG Reconstruction——CLEEGN: plug-and-play module for automatic EEG signal reconstruction

Paper: https://arxiv.org/abs/2210.05988
Code: https://github.com/CECNL/CLEEGN

(2) GLSANet: Global-Local Self-Attention Network for Remote Sensing Image Semantic Segmentation——(PRCV 2023) GLSA, a plug-and-play spatial aggregation module for medical image segmentation

Paper: https://ieeexplore.ieee.org/document/10011428
Code: https://github.com/EvilGhostY/MMRSSeg/blob/main/models/CGGLNet.py

(3) SvANet: A Scale-variant Attention-based Network for Small Medical Object Segmentation——scale-variant attention for small medical object segmentation

Paper: https://arxiv.org/html/2407.07720v1
Code: https://github.com/anthonyweidai/SvANet

(4) EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation——CVPR 2024 | medical image segmentation | EMCAD, an efficient multi-scale convolutional attention decoder

Paper: https://arxiv.org/abs/2405.06880
Code: https://github.com/SLDGroup/EMCAD

(5) DEFN: Dual-Encoder Fourier Group Harmonics Network for Three-Dimensional Indistinct-Boundary Object Segmentation——3D medical image segmentation and reconstruction | DEFN

Paper: https://arxiv.org/abs/2311.00483
Code: https://github.com/IMOP-lab/DEFN-pytorch

(6) MSA²Net: Multi-scale Adaptive Attention-guided Network for Medical Image Segmentation——BMVC 2024 | medical image segmentation | MASAG, a multi-scale adaptive spatial attention gate

Paper: https://arxiv.org/abs/2407.21640
Code: https://github.com/xmindflow/MSA-2Net

(7) Vision-LSTM: xLSTM as Generic Vision Backbone——applicable to medical image segmentation

Paper: https://arxiv.org/abs/2406.04303
Code: https://nx-ai.github.io/vision-lstm/

20. For Use in Transformers or U-Nets

(1) Dual Attention Network for Scene Segmentation——DA_Block: plug-and-play, can be stitched into a Transformer or U-Net

Paper: https://arxiv.org/abs/1809.02983
Code: https://github.com/junfu1115/DANet

21. Image Restoration

(1) Simple Baselines for Image Restoration——NAF block: plug-and-play, for image restoration

Paper: https://arxiv.org/abs/2204.04676
Code: https://github.com/megvii-research/NAFNet

(2) DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial Dynamics in Brain Networks——an attention module suited to image restoration tasks

Paper: https://www.sciencedirect.com/science/article/pii/S1361841525000106
Code: https://github.com/bishalth01/DSAM

(3) Restoring Images in Adverse Weather Conditions via Histogram Transformer

Paper: https://arxiv.org/abs/2407.10172
Code: https://github.com/sunshangquan/Histoformer

(4) Adapt or Perish: Adaptive Sparse Transformer with Attentive Feature Refinement for Image Restoration——CVPR 2024 | image restoration

Paper: https://ieeexplore.ieee.org/document/10657913
Code: https://github.com/joshyZhou/AST

22. Lightweight Models

(1) MobileNetV4 – Universal Models for the Mobile Ecosystem——the UIB block from MobileNetV4

Paper: https://arxiv.org/abs/2404.10518
Code: https://github.com/tensorflow/models/blob/master/official/vision/modeling/backbones/mobilenet.py

…continuously updated!
