
Principles Explained

[Transformer Series (2)] Attention, Self-Attention, Multi-Head Attention, Channel Attention, and Spatial Attention, Explained in Detail

Self-Attention
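Self-attention lets every position in a sequence attend to every other position. The implementation below follows the standard scaled dot-product formulation from "Attention Is All You Need":

$$\text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{QK^\top}{\sqrt{d}}\right)V$$

where Q, K, and V are learned linear projections of the input and d is the feature dimension used for scaling. A padding mask is applied before the softmax so that attention ignores positions added by pad_sequences.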

import torch
import torch.nn as nn

# Self-attention mechanism
class SelfAttention(nn.Module):
    def __init__(self, input_dim):
        super(SelfAttention, self).__init__()
        self.query = nn.Linear(input_dim, input_dim)
        self.key = nn.Linear(input_dim, input_dim)
        self.value = nn.Linear(input_dim, input_dim)

    def forward(self, x, mask=None):
        batch_size, seq_len, input_dim = x.shape
        q = self.query(x)
        k = self.key(x)
        v = self.value(x)
        # Scaled dot-product scores: (batch_size, seq_len, seq_len)
        attn_weights = torch.matmul(q, k.transpose(-2, -1)) / torch.sqrt(
            torch.tensor(input_dim, dtype=torch.float))
        if mask is not None:
            # mask: (batch_size, seq_len) -> (batch_size, 1, seq_len) for broadcasting
            mask = mask.unsqueeze(1)
            attn_weights = attn_weights.masked_fill(mask == 0, float('-inf'))
        attn_scores = torch.softmax(attn_weights, dim=-1)
        attended_values = torch.matmul(attn_scores, v)
        return attended_values

# Auto-padding helper: pads variable-length (seq_len, input_dim) tensors
# to a common length and returns the padded batch plus a validity mask
def pad_sequences(sequences, max_len=None):
    batch_size = len(sequences)
    input_dim = sequences[0].shape[-1]
    lengths = torch.tensor([seq.shape[0] for seq in sequences])
    max_len = max_len or lengths.max().item()
    padded = torch.zeros(batch_size, max_len, input_dim)
    for i, seq in enumerate(sequences):
        seq_len = seq.shape[0]
        padded[i, :seq_len, :] = seq
    # mask[i, j] = 1 where position j holds a real token of sequence i
    mask = torch.arange(max_len).expand(batch_size, max_len) < lengths.unsqueeze(1)
    return padded, mask.long()

if __name__ == '__main__':
    input_dim = 128
    seq_len_1 = 3
    seq_len_2 = 5
    x1 = torch.randn(seq_len_1, input_dim)
    x2 = torch.randn(seq_len_2, input_dim)
    target_seq_len = 10
    padded_x, mask = pad_sequences([x1, x2], target_seq_len)
    selfattention = SelfAttention(input_dim)
    attention = selfattention(padded_x, mask)  # pass the mask so padding is ignored
    print(attention)
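For reference, PyTorch 2.0 and later expose this same computation as the built-in torch.nn.functional.scaled_dot_product_attention. A minimal sketch, with tensor shapes mirroring the example above (note that its boolean attn_mask uses True to mean "attend to this position"):

import torch
import torch.nn.functional as F

# Minimal sketch using the built-in kernel (PyTorch >= 2.0)
q = torch.randn(2, 10, 128)  # (batch_size, seq_len, input_dim)
k = torch.randn(2, 10, 128)
v = torch.randn(2, 10, 128)
mask = torch.zeros(2, 10, dtype=torch.bool)  # True = real token, False = padding
mask[0, :3] = True  # first sequence: 3 real tokens
mask[1, :5] = True  # second sequence: 5 real tokens

# attn_mask broadcasts to (batch_size, query_len, key_len); True means "attend"
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask.unsqueeze(1))
print(out.shape)  # torch.Size([2, 10, 128])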

Multi-Head Self-Attention
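Multi-head attention runs several scaled dot-product attentions in parallel on lower-dimensional slices of the projections and concatenates the results:

$$\text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, \ldots, \text{head}_h)\,W^O, \qquad \text{head}_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)$$

Note that the minimal implementation below uses a single set of projections split across heads and omits the final output projection W^O from the standard formulation.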

import torch
import torch.nn as nn

# Multi-head self-attention module
class MultiHeadSelfAttention(nn.Module):
    def __init__(self, input_dim, num_heads):
        super(MultiHeadSelfAttention, self).__init__()
        self.num_heads = num_heads
        self.head_dim = input_dim // num_heads
        self.query = nn.Linear(input_dim, input_dim)
        self.key = nn.Linear(input_dim, input_dim)
        self.value = nn.Linear(input_dim, input_dim)

    def forward(self, x, mask=None):
        batch_size, seq_len, input_dim = x.shape
        # Split the projections into heads:
        # after transpose(1, 2) each is (batch_size, num_heads, seq_len, head_dim)
        q = self.query(x).view(batch_size, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.key(x).view(batch_size, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
        v = self.value(x).view(batch_size, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
        # Scaled dot-product scores per head
        attn_weights = torch.matmul(q, k.transpose(-2, -1)) / torch.sqrt(
            torch.tensor(self.head_dim, dtype=torch.float32))
        # Apply the padding mask
        if mask is not None:
            # mask: (batch_size, seq_len) -> (batch_size, 1, 1, seq_len) for broadcasting
            mask = mask.unsqueeze(1).unsqueeze(2)
            attn_weights = attn_weights.masked_fill(mask == 0, float('-inf'))
        attn_scores = torch.softmax(attn_weights, dim=-1)
        # Attention-weighted sum, then merge the heads back together
        attended_values = torch.matmul(attn_scores, v).transpose(1, 2).contiguous().view(
            batch_size, seq_len, input_dim)
        return attended_values

# Auto-padding helper (same as in the self-attention example)
def pad_sequences(sequences, max_len=None):
    batch_size = len(sequences)
    input_dim = sequences[0].shape[-1]
    lengths = torch.tensor([seq.shape[0] for seq in sequences])
    max_len = max_len or lengths.max().item()
    padded = torch.zeros(batch_size, max_len, input_dim)
    for i, seq in enumerate(sequences):
        seq_len = seq.shape[0]
        padded[i, :seq_len, :] = seq
    mask = torch.arange(max_len).expand(batch_size, max_len) < lengths.unsqueeze(1)
    return padded, mask.long()

if __name__ == '__main__':
    heads = 2
    input_dim = 128
    seq_len_1 = 3
    seq_len_2 = 5
    x1 = torch.randn(seq_len_1, input_dim)
    x2 = torch.randn(seq_len_2, input_dim)
    target_seq_len = 10
    padded_x, mask = pad_sequences([x1, x2], target_seq_len)
    multiheadattention = MultiHeadSelfAttention(input_dim, heads)
    attention = multiheadattention(padded_x, mask)
    print(attention)
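PyTorch also ships this as a ready-made module, torch.nn.MultiheadAttention, which additionally applies the output projection W^O. A minimal sketch of the equivalent call (note that key_padding_mask uses the opposite convention from the mask above: True marks padding positions to ignore):

import torch
import torch.nn as nn

input_dim, heads = 128, 2
mha = nn.MultiheadAttention(embed_dim=input_dim, num_heads=heads, batch_first=True)

padded_x = torch.randn(2, 10, input_dim)
mask = torch.zeros(2, 10, dtype=torch.long)
mask[0, :3] = 1  # first sequence: 3 real tokens
mask[1, :5] = 1  # second sequence: 5 real tokens

# key_padding_mask: True = padding position to ignore (inverse of mask above)
out, weights = mha(padded_x, padded_x, padded_x, key_padding_mask=(mask == 0))
print(out.shape)  # torch.Size([2, 10, 128])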