
Implementing the CBOW Model with PyTorch

Introduction: A Revolution in Semantic Representation

Natural language processing (NLP) has undergone a paradigm shift from rule-based methods to distributed representations, and the most influential breakthrough in that shift is the continuous vector representation of word meaning. Before Word2Vec, traditional text representations such as TF-IDF and one-hot encoding suffered from significant drawbacks:

  • The dimensionality problem: vector dimensionality grows in proportion to vocabulary size and can exceed one million on large corpora
  • Missing semantics: such vectors cannot capture semantic relationships between words (e.g., the gender correspondence between "king" and "queen")
  • Sparsity: almost all entries in each vector are zero, wasting computation and memory

This article works through the mathematics of the CBOW model and presents a complete PyTorch implementation, covering the full pipeline from data preprocessing to applying the learned word vectors.

1. Theoretical Foundations of the CBOW Model

1.1 Model Architecture and Design Rationale

The core idea of CBOW (Continuous Bag-of-Words) is to predict a center word from its surrounding context. Whereas traditional n-gram models only count word co-occurrence frequencies, CBOW learns distributed word representations that capture deeper semantic structure.

Model input/output specification

  • Input: the index sequence of the 2m context words
  • Output: a probability distribution over the vocabulary for the center word
  • Training objective: maximize the log-likelihood $\max \sum_{t=1}^{T} \log P(w_t \mid w_{t-m}, \dots, w_{t-1}, w_{t+1}, \dots, w_{t+m})$ (a worked example of this contract follows below)
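To make the input/output contract concrete, here is a minimal sketch of how (context, target) pairs are extracted; the sentence and the window half-width m = 2 are illustrative assumptions, not taken from any corpus used later:

# A minimal sketch of the CBOW input/output contract, assuming m = 2.
sentence = ["the", "quick", "brown", "fox", "jumps", "over", "the", "dog"]
m = 2  # context window half-width; the input length is 2m = 4

# For each position t with m words on both sides, the model receives the
# 2m surrounding words and must predict the center word w_t.
for t in range(m, len(sentence) - m):
    context = sentence[t - m:t] + sentence[t + 1:t + m + 1]
    target = sentence[t]
    print(f"context={context} -> target='{target}'")
# e.g. context=['the', 'quick', 'fox', 'jumps'] -> target='brown'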

1.2 Mathematical Derivation

Let $V$ be the vocabulary size, $d$ the word vector dimension, and $2m$ the context window size.

Forward propagation

  1. Embedding layer: $v_i = E^{\top}\,\mathrm{onehot}(w_i)$ for every word $i$ in the context, where $E \in \mathbb{R}^{V \times d}$ is the embedding matrix (the transpose keeps the dimensions consistent; in practice this is a simple row lookup)

  2. Context aggregation, for which two schemes are common:

    • Mean aggregation: $h = \frac{1}{2m} \sum_{i=1}^{2m} v_i$
    • Sum aggregation: $h = \sum_{i=1}^{2m} v_i$
  3. Output layer: $z = Wh + b$, where $W \in \mathbb{R}^{V \times d}$ is the output weight matrix and $b \in \mathbb{R}^{V}$ is the bias

  4. Probability normalization: $P(w_t \mid \text{context}) = \mathrm{softmax}(z)_t = \frac{\exp(z_t)}{\sum_{j=1}^{V} \exp(z_j)}$
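The four steps above can be checked numerically with plain PyTorch. The following is a minimal sketch with toy sizes ($V = 10$, $d = 4$, $m = 2$ are arbitrary choices for illustration):

import torch
import torch.nn.functional as F

# Toy dimensions for a single forward pass through steps 1-4.
V, d, m = 10, 4, 2
E = torch.randn(V, d)            # embedding matrix E ∈ R^{V×d}
W = torch.randn(V, d)            # output weights W ∈ R^{V×d}
b = torch.zeros(V)               # output bias b ∈ R^V

context_idx = torch.tensor([1, 3, 5, 7])  # indices of the 2m context words
v = E[context_idx]               # step 1: row lookup, equivalent to E^T·onehot, [2m, d]
h = v.mean(dim=0)                # step 2: mean aggregation, [d]
z = W @ h + b                    # step 3: scores over the vocabulary, [V]
p = F.softmax(z, dim=0)          # step 4: P(w_t | context), sums to 1
print(p.sum())                   # tensor(1.0000)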

1.3 Loss Function and Optimization

Objective function: the negative log-likelihood loss $L = -\sum_{t=1}^{T} \log P(w_t \mid \text{context}_t)$

Parameter optimization: gradients are computed by backpropagation through the chain rule, e.g. $\frac{\partial L}{\partial E} = \frac{\partial L}{\partial z} \cdot \frac{\partial z}{\partial E}$ for the embedding matrix; the sketch below shows how autograd handles this.
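As a quick sanity check, this sketch verifies that negative log-likelihood over log-softmax outputs (the combination used by the implementation below) matches PyTorch's built-in cross-entropy, and that autograd produces the gradients discussed above; all tensor values are toy data:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(3, 10, requires_grad=True)  # [batch, V], toy values
targets = torch.tensor([2, 5, 7])

# NLL over log-softmax is exactly the negative log-likelihood L above,
# and is identical to cross-entropy on the raw logits.
nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)
ce = F.cross_entropy(logits, targets)
print(torch.allclose(nll, ce))   # True

# backward() computes ∂L/∂(logits); during training the chain rule then
# propagates these gradients into the embedding and output matrices.
nll.backward()
print(logits.grad.shape)         # torch.Size([3, 10])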

2. Complete Code Implementation

2.1 Environment Setup and Dependencies

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import numpy as np
from collections import Counter, defaultdict
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
from tqdm import tqdm
import logging
import json

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Device selection: prefer CUDA, fall back to Apple MPS, then CPU
device = torch.device("cuda" if torch.cuda.is_available()
                      else "mps" if torch.backends.mps.is_available()
                      else "cpu")
logger.info(f"Using device: {device}")

2.2 Advanced Data Preprocessing Module

class TextProcessor:
    """Enhanced text preprocessing class"""
    def __init__(self, min_freq=2, window_size=2, unk_token="<UNK>"):
        self.min_freq = min_freq
        self.window_size = window_size
        self.unk_token = unk_token
        self.word2idx = {}
        self.idx2word = {}
        self.vocab_size = 0

    def build_vocab(self, texts):
        """Build the vocabulary, filtering out low-frequency words"""
        # Count word frequencies
        word_freq = Counter()
        for text in texts:
            words = self.tokenize(text)
            word_freq.update(words)
        # Drop words below the frequency threshold
        valid_words = {word for word, freq in word_freq.items() if freq >= self.min_freq}
        valid_words.add(self.unk_token)
        # Build the word/index mappings
        self.word2idx = {word: idx for idx, word in enumerate(valid_words)}
        self.idx2word = {idx: word for idx, word in enumerate(valid_words)}
        self.vocab_size = len(valid_words)
        logger.info(f"Vocabulary built, size: {self.vocab_size}")

    def tokenize(self, text):
        """Tokenize a text string"""
        # Simple whitespace tokenization; replace with a real tokenizer as needed
        return text.lower().split()

    def encode(self, words):
        """Convert a word sequence into an index sequence"""
        return [self.word2idx.get(word, self.word2idx[self.unk_token]) for word in words]

    def decode(self, indices):
        """Convert an index sequence back into words"""
        return [self.idx2word[idx] for idx in indices]

    def generate_training_data(self, texts):
        """Generate CBOW (context, target) training pairs"""
        data = []
        for text in texts:
            words = self.tokenize(text)
            indices = self.encode(words)
            for i in range(self.window_size, len(indices) - self.window_size):
                # Context word indices (window_size words on each side)
                context = (indices[i - self.window_size:i]
                           + indices[i + 1:i + self.window_size + 1])
                # Target (center) word index
                target = indices[i]
                data.append((context, target))
        logger.info(f"Generated {len(data)} training samples")
        return data


class CBOWDataset(Dataset):
    """PyTorch dataset wrapper around the (context, target) pairs"""
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        context, target = self.data[idx]
        return (torch.tensor(context, dtype=torch.long),
                torch.tensor(target, dtype=torch.long))
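A short usage sketch for the preprocessing pipeline; the three-sentence corpus is illustrative only:

# Usage sketch on a toy corpus; the sentences are illustrative.
corpus = [
    "the king rules the kingdom",
    "the queen rules the kingdom",
    "the king and the queen live in the castle",
]

processor = TextProcessor(min_freq=1, window_size=2)
processor.build_vocab(corpus)
training_data = processor.generate_training_data(corpus)

dataset = CBOWDataset(training_data)
loader = DataLoader(dataset, batch_size=4, shuffle=True)

context, target = next(iter(loader))
print(context.shape, target.shape)   # torch.Size([4, 4]) torch.Size([4])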

2.3 Enhanced CBOW Model Implementation

class EnhancedCBOW(nn.Module):
    """Enhanced CBOW model supporting several aggregation modes and regularization"""
    def __init__(self, vocab_size, embedding_dim=100, aggregation='mean',
                 dropout_rate=0.1, use_batch_norm=False):
        super().__init__()
        self.vocab_size = vocab_size
        self.embedding_dim = embedding_dim
        self.aggregation = aggregation
        # Word embedding layer
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        # Optional batch normalization
        self.batch_norm = nn.BatchNorm1d(embedding_dim) if use_batch_norm else None
        # Dropout regularization
        self.dropout = nn.Dropout(dropout_rate)
        # Output layer
        self.output = nn.Linear(embedding_dim, vocab_size)
        # Initialize weights
        self._init_weights()

    def _init_weights(self):
        """Xavier uniform initialization"""
        nn.init.xavier_uniform_(self.embeddings.weight)
        nn.init.xavier_uniform_(self.output.weight)
        nn.init.zeros_(self.output.bias)

    def forward(self, context_words):
        """Forward pass.

        Args:
            context_words: [batch_size, context_size]
        Returns:
            log_probs: [batch_size, vocab_size]
        """
        # Embedding lookup: [batch_size, context_size, embedding_dim]
        embeds = self.embeddings(context_words)
        # Context aggregation
        if self.aggregation == 'mean':
            context_vec = torch.mean(embeds, dim=1)  # [batch_size, embedding_dim]
        elif self.aggregation == 'sum':
            context_vec = torch.sum(embeds, dim=1)
        elif self.aggregation == 'max':
            context_vec, _ = torch.max(embeds, dim=1)
        else:
            raise ValueError(f"Unsupported aggregation mode: {self.aggregation}")
        # Optional batch normalization
        if self.batch_norm is not None:
            context_vec = self.batch_norm(context_vec)
        # Dropout
        context_vec = self.dropout(context_vec)
        # Output layer: [batch_size, vocab_size]
        output = self.output(context_vec)
        # Log softmax, paired with NLLLoss during training
        log_probs = F.log_softmax(output, dim=1)
        return log_probs

    def get_word_vector(self, word_idx):
        """Return the vector of a single word as a NumPy array"""
        with torch.no_grad():
            # Create the index tensor on the model's device to avoid mismatches
            idx = torch.tensor([word_idx], device=self.embeddings.weight.device)
            return self.embeddings(idx).squeeze().cpu().numpy()

    def get_similar_words(self, word_idx, top_k=10):
        """Find the top_k most similar words by cosine similarity"""
        with torch.no_grad():
            idx = torch.tensor([word_idx], device=self.embeddings.weight.device)
            word_vec = self.embeddings(idx)
            # Cosine similarity against every embedding
            embeddings_norm = F.normalize(self.embeddings.weight, p=2, dim=1)
            word_vec_norm = F.normalize(word_vec, p=2, dim=1)
            similarities = torch.mm(word_vec_norm, embeddings_norm.t()).squeeze()
            # Most similar words, excluding the query word itself
            top_indices = torch.topk(similarities, top_k + 1, largest=True).indices[1:]
            return top_indices.tolist()
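A quick smoke test of the model on dummy data; the vocabulary size, embedding dimension, and batch shape are arbitrary illustrative values:

# Smoke test with dummy indices; all sizes are illustrative.
model = EnhancedCBOW(vocab_size=100, embedding_dim=32, aggregation='mean')
dummy_context = torch.randint(0, 100, (8, 4))  # [batch_size=8, context_size=4]
log_probs = model(dummy_context)
print(log_probs.shape)               # torch.Size([8, 100])
print(log_probs.exp().sum(dim=1))    # each row sums to ~1.0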

2.4 Training Framework and Experiment Management

class CBOWTrainer:
    """CBOW model trainer"""
    def __init__(self, model, train_loader, val_loader=None):
        self.model = model.to(device)
        self.train_loader = train_loader
        self.val_loader = val_loader
        self.optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
        self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(
            self.optimizer, mode='min', patience=3, factor=0.5)
        self.criterion = nn.NLLLoss()
        self.train_losses = []
        self.val_losses = []
        self.best_val_loss = float('inf')

    def train_epoch(self):
        """Train for one epoch"""
        self.model.train()
        total_loss = 0
        progress_bar = tqdm(self.train_loader, desc="Training")
        for batch_idx, (context, target) in enumerate(progress_bar):
            context, target = context.to(device), target.to(device)
            self.optimizer.zero_grad()
            output = self.model(context)
            loss = self.criterion(output, target)
            loss.backward()
            # Gradient clipping
            torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=5.0)
            self.optimizer.step()
            total_loss += loss.item()
            # Update the progress bar
            if batch_idx % 10 == 0:
                progress_bar.set_postfix({'loss': f'{loss.item():.4f}'})
        avg_loss = total_loss / len(self.train_loader)
        self.train_losses.append(avg_loss)
        return avg_loss

    def validate(self):
        """Evaluate on the validation set"""
        if self.val_loader is None:
            return None
        self.model.eval()
        total_loss = 0
        with torch.no_grad():
            for context, target in self.val_loader:
                context, target = context.to(device), target.to(device)
                output = self.model(context)
                loss = self.criterion(output, target)
                total_loss += loss.item()
        avg_loss = total_loss / len(self.val_loader)
        self.val_losses.append(avg_loss)
        return avg_loss

    def train(self, epochs, save_path=None):
        """Full training loop"""
        logger.info("Starting training...")
        for epoch in range(epochs):
            # Training phase
            train_loss = self.train_epoch()
            # Validation phase
            val_loss = self.validate()
            # Learning-rate scheduling
            if val_loss is not None:
                self.scheduler.step(val_loss)
            # Save the best model
            if val_loss is not None and val_loss < self.best_val_loss:
                self.best_val_loss = val_loss
                if save_path:
                    self.save_model(save_path)
                    logger.info(f"Saved best model, validation loss: {val_loss:.4f}")
            # Logging
            log_msg = f"Epoch {epoch+1:03d}/{epochs} | train loss: {train_loss:.4f}"
            if val_loss is not None:
                log_msg += f" | val loss: {val_loss:.4f}"
            logger.info(log_msg)

    def save_model(self, path):
        """Save the model weights and metadata"""
        torch.save({
            'model_state_dict': self.model.state_dict(),
            'vocab_size': self.model.vocab_size,
            'embedding_dim': self.model.embedding_dim,
            'aggregation': self.model.aggregation
        }, path)

    def plot_training_curve(self):
        """Plot the training curves"""
        plt.figure(figsize=(10, 6))
        plt.plot(self.train_losses, label='train loss')
        if self.val_losses:
            plt.plot(self.val_losses, label='validation loss')
        plt.xlabel('Epoch')
        plt.ylabel('Loss')
        plt.title('CBOW training curve')
        plt.legend()
        plt.grid(True)
        plt.show()
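The pieces can be wired together end to end as follows. This is a sketch on a toy corpus (real training needs a far larger corpus, and the checkpoint name cbow_best.pt is arbitrary); note that best-model checkpointing only triggers when a validation loader is supplied:

# End-to-end wiring sketch on a toy corpus; all names and sizes are illustrative.
corpus = [
    "the king rules the kingdom",
    "the queen rules the kingdom",
    "the king and the queen live in the castle",
]
processor = TextProcessor(min_freq=1, window_size=2)
processor.build_vocab(corpus)
data = processor.generate_training_data(corpus)

train_loader = DataLoader(CBOWDataset(data), batch_size=8, shuffle=True)
model = EnhancedCBOW(vocab_size=processor.vocab_size, embedding_dim=32)

trainer = CBOWTrainer(model, train_loader)         # no validation split here
trainer.train(epochs=5, save_path="cbow_best.pt")  # checkpoint path is arbitrary
trainer.plot_training_curve()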

2.5 Word Vector Analysis and Visualization

class WordVectorAnalyzer:
    """Word vector analysis utilities"""
    def __init__(self, model, word2idx, idx2word):
        self.model = model
        self.word2idx = word2idx
        self.idx2word = idx2word
        self.embeddings = model.embeddings.weight.detach().cpu().numpy()

    def visualize_embeddings(self, words=None, perplexity=30, n_iter=1000):
        """Visualize word vectors with t-SNE"""
        if words is None:
            # Pick a subset of words to visualize
            words = list(self.word2idx.keys())[:100]
        # Keep only in-vocabulary words so labels stay aligned with points
        words = [word for word in words if word in self.word2idx]
        indices = [self.word2idx[word] for word in words]
        word_vectors = self.embeddings[indices]
        # t-SNE dimensionality reduction
        tsne = TSNE(n_components=2, perplexity=perplexity, n_iter=n_iter, random_state=42)
        vectors_2d = tsne.fit_transform(word_vectors)
        # Scatter plot
        plt.figure(figsize=(15, 12))
        plt.scatter(vectors_2d[:, 0], vectors_2d[:, 1], alpha=0.7)
        # Annotate each point with its word
        for i, word in enumerate(words):
            plt.annotate(word, (vectors_2d[i, 0], vectors_2d[i, 1]),
                         alpha=0.8, fontsize=9)
        plt.title("CBOW word vectors (t-SNE)")
        plt.xlabel("t-SNE dimension 1")
        plt.ylabel("t-SNE dimension 2")
        plt.grid(True, alpha=0.3)
        plt.show()

    def find_similar_words(self, word, top_k=10):
        """Find semantically similar words"""
        if word not in self.word2idx:
            return f"Word '{word}' is not in the vocabulary"
        word_idx = self.word2idx[word]
        similar_indices = self.model.get_similar_words(word_idx, top_k)
        similar_words = [self.idx2word[idx] for idx in similar_indices]
        return similar_words

    def analogy_test(self, word_a, word_b, word_c, top_k=5):
        """Word analogy test: word_a - word_b + word_c ≈ ?"""
        if not all(w in self.word2idx for w in [word_a, word_b, word_c]):
            return "Some words are not in the vocabulary"
        vec_a = self.embeddings[self.word2idx[word_a]]
        vec_b = self.embeddings[self.word2idx[word_b]]
        vec_c = self.embeddings[self.word2idx[word_c]]
        # Compute the analogy vector
        analogy_vec = vec_a - vec_b + vec_c
        # Cosine similarity against every embedding
        similarities = np.dot(self.embeddings, analogy_vec) / (
            np.linalg.norm(self.embeddings, axis=1) * np.linalg.norm(analogy_vec))
        # Exclude the input words
        exclude_indices = [self.word2idx[w] for w in [word_a, word_b, word_c]]
        similarities[exclude_indices] = -1
        top_indices = np.argsort(similarities)[-top_k:][::-1]
        results = [(self.idx2word[idx], similarities[idx]) for idx in top_indices]
        return results

    def save_embeddings(self, filepath, format='txt'):
        """Save the word vectors to a file"""
        if format == 'txt':
            # word2vec-style text format: header line, then one word per line
            with open(filepath, 'w', encoding='utf-8') as f:
                f.write(f"{len(self.word2idx)} {self.embeddings.shape[1]}\n")
                for word, idx in self.word2idx.items():
                    vector_str = ' '.join(map(str, self.embeddings[idx]))
                    f.write(f"{word} {vector_str}\n")
        elif format == 'npy':
            np.save(filepath, self.embeddings)
        logger.info(f"Word vectors saved to: {filepath}")
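Continuing from the training sketch in section 2.4, the analyzer can be exercised as follows; with a toy corpus the similarities and analogies will not be meaningful, and the file name cbow_vectors.txt is arbitrary:

# Analysis sketch; results are only meaningful after training on a real corpus.
analyzer = WordVectorAnalyzer(model, processor.word2idx, processor.idx2word)

# Nearest neighbours of a word (shown for the API, not the quality)
print(analyzer.find_similar_words("king", top_k=5))

# Analogy test: king - queen + castle ≈ ?  (illustrative only)
print(analyzer.analogy_test("king", "queen", "castle", top_k=3))

# t-SNE plot; perplexity must stay below the number of plotted points
analyzer.visualize_embeddings(perplexity=5)

# Save in a word2vec-style text format
analyzer.save_embeddings("cbow_vectors.txt")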
