  • 🍨 This post is a learning-log entry for the 🔗365天深度学习训练营 (365-day deep learning training camp)
  • 🍖 Original author: K同学啊 | tutoring and custom projects available

1. My Environment

1. Language: Python 3.8

2. IDE: PyCharm

3. Deep learning environment:

  • torch==1.12.1+cu113
  • torchvision==0.13.1+cu113
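A quick sanity check of this environment (a minimal sketch; the version strings in the comments are the ones listed above and may differ on your machine, and torchtext, which is used below but not listed, is assumed to be a release matching torch 1.12.x, i.e. 0.13.x):

import torch
import torchvision
import torchtext   # not listed above, but required below; 0.13.x pairs with torch 1.12.x

print(torch.__version__)          # expected: 1.12.1+cu113
print(torchvision.__version__)    # expected: 0.13.1+cu113
print(torchtext.__version__)      # assumed: 0.13.x
print(torch.cuda.is_available())  # True if the CUDA 11.3 build can see a GPU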

2. Importing the Data

import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets
import os, PIL, pathlib, warnings

warnings.filterwarnings("ignore")             # suppress warnings
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

from torchtext.datasets import AG_NEWS
train_iter = AG_NEWS(split='train')      # load the AG News dataset
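AG_NEWS yields (label, text) pairs, where the label is an integer from 1 to 4 (World, Sports, Business, Sci/Tech). A quick peek at one sample (a minimal sketch; depending on the torchtext version, train_iter may be a re-iterable datapipe or a one-shot iterator, so re-create it afterwards if needed):

label, text = next(iter(train_iter))
print(label)       # e.g. 3
print(text[:80])   # first 80 characters of the news text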

3. Building the Vocabulary

from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer('basic_english')  # returns a tokenizer function

def yield_tokens(data_iter):
    for _, text in data_iter:
        yield tokenizer(text)

vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])  # default index, used when a token is not found in the vocabulary
print(vocab(['here', 'is', 'an', 'example']))

Output: [475, 21, 30, 5297]
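Because of set_default_index, any token missing from the vocabulary maps to the index of <unk>, which is 0 since it is the first special token. A small check (the nonsense word is made up purely for illustration):

print(vocab["<unk>"])                               # 0
print(vocab(['here', 'is', 'an', 'qwertyzzz']))     # the made-up last token falls back to 0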

text_pipeline  = lambda x: vocab(tokenizer(x))
label_pipeline = lambda x: int(x) - 1
print(text_pipeline('here is the an example'))
Output: [475, 21, 2, 30, 5297]
print(label_pipeline('10'))
Output: 9

4. Generating Data Batches and an Iterator

from torch.utils.data import DataLoader

def collate_batch(batch):
    label_list, text_list, offsets = [], [], [0]
    for (_label, _text) in batch:
        # label list
        label_list.append(label_pipeline(_label))
        # text list
        processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
        text_list.append(processed_text)
        # offsets, i.e. the number of tokens in each sample
        offsets.append(processed_text.size(0))
    label_list = torch.tensor(label_list, dtype=torch.int64)
    text_list  = torch.cat(text_list)
    offsets    = torch.tensor(offsets[:-1]).cumsum(dim=0)  # cumulative sum along dim -> start position of each sample
    return label_list.to(device), text_list.to(device), offsets.to(device)

# data loader
dataloader = DataLoader(train_iter,
                        batch_size=8,
                        shuffle=False,
                        collate_fn=collate_batch)
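Each batch is therefore a 1-D tensor of all token ids concatenated together, plus an offsets tensor marking where each sample starts; this is exactly the input format nn.EmbeddingBag expects. A quick inspection of one batch (a minimal sketch; the printed numbers will vary):

label, text, offsets = next(iter(dataloader))
print(label)          # 8 class indices, e.g. tensor([2, 2, 2, ...])
print(text.shape)     # total number of tokens across the 8 samples
print(offsets)        # 8 start positions, e.g. tensor([ 0, 29, 71, ...])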

5. Defining the Model

from torch import nn

class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super(TextClassificationModel, self).__init__()
        self.embedding = nn.EmbeddingBag(vocab_size,   # vocabulary size
                                         embed_dim,    # embedding dimension
                                         sparse=False)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        initrange = 0.5
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)
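nn.EmbeddingBag averages (by default) the embeddings of each sample's tokens directly from the flat text/offsets representation, so no padding is needed. A shape check with made-up numbers (hypothetical values, purely for illustration):

tmp_model = TextClassificationModel(vocab_size=100, embed_dim=16, num_class=4)
text    = torch.randint(0, 100, (13,))   # 13 token ids = two samples concatenated
offsets = torch.tensor([0, 5])           # sample 0 -> ids 0..4, sample 1 -> ids 5..12
print(tmp_model(text, offsets).shape)    # torch.Size([2, 4])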

6. Creating a Model Instance

num_class  = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
em_size     = 64
model      = TextClassificationModel(vocab_size, em_size, num_class).to(device)

7. Defining the Training and Evaluation Functions

import time

def train(dataloader):
    model.train()  # switch to training mode
    total_acc, train_loss, total_count = 0, 0, 0
    log_interval = 500
    start_time   = time.time()

    for idx, (label, text, offsets) in enumerate(dataloader):
        predicted_label = model(text, offsets)

        optimizer.zero_grad()                     # reset gradients
        loss = criterion(predicted_label, label)  # loss between prediction and ground-truth label
        loss.backward()                           # backpropagation
        optimizer.step()                          # update parameters

        # accumulate accuracy and loss
        total_acc   += (predicted_label.argmax(1) == label).sum().item()
        train_loss  += loss.item()
        total_count += label.size(0)

        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print('| epoch {:1d} | {:4d}/{:4d} batches '
                  '| train_acc {:4.3f} train_loss {:4.5f}'.format(epoch, idx, len(dataloader),
                                                                  total_acc/total_count, train_loss/total_count))
            total_acc, train_loss, total_count = 0, 0, 0
            start_time = time.time()

def evaluate(dataloader):
    model.eval()  # switch to evaluation mode
    total_acc, train_loss, total_count = 0, 0, 0

    with torch.no_grad():
        for idx, (label, text, offsets) in enumerate(dataloader):
            predicted_label = model(text, offsets)
            loss = criterion(predicted_label, label)  # compute the loss

            # accumulate metrics
            total_acc   += (predicted_label.argmax(1) == label).sum().item()
            train_loss  += loss.item()
            total_count += label.size(0)

    return total_acc/total_count, train_loss/total_count
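The driver code that produces the logs below is not included above. A minimal sketch of what it could look like, assuming hyperparameters in the spirit of the official torchtext tutorial (BATCH_SIZE=64, LR=5, SGD with StepLR, a 95/5 train/validation split, 10 epochs); it also builds the test_dataloader used in the final section:

from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset

EPOCHS     = 10   # number of epochs (assumed; matches the log below)
LR         = 5    # initial learning rate (assumed)
BATCH_SIZE = 64   # batch size (assumed; 120000*0.95/64 ≈ 1782 batches, as in the log)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)

# Re-create the raw iterators and convert them to map-style datasets
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset  = to_map_style_dataset(test_iter)

# 95% training / 5% validation split
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = random_split(train_dataset,
                                          [num_train, len(train_dataset) - num_train])

train_dataloader = DataLoader(split_train_, batch_size=BATCH_SIZE,
                              shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid_, batch_size=BATCH_SIZE,
                              shuffle=True, collate_fn=collate_batch)
test_dataloader  = DataLoader(test_dataset, batch_size=BATCH_SIZE,
                              shuffle=True, collate_fn=collate_batch)

for epoch in range(1, EPOCHS + 1):
    epoch_start_time = time.time()
    train(train_dataloader)
    val_acc, val_loss = evaluate(valid_dataloader)
    scheduler.step()
    print('-' * 69)
    print('| epoch {:1d} | time:{:4.2f}s | '
          'valid_acc {:4.3f} | valid_loss {:4.3f}'.format(
              epoch, time.time() - epoch_start_time, val_acc, val_loss))
    print('-' * 69)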

Output:

| epoch 1 | 500/1782 batches| train_acc 0.901 train_loss 0.00458
| epoch 1 | 1000/1782 batches| train_acc 0.905 train_loss 0.00438
| epoch 1 | 1500/1782 batches| train_acc 0.908 train_loss 0.00437
---------------------------------------------------------------------
| epoch 1 | time:6.30s |valid_acc 0.907 | valid_loss 0.004
---------------------------------------------------------------------
| epoch 2 | 500/1782 batches| train_acc 0.917 train_loss 0.00381
| epoch 2 | 1000/1782 batches| train_acc 0.917 train_loss 0.00383
| epoch 2 | 1500/1782 batches| train_acc 0.917 train_loss 0.00386
---------------------------------------------------------------------
| epoch 2 | time:6.26s |valid_acc 0.911 | valid_loss 0.004
---------------------------------------------------------------------
| epoch 3 | 500/1782 batches| train_acc 0.929 train_loss 0.00330
| epoch 3 | 1000/1782 batches| train_acc 0.927 train_loss 0.00340
| epoch 3 | 1500/1782 batches| train_acc 0.923 train_loss 0.00354
---------------------------------------------------------------------
| epoch 3 | time:6.21s |valid_acc 0.935 | valid_loss 0.003
---------------------------------------------------------------------
| epoch 4 | 500/1782 batches| train_acc 0.933 train_loss 0.00306
| epoch 4 | 1000/1782 batches| train_acc 0.932 train_loss 0.00311
| epoch 4 | 1500/1782 batches| train_acc 0.929 train_loss 0.00318
---------------------------------------------------------------------
| epoch 4 | time:6.22s |valid_acc 0.916 | valid_loss 0.003
---------------------------------------------------------------------
| epoch 5 | 500/1782 batches| train_acc 0.948 train_loss 0.00253
| epoch 5 | 1000/1782 batches| train_acc 0.949 train_loss 0.00242
| epoch 5 | 1500/1782 batches| train_acc 0.951 train_loss 0.00238
---------------------------------------------------------------------
| epoch 5 | time:6.23s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 6 | 500/1782 batches| train_acc 0.951 train_loss 0.00241
| epoch 6 | 1000/1782 batches| train_acc 0.952 train_loss 0.00236
| epoch 6 | 1500/1782 batches| train_acc 0.952 train_loss 0.00235
---------------------------------------------------------------------
| epoch 6 | time:6.26s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 7 | 500/1782 batches| train_acc 0.954 train_loss 0.00228
| epoch 7 | 1000/1782 batches| train_acc 0.951 train_loss 0.00238
| epoch 7 | 1500/1782 batches| train_acc 0.954 train_loss 0.00228
---------------------------------------------------------------------
| epoch 7 | time:6.26s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 8 | 500/1782 batches| train_acc 0.953 train_loss 0.00227
| epoch 8 | 1000/1782 batches| train_acc 0.955 train_loss 0.00224
| epoch 8 | 1500/1782 batches| train_acc 0.954 train_loss 0.00224
---------------------------------------------------------------------
| epoch 8 | time:6.32s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 9 | 500/1782 batches| train_acc 0.955 train_loss 0.00218
| epoch 9 | 1000/1782 batches| train_acc 0.953 train_loss 0.00227
| epoch 9 | 1500/1782 batches| train_acc 0.955 train_loss 0.00227
---------------------------------------------------------------------
| epoch 9 | time:6.24s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 10 | 500/1782 batches| train_acc 0.952 train_loss 0.00229
| epoch 10 | 1000/1782 batches| train_acc 0.955 train_loss 0.00220
| epoch 10 | 1500/1782 batches| train_acc 0.956 train_loss 0.00220
---------------------------------------------------------------------
| epoch 10 | time:6.29s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------

8. Evaluating on the Test Set

print('Checking the results of test dataset.')
test_acc, test_loss = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(test_acc))

Output:

Checking the results of test dataset.
test accuracy    0.905

Summary:

  1. Pretrained word vectors: using pretrained embeddings such as GloVe or FastText can noticeably improve performance (see the sketch after this list)

  2. Regularization: dropout, weight decay, and similar techniques help prevent overfitting

  3. Hyperparameter tuning: the learning rate, batch size, and embedding/hidden dimensions have a large impact on model performance

  4. Transfer learning: for small datasets, consider fine-tuning a pretrained model such as BERT
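As an illustration of the first point, one way to plug pretrained vectors into this model is to copy GloVe embeddings into the EmbeddingBag weight. A minimal sketch, assuming torchtext's bundled GloVe vectors (downloaded on first use) and an embedding size of 100 to match them:

from torchtext.vocab import GloVe

glove = GloVe(name='6B', dim=100)   # 100-d GloVe vectors trained on 6B tokens

# Look up a vector for every word in our vocabulary (out-of-vocabulary words get zero vectors)
pretrained = glove.get_vecs_by_tokens(vocab.get_itos())   # shape: (len(vocab), 100)

# Rebuild the model with embed_dim=100 and copy the pretrained weights in
model = TextClassificationModel(len(vocab), 100, num_class).to(device)
model.embedding.weight.data.copy_(pretrained.to(device))
# optionally freeze the embeddings: model.embedding.weight.requires_grad_(False)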

