  • 🍨 This post is a study-log entry from the 🔗 365天深度学习训练营 (365-day deep learning training camp)
  • 🍖 Original author: K同学啊 | tutoring and custom projects available

一、My Environment

1. Language: Python 3.8

2. IDE: PyCharm

3. Deep learning environment:

  • torch==1.12.1+cu113
  • torchvision==0.13.1+cu113
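
To confirm the environment matches before running the code below (a hypothetical check, not part of the original post; torchtext 0.13.x is assumed here as the release matching torch 1.12.1):

import torch, torchvision, torchtext

print(torch.__version__)          # expect 1.12.1+cu113
print(torchvision.__version__)    # expect 0.13.1+cu113
print(torchtext.__version__)      # assumed 0.13.x to pair with torch 1.12
print(torch.cuda.is_available())  # True if the cu113 build sees a GPU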

二、Importing the Data

import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets
import os, PIL, pathlib, warnings

warnings.filterwarnings("ignore")             # suppress warnings

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

from torchtext.datasets import AG_NEWS

train_iter = AG_NEWS(split='train')      # load the AG News dataset
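
AG_NEWS yields (label, text) pairs, where the label is an integer 1-4 for World, Sports, Business, and Sci/Tech. A quick peek at one sample (illustrative, not in the original post; note that this consumes a sample from the iterator):

label, text = next(iter(train_iter))
print(label, text[:60])

# re-create the iterator afterwards so the sample is not skipped
train_iter = AG_NEWS(split='train')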

三、Building the Vocabulary

from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer('basic_english')  # returns a tokenizer function

def yield_tokens(data_iter):
    for _, text in data_iter:
        yield tokenizer(text)

vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])  # default index returned for out-of-vocabulary words

print(vocab(['here', 'is', 'an', 'example']))

Output: [475, 21, 30, 5297]

text_pipeline  = lambda x: vocab(tokenizer(x))
label_pipeline = lambda x: int(x) - 1

print(text_pipeline('here is the an example'))
Output: [475, 21, 2, 30, 5297]

print(label_pipeline('10'))
Output: 9

四、Generating Data Batches and Iterators

from torch.utils.data import DataLoader

def collate_batch(batch):
    label_list, text_list, offsets = [], [], [0]
    for (_label, _text) in batch:
        # collect labels
        label_list.append(label_pipeline(_label))
        # collect tokenized texts
        processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
        text_list.append(processed_text)
        # offsets: the number of tokens in each text
        offsets.append(processed_text.size(0))
    label_list = torch.tensor(label_list, dtype=torch.int64)
    text_list  = torch.cat(text_list)
    offsets    = torch.tensor(offsets[:-1]).cumsum(dim=0)  # cumulative sum of lengths, i.e. each text's start index
    return label_list.to(device), text_list.to(device), offsets.to(device)

# data loader
dataloader = DataLoader(train_iter,
                        batch_size=8,
                        shuffle=False,
                        collate_fn=collate_batch)
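
To see the shapes that collate_batch produces, one can pull a single batch (an illustrative check, not in the original post):

labels, texts, offsets = next(iter(dataloader))
print(labels.shape)  # torch.Size([8])  -> one label per sample
print(texts.shape)   # 1-D tensor: all 8 token sequences concatenated end to end
print(offsets)       # start index of each sequence inside `texts`

Note that this dataloader wraps the iterable train_iter, so pulling batches here consumes samples; re-create train_iter before reusing it.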

五、Defining the Model

from torch import nn

class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super(TextClassificationModel, self).__init__()
        self.embedding = nn.EmbeddingBag(vocab_size,   # vocabulary size
                                         embed_dim,    # embedding dimension
                                         sparse=False)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        initrange = 0.5
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)
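
nn.EmbeddingBag with its default mode='mean' averages the embeddings of each sequence's tokens directly from the flat token tensor plus offsets, so variable-length texts map to fixed-size vectors without padding. A toy example (illustrative values, not from the original post):

import torch
from torch import nn

bag     = nn.EmbeddingBag(10, 3)  # vocabulary of 10 words, embedding dim 3
tokens  = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9], dtype=torch.int64)
offsets = torch.tensor([0, 4], dtype=torch.int64)  # two sequences: tokens[0:4] and tokens[4:]
print(bag(tokens, offsets).shape)  # torch.Size([2, 3]) -> one mean vector per sequence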

六、Instantiating the Model

num_class  = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
em_size    = 64
model      = TextClassificationModel(vocab_size, em_size, num_class).to(device)
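
A quick sanity check at this point (illustrative, not from the original post) is to confirm the class count and the model size:

print(num_class)    # 4 (World, Sports, Business, Sci/Tech)
print(vocab_size)   # size of the vocabulary built above
print(sum(p.numel() for p in model.parameters()))  # dominated by the vocab_size x 64 embedding table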

七、Defining the Training and Evaluation Functions

import time

def train(dataloader):
    model.train()  # switch to training mode
    total_acc, train_loss, total_count = 0, 0, 0
    log_interval = 500
    start_time   = time.time()

    for idx, (label, text, offsets) in enumerate(dataloader):
        predicted_label = model(text, offsets)

        optimizer.zero_grad()                    # reset gradients
        loss = criterion(predicted_label, label) # loss between prediction and ground truth
        loss.backward()                          # backpropagation
        optimizer.step()                         # update the parameters

        # accumulate accuracy and loss
        total_acc   += (predicted_label.argmax(1) == label).sum().item()
        train_loss  += loss.item()
        total_count += label.size(0)

        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print('| epoch {:1d} | {:4d}/{:4d} batches '
                  '| train_acc {:4.3f} train_loss {:4.5f}'.format(epoch, idx, len(dataloader),
                                                                  total_acc/total_count,
                                                                  train_loss/total_count))
            total_acc, train_loss, total_count = 0, 0, 0
            start_time = time.time()

def evaluate(dataloader):
    model.eval()  # switch to evaluation mode
    total_acc, train_loss, total_count = 0, 0, 0

    with torch.no_grad():
        for idx, (label, text, offsets) in enumerate(dataloader):
            predicted_label = model(text, offsets)
            loss = criterion(predicted_label, label)  # compute the loss

            # accumulate metrics
            total_acc   += (predicted_label.argmax(1) == label).sum().item()
            train_loss  += loss.item()
            total_count += label.size(0)

    return total_acc/total_count, train_loss/total_count
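
The cell that splits off a validation set, defines the loss, optimizer, and scheduler, and drives the epochs is missing from this copy of the post. Below is a hedged reconstruction consistent with the log that follows (1782 train batches per epoch matches a 95/5 split of AG News's 120,000 training samples at batch size 64); the hyperparameter names and values follow the standard torchtext tutorial and may differ from the author's exact code:

from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset

EPOCHS     = 10
LR         = 5
BATCH_SIZE = 64

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.1)

# reload the iterators and convert them to map-style datasets
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset  = to_map_style_dataset(test_iter)

# 95/5 train/validation split: 114,000 / 64 ≈ 1782 batches, matching the log below
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = random_split(train_dataset,
                                          [num_train, len(train_dataset) - num_train])

train_dataloader = DataLoader(split_train_, batch_size=BATCH_SIZE,
                              shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid_, batch_size=BATCH_SIZE,
                              shuffle=True, collate_fn=collate_batch)
test_dataloader  = DataLoader(test_dataset, batch_size=BATCH_SIZE,
                              shuffle=True, collate_fn=collate_batch)

for epoch in range(1, EPOCHS + 1):
    epoch_start_time = time.time()
    train(train_dataloader)
    val_acc, val_loss = evaluate(valid_dataloader)
    scheduler.step()
    print('-' * 69)
    print('| epoch {:1d} | time:{:4.2f}s |valid_acc {:4.3f} | valid_loss {:4.3f}'.format(
        epoch, time.time() - epoch_start_time, val_acc, val_loss))
    print('-' * 69)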

Output:

| epoch 1 | 500/1782 batches| train_acc 0.901 train_loss 0.00458
| epoch 1 | 1000/1782 batches| train_acc 0.905 train_loss 0.00438
| epoch 1 | 1500/1782 batches| train_acc 0.908 train_loss 0.00437
---------------------------------------------------------------------
| epoch 1 | time:6.30s |valid_acc 0.907 | valid_loss 0.004
---------------------------------------------------------------------
| epoch 2 | 500/1782 batches| train_acc 0.917 train_loss 0.00381
| epoch 2 | 1000/1782 batches| train_acc 0.917 train_loss 0.00383
| epoch 2 | 1500/1782 batches| train_acc 0.917 train_loss 0.00386
---------------------------------------------------------------------
| epoch 2 | time:6.26s |valid_acc 0.911 | valid_loss 0.004
---------------------------------------------------------------------
| epoch 3 | 500/1782 batches| train_acc 0.929 train_loss 0.00330
| epoch 3 | 1000/1782 batches| train_acc 0.927 train_loss 0.00340
| epoch 3 | 1500/1782 batches| train_acc 0.923 train_loss 0.00354
---------------------------------------------------------------------
| epoch 3 | time:6.21s |valid_acc 0.935 | valid_loss 0.003
---------------------------------------------------------------------
| epoch 4 | 500/1782 batches| train_acc 0.933 train_loss 0.00306
| epoch 4 | 1000/1782 batches| train_acc 0.932 train_loss 0.00311
| epoch 4 | 1500/1782 batches| train_acc 0.929 train_loss 0.00318
---------------------------------------------------------------------
| epoch 4 | time:6.22s |valid_acc 0.916 | valid_loss 0.003
---------------------------------------------------------------------
| epoch 5 | 500/1782 batches| train_acc 0.948 train_loss 0.00253
| epoch 5 | 1000/1782 batches| train_acc 0.949 train_loss 0.00242
| epoch 5 | 1500/1782 batches| train_acc 0.951 train_loss 0.00238
---------------------------------------------------------------------
| epoch 5 | time:6.23s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 6 | 500/1782 batches| train_acc 0.951 train_loss 0.00241
| epoch 6 | 1000/1782 batches| train_acc 0.952 train_loss 0.00236
| epoch 6 | 1500/1782 batches| train_acc 0.952 train_loss 0.00235
---------------------------------------------------------------------
| epoch 6 | time:6.26s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 7 | 500/1782 batches| train_acc 0.954 train_loss 0.00228
| epoch 7 | 1000/1782 batches| train_acc 0.951 train_loss 0.00238
| epoch 7 | 1500/1782 batches| train_acc 0.954 train_loss 0.00228
---------------------------------------------------------------------
| epoch 7 | time:6.26s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 8 | 500/1782 batches| train_acc 0.953 train_loss 0.00227
| epoch 8 | 1000/1782 batches| train_acc 0.955 train_loss 0.00224
| epoch 8 | 1500/1782 batches| train_acc 0.954 train_loss 0.00224
---------------------------------------------------------------------
| epoch 8 | time:6.32s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 9 | 500/1782 batches| train_acc 0.955 train_loss 0.00218
| epoch 9 | 1000/1782 batches| train_acc 0.953 train_loss 0.00227
| epoch 9 | 1500/1782 batches| train_acc 0.955 train_loss 0.00227
---------------------------------------------------------------------
| epoch 9 | time:6.24s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 10 | 500/1782 batches| train_acc 0.952 train_loss 0.00229
| epoch 10 | 1000/1782 batches| train_acc 0.955 train_loss 0.00220
| epoch 10 | 1500/1782 batches| train_acc 0.956 train_loss 0.00220
---------------------------------------------------------------------
| epoch 10 | time:6.29s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------

八、Evaluating on the Test Set

print('Checking the results of test dataset.')
test_acc, test_loss = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(test_acc))

Output:

Checking the results of test dataset.
test accuracy    0.905
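
To classify a single piece of raw text with the trained model, one could reuse text_pipeline (a hedged sketch, not from the original section; ag_news_label and predict are hypothetical helpers, and the headline is made up):

ag_news_label = {1: "World", 2: "Sports", 3: "Business", 4: "Sci/Tech"}

def predict(text):
    with torch.no_grad():
        tokens  = torch.tensor(text_pipeline(text), dtype=torch.int64).to(device)
        offsets = torch.tensor([0]).to(device)  # single sequence starting at index 0
        output  = model(tokens, offsets)
        return output.argmax(1).item() + 1      # undo the label_pipeline shift

print(ag_news_label[predict("Stocks rallied as the central bank signalled a pause in rate hikes.")])
# e.g. -> Business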

Summary:

  1. Pretrained word vectors: pretrained embeddings such as GloVe or FastText can significantly improve performance.

  2. Regularization: techniques such as dropout and weight decay help prevent overfitting (see the sketch after this list).

  3. Hyperparameter tuning: the learning rate, batch size, and embedding/hidden dimensions all have a large impact on model performance.

  4. Transfer learning: for small datasets, consider fine-tuning a pretrained model such as BERT.
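
As a minimal illustration of points 2 and 3 (not from the original post; the class name, dropout rate, and weight-decay value here are hypothetical), dropout can be added after the EmbeddingBag layer and weight decay passed to the optimizer:

import torch
import torch.nn as nn

class TextClassifierWithDropout(nn.Module):  # hypothetical variant of the model above
    def __init__(self, vocab_size, embed_dim, num_class, dropout=0.5):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=False)
        self.dropout   = nn.Dropout(dropout)  # randomly zeroes features during training
        self.fc        = nn.Linear(embed_dim, num_class)

    def forward(self, text, offsets):
        embedded = self.dropout(self.embedding(text, offsets))
        return self.fc(embedded)

# weight decay adds an L2 penalty on the parameters (value is illustrative):
# optimizer = torch.optim.SGD(model.parameters(), lr=5, weight_decay=1e-4)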
