  • 🍨 This post is a learning-record entry from the 🔗 365天深度学习训练营 (365-day deep learning training camp)
  • 🍖 Original author: K同学啊

I. Implementation Approach

  1. Data preprocessing

    • Read and visualize the data, and normalize the target features.
    • Build input and target sequences: the data from the first 8 time steps form the input (X), and the temperature at the 9th time step is the target (y).
  2. Dataset split

    • Split the data into training and test sets and convert them to PyTorch tensors.
    • Use DataLoader to serve the datasets in batches during training.
  3. Model construction

    • Define a neural network with two LSTM layers and one fully connected layer.
    • The first LSTM layer has input size 3 (one per feature) and hidden size 320; the second LSTM layer has input size 320 and hidden size 320; the fully connected layer maps the hidden output to a single value.
  4. Training and testing

    • Define a training function that uses mean squared error (MSE) as the loss, stochastic gradient descent (SGD) as the optimizer, and a cosine-annealing learning-rate scheduler.
    • Define a test function to evaluate the model on the test set.
    • Train for multiple epochs, recording the training and test losses and stepping the learning rate at the end of each epoch.
  5. Evaluation

    • Plot the training and validation loss curves to assess training progress.
    • Run the model on the test set and inverse-normalize both predictions and ground truth so they can be compared on the original scale.
    • Plot predicted against true values and report the root mean square error (RMSE) and the R² score to assess predictive performance.
    • MSE (mean squared error) quantifies the gap between predictions and ground truth: it averages the squared differences between each prediction and its true value, so large individual errors are amplified, which makes MSE especially sensitive to points where the model predicts badly.
    • R² (the coefficient of determination) measures how much of the data's variance the model explains. It is a relative metric, typically between 0 and 1 (it can be negative when the model fits worse than simply predicting the mean); the closer to 1, the better the fit. Both metrics are written out in the formulas after this list.
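
For reference, the standard definitions of these metrics, where $y_i$ is the ground truth, $\hat{y}_i$ the prediction, and $\bar{y}$ the mean of the ground truth:

\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2,
\qquad
\mathrm{RMSE} = \sqrt{\mathrm{MSE}},
\qquad
R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}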

II. Preliminaries

1. Import Libraries

import torch.nn.functional as F
import numpy  as np
import pandas as pd
import torch
from torch import nn
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from torch.utils.data import TensorDataset, DataLoader
import copy
from sklearn import metrics
from datetime import datetime

2. Load and Visualize the Data

data = pd.read_csv("./data/woodpine2.csv")
plt.rcParams['savefig.dpi'] = 500  # saved-figure DPI
plt.rcParams['figure.dpi'] = 500   # display resolution
fig, ax = plt.subplots(1, 3, constrained_layout=True, figsize=(14, 3))
sns.lineplot(data=data["Tem1"], ax=ax[0])
sns.lineplot(data=data["CO 1"], ax=ax[1])
sns.lineplot(data=data["Soot 1"], ax=ax[2])
plt.show()
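
Before preprocessing, it can help to confirm the file layout. A quick hedged check (the column order, with a time column first, is an assumption based on how the next step drops column 0):

print(data.columns.tolist())  # expected something like ['Time', 'Tem1', 'CO 1', 'Soot 1']
print(data.shape)             # the row count determines how many windows can be built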

[Figure: line plots of Tem1, CO 1, and Soot 1 over time]

3. Data Preprocessing

dataFrame = data.iloc[:, 1:].copy()
sc = MinMaxScaler(feature_range=(0, 1))  # normalize each feature to the range [0, 1]
for i in ['CO 1', 'Soot 1', 'Tem1']:
    dataFrame[i] = sc.fit_transform(dataFrame[i].values.reshape(-1, 1))
# Note: sc is re-fitted on each column, so after the loop it holds Tem1's
# min/max, which is what the later inverse_transform of the temperature uses.

# Take the first 8 time steps of Tem1, CO 1, and Soot 1 as X,
# and the 9th time step's Tem1 as y.
width_X = 8
width_y = 1
X = []
y = []
in_start = 0
for _, _ in data.iterrows():
    in_end = in_start + width_X
    out_end = in_end + width_y
    if out_end < len(dataFrame):
        X_ = np.array(dataFrame.iloc[in_start:in_end, :])
        y_ = np.array(dataFrame.iloc[in_end:out_end, 0])
        X.append(X_)
        y.append(y_)
    in_start += 1
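
As a quick sanity check (a sketch added here; the exact window count depends on the length of woodpine2.csv), the lists built above should stack into these shapes:

print(np.array(X).shape)  # expected: (num_windows, 8, 3), i.e. 8 time steps x 3 features
print(np.array(y).shape)  # expected: (num_windows, 1), the 9th step's Tem1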

4. Split the Dataset

X = np.array(X)
y = np.array(y).reshape(-1,1,1)
X_train = torch.tensor(np.array(X[:5000]), dtype=torch.float32)
y_train = torch.tensor(np.array(y[:5000]), dtype=torch.float32)
X_test = torch.tensor(np.array(X[5000:]), dtype=torch.float32)
y_test = torch.tensor(np.array(y[5000:]), dtype=torch.float32)train_dl = DataLoader(TensorDataset(X_train, y_train),batch_size=64,shuffle=False)test_dl = DataLoader(TensorDataset(X_test, y_test),batch_size=64,shuffle=False)

III. Model Training

1. Build the Model

class model_lstm(nn.Module):
    def __init__(self):
        super(model_lstm, self).__init__()
        self.lstm0 = nn.LSTM(input_size=3, hidden_size=320, num_layers=1, batch_first=True)
        self.lstm1 = nn.LSTM(input_size=320, hidden_size=320, num_layers=1, batch_first=True)
        self.fc0 = nn.Linear(320, 1)

    def forward(self, x):
        out, hidden1 = self.lstm0(x)
        out, _ = self.lstm1(out, hidden1)
        out = self.fc0(out)
        # Keep only the last time step; otherwise the LSTM yields one
        # prediction for each of the 8 input steps.
        return out[:, -1:, :]

model = model_lstm()
print(model)

model_lstm(
  (lstm0): LSTM(3, 320, batch_first=True)
  (lstm1): LSTM(320, 320, batch_first=True)
  (fc0): Linear(in_features=320, out_features=1, bias=True)
)
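
A minimal shape check (a sketch using a random dummy batch, not part of the original code) confirms the forward pass returns exactly one value per window:

dummy = torch.randn(2, 8, 3)  # (batch, seq_len=8, features=3)
with torch.no_grad():
    print(model(dummy).shape)  # torch.Size([2, 1, 1]), matching the shape of y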

2. Training Function

def train(train_dl, model, loss_fn, opt, lr_scheduler=None):
    size = len(train_dl.dataset)
    num_batches = len(train_dl)
    train_loss = 0  # accumulated training loss
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)

        # Compute the prediction error
        pred = model(x)          # network output
        loss = loss_fn(pred, y)  # gap between the output and the ground truth

        # Backpropagation
        opt.zero_grad()   # zero out the gradients
        loss.backward()   # backward pass
        opt.step()        # update the parameters

        # Record the loss
        train_loss += loss.item()
    if lr_scheduler is not None:
        lr_scheduler.step()  # one scheduler step per epoch
        print("learning rate = {:.5f}".format(opt.param_groups[0]['lr']), end="  ")
    train_loss /= num_batches
    return train_loss

3. Test Function

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)  # size of the test set
    num_batches = len(dataloader)   # number of batches
    test_loss = 0
    # No training here, so disable gradient tracking to save memory and compute
    with torch.no_grad():
        for x, y in dataloader:
            x, y = x.to(device), y.to(device)

            # Compute the loss
            y_pred = model(x)
            loss = loss_fn(y_pred, y)
            test_loss += loss.item()
    test_loss /= num_batches
    return test_loss

4. Train the Model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model_lstm()
model = model.to(device)
loss_fn = nn.MSELoss()  # loss function
learn_rate = 1e-1       # initial learning rate
opt = torch.optim.SGD(model.parameters(), lr=learn_rate, weight_decay=1e-4)
epochs = 50
train_loss = []
test_loss = []
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(opt, epochs, last_epoch=-1)

for epoch in range(epochs):
    model.train()
    epoch_train_loss = train(train_dl, model, loss_fn, opt, lr_scheduler)

    model.eval()
    epoch_test_loss = test(test_dl, model, loss_fn)

    train_loss.append(epoch_train_loss)
    test_loss.append(epoch_test_loss)
    template = 'Epoch:{:2d}, Train_loss:{:.5f}, Test_loss:{:.5f}'
    print(template.format(epoch + 1, epoch_train_loss, epoch_test_loss))
print("=" * 20, 'Done', "=" * 20)
learning rate = 0.09990 Epoch: 1, Train_loss:0.00120, Test_loss:0.00318
learning rate = 0.09961 Epoch: 2, Train_loss:0.01379, Test_loss:0.00306
learning rate = 0.09911 Epoch: 3, Train_loss:0.01342, Test_loss:0.00294
learning rate = 0.09843 Epoch: 4, Train_loss:0.01302, Test_loss:0.00281
learning rate = 0.09755 Epoch: 5, Train_loss:0.01256, Test_loss:0.00267
learning rate = 0.09649 Epoch: 6, Train_loss:0.01202, Test_loss:0.00252
learning rate = 0.09524 Epoch: 7, Train_loss:0.01141, Test_loss:0.00235
learning rate = 0.09382 Epoch: 8, Train_loss:0.01070, Test_loss:0.00217
learning rate = 0.09222 Epoch: 9, Train_loss:0.00989, Test_loss:0.00197
learning rate = 0.09045 Epoch:10, Train_loss:0.00898, Test_loss:0.00177
learning rate = 0.08853 Epoch:11, Train_loss:0.00800, Test_loss:0.00155
learning rate = 0.08645 Epoch:12, Train_loss:0.00697, Test_loss:0.00133
learning rate = 0.08423 Epoch:13, Train_loss:0.00592, Test_loss:0.00112
learning rate = 0.08187 Epoch:14, Train_loss:0.00489, Test_loss:0.00092
learning rate = 0.07939 Epoch:15, Train_loss:0.00394, Test_loss:0.00074
learning rate = 0.07679 Epoch:16, Train_loss:0.00309, Test_loss:0.00058
learning rate = 0.07409 Epoch:17, Train_loss:0.00236, Test_loss:0.00045
learning rate = 0.07129 Epoch:18, Train_loss:0.00177, Test_loss:0.00034
learning rate = 0.06841 Epoch:19, Train_loss:0.00131, Test_loss:0.00026
learning rate = 0.06545 Epoch:20, Train_loss:0.00096, Test_loss:0.00020
learning rate = 0.06243 Epoch:21, Train_loss:0.00070, Test_loss:0.00015
learning rate = 0.05937 Epoch:22, Train_loss:0.00052, Test_loss:0.00012
learning rate = 0.05627 Epoch:23, Train_loss:0.00039, Test_loss:0.00009
learning rate = 0.05314 Epoch:24, Train_loss:0.00030, Test_loss:0.00007
learning rate = 0.05000 Epoch:25, Train_loss:0.00024, Test_loss:0.00006
learning rate = 0.04686 Epoch:26, Train_loss:0.00020, Test_loss:0.00005
learning rate = 0.04373 Epoch:27, Train_loss:0.00017, Test_loss:0.00004
learning rate = 0.04063 Epoch:28, Train_loss:0.00015, Test_loss:0.00004
learning rate = 0.03757 Epoch:29, Train_loss:0.00013, Test_loss:0.00003
learning rate = 0.03455 Epoch:30, Train_loss:0.00013, Test_loss:0.00003
learning rate = 0.03159 Epoch:31, Train_loss:0.00012, Test_loss:0.00003
learning rate = 0.02871 Epoch:32, Train_loss:0.00012, Test_loss:0.00002
learning rate = 0.02591 Epoch:33, Train_loss:0.00011, Test_loss:0.00002
learning rate = 0.02321 Epoch:34, Train_loss:0.00011, Test_loss:0.00002
learning rate = 0.02061 Epoch:35, Train_loss:0.00011, Test_loss:0.00002
learning rate = 0.01813 Epoch:36, Train_loss:0.00012, Test_loss:0.00002
learning rate = 0.01577 Epoch:37, Train_loss:0.00012, Test_loss:0.00002
learning rate = 0.01355 Epoch:38, Train_loss:0.00012, Test_loss:0.00002
learning rate = 0.01147 Epoch:39, Train_loss:0.00013, Test_loss:0.00002
learning rate = 0.00955 Epoch:40, Train_loss:0.00013, Test_loss:0.00002
learning rate = 0.00778 Epoch:41, Train_loss:0.00013, Test_loss:0.00002
learning rate = 0.00618 Epoch:42, Train_loss:0.00014, Test_loss:0.00003
learning rate = 0.00476 Epoch:43, Train_loss:0.00014, Test_loss:0.00003
learning rate = 0.00351 Epoch:44, Train_loss:0.00014, Test_loss:0.00003
learning rate = 0.00245 Epoch:45, Train_loss:0.00014, Test_loss:0.00003
learning rate = 0.00157 Epoch:46, Train_loss:0.00014, Test_loss:0.00003
learning rate = 0.00089 Epoch:47, Train_loss:0.00014, Test_loss:0.00003
learning rate = 0.00039 Epoch:48, Train_loss:0.00014, Test_loss:0.00003
learning rate = 0.00010 Epoch:49, Train_loss:0.00014, Test_loss:0.00003
learning rate = 0.00000 Epoch:50, Train_loss:0.00014, Test_loss:0.00003
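
The logged learning rates follow the closed form of cosine annealing, eta_t = eta_min + (eta_max - eta_min) * (1 + cos(pi * t / T_max)) / 2. A small sketch (added here for verification, not part of the original code) reproduces three of the logged values:

import math

# cosine annealing with eta_max = 0.1, eta_min = 0, T_max = 50 epochs
for t in [1, 25, 50]:
    print("{:.5f}".format(0.1 * (1 + math.cos(math.pi * t / 50)) / 2))
# prints 0.09990, 0.05000, 0.00000: the rates logged at epochs 1, 25, and 50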

IV. Model Evaluation

1. Loss Curves

current_time = datetime.now()  # current timestamp
plt.figure(figsize=(5, 3), dpi=120)
plt.plot(train_loss, label='LSTM Training Loss')
plt.plot(test_loss, label='LSTM Validation Loss')
plt.title('Training and Validation Loss')
plt.xlabel(current_time)  # include a timestamp when checking in; screenshots without it are not accepted
plt.legend()
plt.show()

[Figure: training and validation loss curves]

2. Predict on the Test Set

predicted_y_lstm = sc.inverse_transform(model(X_test.to(device)).detach().cpu().numpy().reshape(-1, 1))  # predict on the test set
y_test_1 = sc.inverse_transform(y_test.numpy().reshape(-1, 1))
y_test_one = [i[0] for i in y_test_1]
predicted_y_lstm_one = [i[0] for i in predicted_y_lstm]

plt.figure(figsize=(5, 3), dpi=120)
# Plot the real data against the predictions
plt.plot(y_test_one[:2000], color='red', label='real_temp')
plt.plot(predicted_y_lstm_one[:2000], color='blue', label='prediction')
plt.title('Real vs. Predicted Temperature')
plt.xlabel('Time step')
plt.ylabel('Tem1')
plt.legend()
plt.show()
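
Since sc was last fitted on Tem1 in the preprocessing loop, inverse_transform here maps scaled values back to temperatures. A minimal sketch of what it computes for feature_range=(0, 1), using MinMaxScaler's fitted data_min_/data_max_ attributes:

scaled = y_test.numpy().reshape(-1, 1)
manual = scaled * (sc.data_max_ - sc.data_min_) + sc.data_min_  # undo (x - min) / (max - min)
assert np.allclose(manual, sc.inverse_transform(scaled))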

[Figure: real vs. predicted temperature on the first 2000 test points]

3. RMSE and R² Score

RMSE_lstm = metrics.mean_squared_error(predicted_y_lstm_one, y_test_1) ** 0.5
# Note: metrics.r2_score expects (y_true, y_pred); the arguments here are in
# the reverse order, which affects the reported R² value.
R2_lstm = metrics.r2_score(predicted_y_lstm_one, y_test_1)
print('RMSE: %.5f' % RMSE_lstm)
print('R2: %.5f' % R2_lstm)

RMSE: 7.19289
R2: 0.81598
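
For comparison, a sketch with the conventional (y_true, y_pred) argument order; squared=False is assumed available (scikit-learn >= 0.22; very recent versions expose metrics.root_mean_squared_error instead). RMSE is symmetric in its arguments and so is unchanged, but R² may shift because swapping the arguments changes what counts as ground truth:

rmse = metrics.mean_squared_error(y_test_1, predicted_y_lstm_one, squared=False)  # RMSE directly
r2 = metrics.r2_score(y_test_1, predicted_y_lstm_one)  # ground truth first
print('RMSE: %.5f, R2: %.5f' % (rmse, r2))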
