Day 34 Check-in: Training an MLP Neural Network
1. Installing PyTorch and CUDA
2. Command-line command for checking GPU information (run in cmd)
3. Checking that CUDA is available (items 1-3 are sketched right after this list)
4. Workflow of a simple neural network
a. Data preprocessing (normalization, conversion to tensors)
b. Model definition
i. Inherit from the nn.Module class
ii. Define each layer
iii. Define the forward pass
c. Define the loss function and optimizer
d. Define the training loop
e. Visualize the loss curve
Today's homework: be able to write the code by hand.
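For items 1-3, a minimal sketch of the standard checks. The install line is an example only (the exact command depends on your CUDA version; pick it from pytorch.org); nvidia-smi and the torch.cuda calls are the standard tools:
# Item 1 (example install command; choose the right one for your CUDA version):
#   pip install torch
# Item 2, in cmd:
#   nvidia-smi
# Item 3, in Python:
import torch
print(torch.__version__)                  # installed PyTorch version
print(torch.cuda.is_available())          # True if PyTorch can see a CUDA GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of GPU 0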
First, data preparation: load the dataset, split it into train and test sets, print the shapes, normalize, and convert the arrays to tensors.
import torch
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Load the iris dataset (4 features, 3 classes)
iris = load_iris()
X = iris.data
y = iris.target

# Split into train and test sets (80% train / 20% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)

# Normalize: fit the scaler on the training set only, then apply it to both sets
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Convert to PyTorch tensors (float features, long-integer class labels)
X_train = torch.FloatTensor(X_train)
y_train = torch.LongTensor(y_train)
X_test = torch.FloatTensor(X_test)
y_test = torch.LongTensor(y_test)
Step 2, model architecture: define the model's forward logic and instantiate it.
import torch
import torch.nn as nn
import torch.optim as optim

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(4, 10)   # input layer: 4 features -> 10 hidden units
        self.relu = nn.ReLU()         # nonlinear activation
        self.fc2 = nn.Linear(10, 3)   # hidden layer -> 3 output classes

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = MLP()
Step 3, model training: set the number of epochs, run the forward pass, compute and record the loss, backpropagate and step the optimizer, and print progress.
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

num_epochs = 20000
losses = []
for epoch in range(num_epochs):
    outputs = model(X_train)              # forward pass
    loss = criterion(outputs, y_train)    # compute loss
    optimizer.zero_grad()                 # clear old gradients
    loss.backward()                       # backward pass
    optimizer.step()                      # update parameters
    losses.append(loss.item())            # record loss for plotting
    if (epoch + 1) % 100 == 0:
        print(f'Epoch [{epoch + 1}/{num_epochs}], Loss: {loss.item():.4f}')
Finally, visualize the training loss:
import matplotlib.pyplot as plt
# Visualize the loss curve
plt.plot(range(num_epochs), losses)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training Loss over Epochs')
plt.show()
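As a minimal follow-up sketch, the test tensors prepared in step one (and not used during training) can be used to check the trained model's accuracy:
# Evaluate on the held-out test set; gradients are not needed at inference time
model.eval()
with torch.no_grad():
    test_outputs = model(X_test)
    predicted = torch.argmax(test_outputs, dim=1)    # predicted class per sample
    accuracy = (predicted == y_test).float().mean().item()
print(f'Test accuracy: {accuracy:.4f}')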
@浙大疏锦行