Day 36 Training
- Building a Simple Neural Network with Python and PyTorch: Credit Default Prediction
- Project Overview
- Data Preprocessing
- Importing Required Libraries
- Data Loading and Preprocessing
- Feature Engineering
- Preparing the Data
- Building the Neural Network Model
- Defining the Loss Function and Optimizer
- Training the Model
- Visualizing the Training Process
Building a Simple Neural Network with Python and PyTorch: Credit Default Prediction
In this post, I share how to build a simple neural network model with Python and PyTorch to predict credit default. The process covers data preprocessing, feature engineering, model building, training, and evaluation.
Project Overview
The goal of this project is to train a neural network on the given dataset to predict whether an individual will default on their credit. The dataset contains a variety of features, such as annual income, credit score, and current loan amount, along with the target variable "Credit Default".
Data Preprocessing
Importing Required Libraries
First, I imported the following libraries:
- pandas: for data manipulation and analysis
- numpy: for numerical computation
- sklearn: for data preprocessing and model evaluation
- matplotlib: for data visualization
- torch: for building and training the neural network
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
Data Loading and Preprocessing
I loaded the dataset and filled in the missing values: numeric features were filled with the median, categorical features with the mode.
df = pd.read_csv('d:/python打卡/python60-days-challenge/data.csv')
# Fill missing values: median for numeric columns, mode for categorical columns
for col in df.columns:
    if df[col].isnull().any():
        if df[col].dtype in ['int64', 'float64']:
            fill_value = df[col].median()
        else:
            fill_value = df[col].mode()[0]
        # Assign back instead of chained inplace fillna, which newer pandas may ignore
        df[col] = df[col].fillna(fill_value)
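As a quick sanity check (a small sketch that was not part of the original walkthrough), you can confirm that no missing values remain after the fill:

# Sketch: total count of remaining missing values across all columns
print(df.isnull().sum().sum())  # expected to print 0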
Feature Engineering
Next came feature engineering: I split the columns into numeric and categorical features and applied standardization and one-hot encoding, respectively.
categorical_features = ['Home Ownership', 'Purpose', 'Term']
numeric_features = [
    'Annual Income', 'Tax Liens', 'Number of Open Accounts',
    'Years of Credit History', 'Maximum Open Credit',
    'Number of Credit Problems', 'Months since last delinquent',
    'Bankruptcies', 'Current Loan Amount', 'Current Credit Balance',
    'Monthly Debt', 'Credit Score'
]
preprocessor = ColumnTransformer(
    transformers=[
        ('num', StandardScaler(), numeric_features),
        ('cat', OneHotEncoder(handle_unknown='ignore'), categorical_features)
    ]
)
Preparing the Data
I split the dataset into training and test sets and converted everything to PyTorch tensors for model training.
X = df.drop('Credit Default', axis=1)
y = df['Credit Default']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

X_processed = preprocessor.fit_transform(X_train)
X_test_processed = preprocessor.transform(X_test)

X_train_tensor = torch.tensor(X_processed, dtype=torch.float32)
y_train_tensor = torch.tensor(y_train.values, dtype=torch.float32).view(-1, 1)
X_test_tensor = torch.tensor(X_test_processed, dtype=torch.float32)
y_test_tensor = torch.tensor(y_test.values, dtype=torch.float32).view(-1, 1)

train_dataset = TensorDataset(X_train_tensor, y_train_tensor)
test_dataset = TensorDataset(X_test_tensor, y_test_tensor)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32)
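As a quick check (a minimal sketch, assuming scikit-learn >= 1.0 so that get_feature_names_out is available), you can inspect how many columns the preprocessor produces; this is the input dimension the network will need:

# Sketch: inspect the transformed feature space after scaling + one-hot encoding
feature_names = preprocessor.get_feature_names_out()
print(len(feature_names))    # number of columns after preprocessing
print(X_train_tensor.shape)  # should be (n_train_samples, len(feature_names))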
Building the Neural Network Model
I defined a simple multilayer perceptron (MLP) with an input layer, one hidden layer, and an output layer.
class MLP(nn.Module):
    def __init__(self, input_dim):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(input_dim, 10)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(10, 1)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

input_dim = X_train_tensor.shape[1]
model = MLP(input_dim).to(device)
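To see the layer sizes concretely, printing the model and counting its trainable parameters is a quick check (a small sketch, not from the original post):

# Sketch: inspect the architecture and parameter count
print(model)  # fc1: Linear(input_dim -> 10), ReLU, fc2: Linear(10 -> 1)
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(n_params)  # = input_dim * 10 + 10 (fc1) + 10 * 1 + 1 (fc2)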
Defining the Loss Function and Optimizer
I chose the binary cross-entropy loss with logits (BCEWithLogitsLoss) and the Adam optimizer. Because BCEWithLogitsLoss applies the sigmoid internally, the model outputs raw logits rather than probabilities.
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
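A minimal sketch (not part of the original pipeline) illustrating that BCEWithLogitsLoss on raw logits matches BCELoss applied after a sigmoid, which is why the model's forward pass does not need a sigmoid of its own:

# Sketch: BCEWithLogitsLoss(logits, y) equals BCELoss(sigmoid(logits), y)
logits = torch.tensor([[0.3], [-1.2]])
targets = torch.tensor([[1.0], [0.0]])
loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)
loss_manual = nn.BCELoss()(torch.sigmoid(logits), targets)
print(torch.allclose(loss_with_logits, loss_manual))  # True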
Training the Model
I trained the model for 50 epochs and recorded the loss and accuracy on both the training and validation sets.
train_losses = []
val_losses = []
train_accs = []
val_accs = []

for epoch in range(50):
    # Training phase
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        # Outputs are logits, so apply a sigmoid before thresholding at 0.5
        predicted = (torch.sigmoid(outputs) > 0.5).float()
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    train_loss = running_loss / len(train_loader)
    train_acc = correct / total
    train_losses.append(train_loss)
    train_accs.append(train_acc)

    # Validation phase
    model.eval()
    val_loss = 0.0
    correct = 0
    total = 0
    with torch.no_grad():
        for inputs, labels in test_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, labels)

            val_loss += loss.item()
            predicted = (torch.sigmoid(outputs) > 0.5).float()
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    val_loss = val_loss / len(test_loader)
    val_acc = correct / total
    val_losses.append(val_loss)
    val_accs.append(val_acc)

    print(f'Epoch {epoch+1}: Train Loss: {train_loss:.4f}, Val Loss: {val_loss:.4f}, '
          f'Train Acc: {train_acc:.4f}, Val Acc: {val_acc:.4f}')
Visualizing the Training Process
Finally, I plotted the training and validation accuracy and loss curves to assess the model's performance.
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(train_accs, label='Train Accuracy')
plt.plot(val_accs, label='Val Accuracy')
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(train_losses, label='Train Loss')
plt.plot(val_losses, label='Val Loss')
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend()

plt.tight_layout()
plt.show()
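Beyond accuracy, a fuller evaluation on the held-out test set would report precision and recall, which matter for an imbalanced target such as credit default. A minimal sketch of this (not in the original code, and assuming the trained model, X_test_tensor, and y_test defined above):

from sklearn.metrics import classification_report

# Sketch: per-class precision/recall/F1 on the test set
model.eval()
with torch.no_grad():
    test_logits = model(X_test_tensor.to(device))
    test_preds = (torch.sigmoid(test_logits) > 0.5).int().cpu().numpy().ravel()

print(classification_report(y_test.values, test_preds))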
浙大疏锦行