A Practical Guide to Deep Learning with Keras in Python
Our code examples are short (fewer than 300 lines of code) and focus on vertical deep learning workflows.
All of our examples are written as Jupyter notebooks and can be run with one click in Google Colab, a hosted notebook environment that requires no setup and runs in the cloud. Google Colab includes GPU and TPU runtimes.
Image Classification
Image Classification from Scratch
Steps for Image Classification with Keras
Install the necessary libraries; make sure TensorFlow and Keras are installed.
pip install tensorflow keras numpy matplotlib
import os
import numpy as np
import keras
from keras import layers
from tensorflow import data as tf_data
import matplotlib.pyplot as plt
Load and preprocess the dataset, using CIFAR-10 as an example.
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
One-hot encode the labels.
from keras.utils import to_categorical
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
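If to_categorical is unfamiliar, its effect can be sketched in plain NumPy (the one_hot helper below is hypothetical, shown only to illustrate the encoding):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Map integer class labels to one-hot row vectors."""
    labels = np.asarray(labels).reshape(-1)
    encoded = np.zeros((labels.shape[0], num_classes), dtype='float32')
    encoded[np.arange(labels.shape[0]), labels] = 1.0
    return encoded

print(one_hot([0, 2, 1], 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```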
Build the convolutional neural network model.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
Compile the model, specifying the optimizer and loss function.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
Train the model and evaluate its performance.
history = model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f'Test accuracy: {test_acc}')
Visualize the training process.
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Improving Generalization with Data Augmentation
Use ImageDataGenerator for data augmentation.
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True)
datagen.fit(x_train)
Train the model with the augmented data.
model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10, validation_data=(x_test, y_test))
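As an aside, newer Keras releases favor preprocessing layers over ImageDataGenerator; a minimal sketch of an equivalent augmentation pipeline (the specific layer choices here are illustrative):

```python
import numpy as np
import keras
from keras import layers

# A small augmentation pipeline built from Keras preprocessing layers.
data_augmentation = keras.Sequential([
    layers.RandomFlip('horizontal'),
    layers.RandomRotation(0.1),
    layers.RandomTranslation(0.1, 0.1),
])

# Augmentation layers transform inputs only when training=True;
# at inference time they pass data through unchanged.
batch = np.random.rand(4, 32, 32, 3).astype('float32')
augmented = data_augmentation(batch, training=True)
print(augmented.shape)  # (4, 32, 32, 3)
```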
Transfer Learning with a Pretrained Model
Load the pretrained VGG16 model.
from keras.applications import VGG16
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
Freeze the pretrained layers and add custom classification layers.
for layer in base_model.layers:
    layer.trainable = False

model = Sequential()
model.add(base_model)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(10, activation='softmax'))
Compile and train the transfer-learning model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
Saving and Loading Models
Save the trained model.
model.save('image_classifier.h5')
Load the saved model to make predictions.
from keras.models import load_model
loaded_model = load_model('image_classifier.h5')
predictions = loaded_model.predict(x_test)
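The predict call returns one row of 10 class probabilities per image; np.argmax recovers the predicted labels. A small sketch with a hypothetical 3-class batch standing in for real model output:

```python
import numpy as np

# A hypothetical 3-sample, 3-class batch stands in for model output here.
predictions = np.array([[0.1, 0.7, 0.2],
                        [0.8, 0.1, 0.1],
                        [0.2, 0.2, 0.6]])
predicted_classes = np.argmax(predictions, axis=1)
print(predicted_classes)  # [1 0 2]

# With one-hot test labels, accuracy is the fraction of matching argmaxes.
y_true = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
accuracy = np.mean(predicted_classes == np.argmax(y_true, axis=1))
print(accuracy)  # 1.0
```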
Python and Keras Examples
Below is a curated set of categories and sample code for practical machine learning and deep learning with Python and Keras. The examples cover foundational models, computer vision, natural language processing, and more, and are suitable for learners at different stages.
Basic Neural Network Example
Build a fully connected network with Keras for a binary classification task:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(20,)))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=32)
Handwritten Digit Recognition (MNIST), shown here in PyTorch:
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x
Convolutional Neural Network (CNN) Example
An MNIST handwritten digit recognition implementation:
from keras.layers import Conv2D, MaxPooling2D, Flatten

model = Sequential()
model.add(Conv2D(32, (3,3), activation='relu', input_shape=(28,28,1)))
model.add(MaxPooling2D((2,2)))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Recurrent Neural Network (RNN) Example
An LSTM text-generation model:
from keras.layers import LSTM, Embedding

model = Sequential()
model.add(Embedding(vocab_size, 50, input_length=max_len))
model.add(LSTM(100, return_sequences=True))
model.add(Dense(vocab_size, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
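Generating text from such a model involves repeatedly sampling the next token from the softmax output; a NumPy sketch of temperature-controlled sampling (the sample_next_token helper is hypothetical):

```python
import numpy as np

def sample_next_token(probs, temperature=1.0, rng=None):
    """Sample a token index from a model's softmax output.

    Lower temperature sharpens the distribution toward argmax;
    higher temperature flattens it toward uniform.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    probs = np.asarray(probs, dtype='float64')
    logits = np.log(probs + 1e-9) / temperature
    scaled = np.exp(logits - np.max(logits))  # subtract max for stability
    scaled /= scaled.sum()
    return int(rng.choice(len(scaled), p=scaled))

# Near-zero temperature approaches greedy argmax.
print(sample_next_token([0.1, 0.8, 0.1], temperature=0.01))  # 1
```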
Text Classification with a Recurrent Neural Network (RNN)
from tensorflow.keras.layers import Embedding, SimpleRNN

model = Sequential([
    Embedding(10000, 32),
    SimpleRNN(32),
    Dense(1, activation='sigmoid')
])
Graph Neural Network (GNN)
import torch
import torch_geometric.nn as geom_nn

class GCN(torch.nn.Module):
    def __init__(self):
        super(GCN, self).__init__()
        self.conv1 = geom_nn.GCNConv(3, 16)
        self.conv2 = geom_nn.GCNConv(16, 2)
Autoencoder
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense

input_img = Input(shape=(784,))
encoded = Dense(32, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = tf.keras.Model(input_img, decoded)
Transfer Learning Example
Use the pretrained VGG16 model:
from keras.applications.vgg16 import VGG16

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224,224,3))
for layer in base_model.layers:
    layer.trainable = False

model = Sequential()
model.add(base_model)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(10, activation='softmax'))
Generative Adversarial Network (GAN) Example
A simple GAN implementation:
generator = Sequential([
    Dense(128, input_dim=100, activation='relu'),
    Dense(784, activation='tanh')
])
discriminator = Sequential([
    Dense(128, input_dim=784, activation='relu'),
    Dense(1, activation='sigmoid')
])
gan = Sequential([generator, discriminator])
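The models above still need compiling and an alternating training loop: the discriminator learns to separate real from generated samples, while the generator, trained through the frozen discriminator, learns to fool it. A minimal one-iteration sketch on tiny hypothetical dimensions (8-dim noise, 16-dim data, random arrays as stand-in samples):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

latent_dim, data_dim = 8, 16  # tiny illustrative sizes

generator = Sequential([Dense(16, input_shape=(latent_dim,), activation='relu'),
                        Dense(data_dim, activation='tanh')])
discriminator = Sequential([Dense(16, input_shape=(data_dim,), activation='relu'),
                            Dense(1, activation='sigmoid')])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

real_batch = np.random.rand(4, data_dim).astype('float32') * 2 - 1
noise = np.random.rand(4, latent_dim).astype('float32')

# 1) Discriminator step: real samples labeled 1, generated samples labeled 0.
fake_batch = generator.predict(noise, verbose=0)
d_loss_real = discriminator.train_on_batch(real_batch, np.ones((4, 1)))
d_loss_fake = discriminator.train_on_batch(fake_batch, np.zeros((4, 1)))

# 2) Generator step: freeze the discriminator, then train the stacked model
#    to push generated samples toward the "real" label.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')
g_loss = gan.train_on_batch(noise, np.ones((4, 1)))
print(float(d_loss_real), float(g_loss))
```

In a real training loop, steps 1 and 2 alternate for many iterations over minibatches of actual data.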
Reinforcement Learning Example
A deep Q-network (DQN) skeleton:
from keras.layers import Input
from keras.models import Model

states = Input(shape=(state_size,))
actions = Dense(64, activation='relu')(states)
q_values = Dense(action_size, activation='linear')(actions)
model = Model(inputs=states, outputs=q_values)
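The network above only maps states to Q-values; the training target comes from the Bellman equation: for each transition, target = r + gamma * max_a Q(s', a), with no bootstrap term at terminal states. A NumPy sketch with hypothetical transitions (the q_next array stands in for the network's predictions on next states):

```python
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0, -1.0])
dones = np.array([False, False, True])   # episode-termination flags
q_next = np.array([[0.5, 1.5],            # hypothetical Q(s', a) per action
                   [2.0, 0.0],
                   [0.3, 0.7]])

# Bellman target: r + gamma * max_a Q(s', a); terminal states keep only r.
targets = rewards + gamma * np.max(q_next, axis=1) * (~dones)
print(targets)  # [ 2.485  1.98  -1.   ]
```

These targets would then be regressed against the model's Q-value outputs for the actions actually taken.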
Hyperparameter Tuning Example
Hyperparameter search with Keras Tuner:
import keras_tuner as kt

def build_model(hp):
    model = Sequential()
    model.add(Dense(units=hp.Int('units', min_value=32, max_value=512, step=32), activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

tuner = kt.RandomSearch(build_model, objective='val_accuracy', max_trials=5)
tuner.search(X_train, y_train, epochs=5, validation_data=(X_val, y_val))
Time-Series Forecasting Example
Sequence prediction with Conv1D:
from keras.layers import Conv1D

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(None, 1)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
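The input_shape=(None, 1) expects windows of past values; building them from a raw series is a simple sliding-window transform. A NumPy sketch (make_windows is a hypothetical helper):

```python
import numpy as np

def make_windows(series, window_size):
    """Slice a 1-D series into (samples, window_size, 1) inputs
    and next-step scalar targets."""
    X, y = [], []
    for i in range(len(series) - window_size):
        X.append(series[i:i + window_size])
        y.append(series[i + window_size])
    X = np.array(X, dtype='float32').reshape(-1, window_size, 1)
    return X, np.array(y, dtype='float32')

series = np.arange(10, dtype='float32')
X, y = make_windows(series, window_size=3)
print(X.shape, y.shape)  # (7, 3, 1) (7,)
print(y[:3])             # [3. 4. 5.]
```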
Attention Mechanism (Seq2Seq)
from tensorflow.keras.layers import LSTM, Attention

encoder = LSTM(64, return_sequences=True)
decoder = LSTM(64, return_sequences=True)
attention = Attention()
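These layers only become useful once wired together: the Attention layer takes [query, value], with the decoder sequence attending over the encoder states. A minimal sketch with hypothetical input shapes (10-step sequences of 8 features on both sides):

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Attention, Concatenate, Dense
from tensorflow.keras.models import Model

encoder_inputs = Input(shape=(10, 8))
decoder_inputs = Input(shape=(10, 8))
encoder_seq = LSTM(64, return_sequences=True)(encoder_inputs)
decoder_seq = LSTM(64, return_sequences=True)(decoder_inputs)

# Attention([query, value]): each decoder step attends over encoder states.
context = Attention()([decoder_seq, encoder_seq])
combined = Concatenate()([decoder_seq, context])
outputs = Dense(8)(combined)

model = Model([encoder_inputs, decoder_inputs], outputs)
out = model.predict([np.random.rand(2, 10, 8), np.random.rand(2, 10, 8)],
                    verbose=0)
print(out.shape)  # (2, 10, 8)
```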
Neural Style Transfer
import tensorflow as tf
from tensorflow.keras.applications import vgg19
from tensorflow.keras.preprocessing.image import load_img, img_to_array
import numpy as np

# Load and preprocess an image
def preprocess_image(image_path, target_size=(512, 512)):
    img = load_img(image_path, target_size=target_size)
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg19.preprocess_input(img)
    return tf.constant(img)

# Define the content loss and the style loss
def content_loss(base_content, target):
    return tf.reduce_mean(tf.square(base_content - target))

def gram_matrix(input_tensor):
    result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)
    input_shape = tf.shape(input_tensor)
    return result / tf.cast(input_shape[1] * input_shape[2], tf.float32)

def style_loss(base_style, gram_target):
    return tf.reduce_mean(tf.square(gram_matrix(base_style) - gram_target))

# Total loss
def total_loss(content_outputs, style_outputs, content_target, style_targets,
               style_weight=1e-2, content_weight=1e4):
    content_loss_value = content_loss(content_outputs, content_target)
    style_loss_value = 0
    for style_output, style_target in zip(style_outputs, style_targets):
        style_loss_value += style_loss(style_output, style_target)
    return content_weight * content_loss_value + style_weight * style_loss_value
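Style transfer then minimizes the total loss by gradient descent on the generated image itself rather than on any network weights. A minimal tf.GradientTape sketch, with a plain MSE toward a random target standing in for the VGG19 feature losses:

```python
import tensorflow as tf

tf.random.set_seed(0)

# Tiny stand-in tensors: in the real pipeline these would be VGG19
# feature maps of the content and style images.
content_target = tf.random.normal((1, 8, 8, 4))
generated = tf.Variable(tf.random.normal((1, 8, 8, 4)))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

# Optimization loop: differentiate the loss with respect to the
# generated image and step the image, not network weights.
losses = []
for step in range(20):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(generated - content_target))
    grad = tape.gradient(loss, generated)
    optimizer.apply_gradients([(grad, generated)])
    losses.append(float(loss))
print(losses[0] > losses[-1])  # True: the image moves toward the target
```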
Complete example collections typically also cover the following areas:
- Autoencoders and dimensionality-reduction techniques
- Attention-mechanism implementations
- Multi-task learning frameworks
- Model-interpretability tools (e.g., SHAP)
- Model quantization and optimization for deployment
- Anomaly-detection algorithms
- Multimodal learning models
For more concrete implementations, consult the official Keras documentation, open-source projects on GitHub, and Kaggle competition case studies. Every example should include four core parts: data preprocessing, model construction, the training loop, and evaluation metrics; in practice, adjust the network architecture and hyperparameters to the specific dataset.
Autoencoders and Dimensionality Reduction in Practice (Python)
The following summarizes application scenarios and core code snippets for autoencoders and dimensionality-reduction techniques, covering basic implementations, visualization, and optimization tips. For brevity, only a few representative examples are listed; others can be reproduced by extending the data or adjusting the parameters.
Basic Autoencoder (MNIST dataset)
from keras.layers import Input, Dense
from keras.models import Model

input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
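For dimensionality reduction, a separate encoder model can share the trained layers and output the 32-dimensional code directly. A self-contained sketch (random inputs stand in for MNIST):

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

input_img = Input(shape=(784,))
encoded = Dense(32, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# The encoder shares the autoencoder's layers, so training the
# autoencoder also trains this 784 -> 32 projection.
encoder = Model(input_img, encoded)
codes = encoder.predict(np.random.rand(5, 784).astype('float32'), verbose=0)
print(codes.shape)  # (5, 32)
```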
Convolutional Autoencoder (Image Denoising)
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D

input_img = Input(shape=(28, 28, 1))
x = Conv2D(16, (3,3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2,2), padding='same')(x)
x = Conv2D(8, (3,3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2,2), padding='same')(x)

x = Conv2D(8, (3,3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2,2))(x)
x = Conv2D(16, (3,3), activation='relu', padding='same')(x)
x = UpSampling2D((2,2))(x)
decoded = Conv2D(1, (3,3), activation='sigmoid', padding='same')(x)
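For denoising, training pairs corrupted inputs with the clean images as targets. A self-contained sketch of the idea on a slimmed-down version of the model above, with random arrays standing in for real image data:

```python
import numpy as np
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

# A smaller convolutional autoencoder, same structure as above.
input_img = Input(shape=(28, 28, 1))
x = Conv2D(8, (3, 3), activation='relu', padding='same')(input_img)
encoded = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# Denoising: corrupt the inputs but keep the clean images as targets.
clean = np.random.rand(8, 28, 28, 1).astype('float32')
noisy = np.clip(clean + 0.3 * np.random.randn(8, 28, 28, 1),
                0.0, 1.0).astype('float32')
autoencoder.fit(noisy, clean, epochs=1, batch_size=4, verbose=0)
reconstructed = autoencoder.predict(noisy, verbose=0)
print(reconstructed.shape)  # (8, 28, 28, 1)
```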