
Table of Contents

  • I. Preparation
    • 1. Set up the GPU
    • 2. Import the Data
    • 3. Inspect the Data
  • II. Data Preprocessing
    • 1. Load the Data
    • 2. Visualize the Data
    • 3. Re-check the Data
    • 4. Configure the Dataset
  • III. Model Reproduction
    • 1. DenseLayer
    • 2. Transition
    • 3. Build the DenseNet Network
  • IV. Train the Model
  • V. Visualize the Results
  • Summary

  • 🍨 This article is a learning-record blog post from the 🔗 365-day deep learning training camp
  • 🍖 Original author: K同学啊

I. Preparation

1. Set up the GPU

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)  # configure the GPU: allocate memory on demand
    tf.config.set_visible_devices([gpus[0]], "GPU")

2. Import the Data

import matplotlib.pyplot as plt
# Chinese font support
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display the minus sign correctly

import os, PIL, pathlib
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, models

data_dir = "8/bird_photos"
data_dir = pathlib.Path(data_dir)

3. Inspect the Data

image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Total number of images: 565

II. Data Preprocessing

1. Load the Data

batch_size = 8
img_height = 224
img_width = 224
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(data_dir,validation_split=0.2,subset="training",seed=123,image_size=(img_height, img_width),batch_size=batch_size)

Found 565 files belonging to 4 classes.
Using 452 files for training.

"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(data_dir,validation_split=0.2,subset="validation",seed=123,image_size=(img_height, img_width),batch_size=batch_size)

Found 565 files belonging to 4 classes.
Using 113 files for validation.

class_names = train_ds.class_names
print(class_names)

['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']

2. Visualize the Data

plt.figure(figsize=(10, 5))  # figure width 10, height 5

for images, labels in train_ds.take(1):
    for i in range(8):
        ax = plt.subplot(2, 4, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")

[Figure: a 2x4 grid of sample training images, each titled with its class name]

plt.imshow(images[1].numpy().astype("uint8"))

<matplotlib.image.AxesImage at 0x175a9e920>
[Figure: the single image images[1] displayed with plt.imshow]

3. Re-check the Data

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

(8, 224, 224, 3)
(8,)


4. Configure the Dataset

AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

III. Model Reproduction

1. DenseLayer

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Model

class DenseLayer(tf.keras.Model):
    """Basic unit of DenseBlock (using bottleneck layer)"""
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate):
        super(DenseLayer, self).__init__()
        # BatchNorm + ReLU + 1x1 Conv (bottleneck layer)
        self.norm1 = layers.BatchNormalization()
        self.relu1 = layers.ReLU()
        self.conv1 = layers.Conv2D(filters=bn_size * growth_rate, kernel_size=1, strides=1, use_bias=False, padding="valid")
        # BatchNorm + ReLU + 3x3 Conv (feature extraction)
        self.norm2 = layers.BatchNormalization()
        self.relu2 = layers.ReLU()
        self.conv2 = layers.Conv2D(filters=growth_rate, kernel_size=3, strides=1, padding="same", use_bias=False)
        self.drop_rate = drop_rate

    def call(self, x, training=False):
        """Forward pass"""
        out = self.conv1(self.relu1(self.norm1(x, training=training)))
        out = self.conv2(self.relu2(self.norm2(out, training=training)))
        # Dropout if drop_rate > 0
        if self.drop_rate > 0:
            out = layers.Dropout(self.drop_rate)(out, training=training)
        # Concatenate input and new features
        return tf.concat([x, out], axis=-1)  # TensorFlow puts channels on axis=-1

class DenseBlock(tf.keras.Model):
    """Dense Block for DenseNet"""
    def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate):
        super(DenseBlock, self).__init__()
        self.layers_list = []
        # Add the DenseLayers one by one
        for i in range(num_layers):
            layer = DenseLayer(num_input_features + i * growth_rate, growth_rate, bn_size, drop_rate)
            self.layers_list.append(layer)

    def call(self, x, training=False):
        """Forward pass"""
        for layer in self.layers_list:
            x = layer(x, training=training)  # pass through each DenseLayer in turn, concatenating features
        return x
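
As a quick sanity check (a minimal sketch, not part of the original notebook), you can push a dummy tensor through a DenseBlock and watch the channel dimension grow by growth_rate per layer, ending at num_input_features + num_layers * growth_rate:

# Illustrative check with assumed shapes: channel growth inside a DenseBlock
block = DenseBlock(num_layers=6, num_input_features=64, bn_size=4, growth_rate=32, drop_rate=0)
out = block(tf.zeros((1, 56, 56, 64)), training=False)  # one 56x56 feature map with 64 channels
print(out.shape)  # expected: (1, 56, 56, 64 + 6 * 32) = (1, 56, 56, 256)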

2. Transition

class TransitionLayer(tf.keras.Model):
    """Transition layer between two adjacent DenseBlocks"""
    def __init__(self, num_input_features, num_output_features):
        super(TransitionLayer, self).__init__()
        # Batch Normalization + ReLU
        self.norm = layers.BatchNormalization()
        self.relu = layers.ReLU()
        # 1x1 Convolution to reduce feature maps
        self.conv = layers.Conv2D(filters=num_output_features, kernel_size=1, strides=1, use_bias=False, padding="valid")
        # 2x2 Average Pooling (stride=2)
        self.pool = layers.AveragePooling2D(pool_size=2, strides=2)

    def call(self, x, training=False):
        """Forward pass"""
        x = self.conv(self.relu(self.norm(x, training=training)))
        x = self.pool(x)  # reduce spatial dimensions
        return x
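
A similar illustrative check (again with assumed shapes, not from the original post) confirms that a transition layer compresses the channels to num_output_features and halves the spatial resolution:

# Illustrative check: the transition layer halves H and W and compresses the channels
trans = TransitionLayer(num_input_features=256, num_output_features=128)
out = trans(tf.zeros((1, 56, 56, 256)), training=False)
print(out.shape)  # expected: (1, 28, 28, 128)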

3. Build the DenseNet Network

class DenseNet(Model):
    """DenseNet-BC Model"""
    def __init__(self, growth_rate=32, block_config=(6, 12, 24, 16), num_init_features=64,
                 bn_size=4, compression_rate=0.5, drop_rate=0, num_classes=1000):
        """
        :param growth_rate: (int) number of filters used in DenseLayer, `k` in the paper
        :param block_config: (list of 4 ints) number of layers in each DenseBlock
        :param num_init_features: (int) number of filters in the first Conv2D
        :param bn_size: (int) the factor used in the bottleneck layer
        :param compression_rate: (float) the compression rate used in the Transition Layer
        :param drop_rate: (float) the drop rate after each DenseLayer
        :param num_classes: (int) number of classes for classification
        """
        super(DenseNet, self).__init__()

        # First convolutional layer (Conv2D + BN + ReLU + MaxPool)
        self.conv0 = layers.Conv2D(filters=num_init_features, kernel_size=7, strides=2,
                                   padding="same", use_bias=False)
        self.norm0 = layers.BatchNormalization()
        self.relu0 = layers.ReLU()
        self.pool0 = layers.MaxPooling2D(pool_size=3, strides=2, padding="same")

        # Dense Blocks
        self.blocks = []
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = DenseBlock(num_layers, num_features, bn_size, growth_rate, drop_rate)
            self.blocks.append(block)
            num_features += num_layers * growth_rate

            # Transition layer between blocks (not after the last block)
            if i != len(block_config) - 1:
                num_output_features = int(num_features * compression_rate)
                transition = TransitionLayer(num_features, num_output_features)
                self.blocks.append(transition)
                num_features = num_output_features

        # Final BN + ReLU
        self.norm5 = layers.BatchNormalization()
        self.relu5 = layers.ReLU()

        # Global Average Pooling + Fully Connected Layer (classifier)
        self.global_avg_pool = layers.GlobalAveragePooling2D()
        self.classifier = layers.Dense(num_classes)

        # Weight initialization
        self._init_weights()

    def _init_weights(self):
        """Initialize weights similar to PyTorch's kaiming_normal_"""
        for layer in self.layers:
            if isinstance(layer, layers.Conv2D):
                layer.kernel_initializer = tf.keras.initializers.HeNormal()
            elif isinstance(layer, layers.BatchNormalization):
                layer.beta_initializer = tf.keras.initializers.Zeros()
                layer.gamma_initializer = tf.keras.initializers.Ones()
            elif isinstance(layer, layers.Dense):
                layer.bias_initializer = tf.keras.initializers.Zeros()

    def call(self, x, training=False):
        """Forward pass"""
        x = self.conv0(x)
        x = self.relu0(self.norm0(x, training=training))
        x = self.pool0(x)

        for block in self.blocks:
            x = block(x, training=training)  # pass through DenseBlocks and TransitionLayers

        x = self.relu5(self.norm5(x, training=training))
        x = self.global_avg_pool(x)
        x = self.classifier(x)
        return x

def densenet121(pretrained=False, weights_path=None, **kwargs):
    """DenseNet-121 in TensorFlow"""
    model = DenseNet(num_init_features=64, growth_rate=32, block_config=(6, 12, 24, 16), **kwargs)
    if pretrained and weights_path:
        model.load_weights(weights_path)
        print(f"Loaded pretrained weights from {weights_path}")
    return model

model = densenet121(pretrained=False, num_classes=4) 
model.compile(optimizer="adam",
              # the Dense classifier outputs raw logits, so tell the loss explicitly
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
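
Before training, an optional sanity check (a hypothetical snippet, not in the original notebook) is to run one dummy batch through the subclassed model so that its weights are built, and to confirm that the classifier emits 4 logits, one per bird class:

# Illustrative check: build the model with a dummy forward pass and inspect the output shape
logits = model(tf.zeros((1, img_height, img_width, 3)), training=False)
print(logits.shape)  # expected: (1, 4)
model.summary()      # only works after the model has been built by a forward pass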

IV. Train the Model

epochs = 10

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)

Epoch 1/10
57/57 [==============================] - 45s 737ms/step - loss: 5.5120 - accuracy: 0.3230 - val_loss: 1.3863 - val_accuracy: 0.2655
Epoch 2/10
57/57 [==============================] - 43s 750ms/step - loss: 3.0714 - accuracy: 0.4292 - val_loss: 1.3863 - val_accuracy: 0.3186
Epoch 3/10
57/57 [==============================] - 41s 718ms/step - loss: 1.3863 - accuracy: 0.4646 - val_loss: 1.3863 - val_accuracy: 0.3186
Epoch 4/10
57/57 [==============================] - 41s 717ms/step - loss: 1.3863 - accuracy: 0.4447 - val_loss: 1.3863 - val_accuracy: 0.4336
Epoch 5/10
57/57 [==============================] - 41s 716ms/step - loss: 1.3863 - accuracy: 0.4447 - val_loss: 1.3863 - val_accuracy: 0.4425
Epoch 6/10
57/57 [==============================] - 41s 716ms/step - loss: 1.3863 - accuracy: 0.4447 - val_loss: 1.3863 - val_accuracy: 0.5310
Epoch 7/10
57/57 [==============================] - 41s 716ms/step - loss: 1.3863 - accuracy: 0.4447 - val_loss: 1.3863 - val_accuracy: 0.5398
Epoch 8/10
57/57 [==============================] - 41s 717ms/step - loss: 1.3863 - accuracy: 0.4447 - val_loss: 1.3863 - val_accuracy: 0.5310
Epoch 9/10
57/57 [==============================] - 41s 719ms/step - loss: 1.3863 - accuracy: 0.4447 - val_loss: 1.3863 - val_accuracy: 0.5221
Epoch 10/10
57/57 [==============================] - 41s 716ms/step - loss: 1.3863 - accuracy: 0.4447 - val_loss: 1.3863 - val_accuracy: 0.5398

V. Visualize the Results

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: Training and Validation Accuracy (left) and Training and Validation Loss (right) over the 10 epochs]

Summary

This week I focused on DenseNet, in particular its architecture and, above all, its dense connectivity mechanism. Reproducing the network on last week's dataset gave me a deeper understanding of how DenseNet is structured and how it is applied.
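
For reference, the dense connectivity mentioned above is usually written as (standard notation from the DenseNet paper, not something specific to this post)

$$x_{\ell} = H_{\ell}([x_0, x_1, \dots, x_{\ell-1}]),$$

i.e. layer $\ell$ receives the concatenation of every preceding feature map. This is exactly what the tf.concat([x, out], axis=-1) call in DenseLayer implements, and it is why the channel count grows by growth_rate (the paper's k) after every layer.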


