
A Deep Learning "Tuning" Jargon Handbook: What Do Learning Rate, Batch Size, and Epoch Actually Mean?



Introduction: From "Alchemy" to Science, Demystifying Hyperparameter Tuning

"Alchemy" is the deep learning community's nickname for hyperparameter tuning: it looks mysterious and experience-driven, but in reality it is a craft that can be mastered with scientific methods and the right tools. Newcomers are often confused by terms such as learning rate, batch size, and epoch, and do not know how to set them sensibly. In this article we use the visualization tool Weights & Biases (W&B) to make these abstract concepts concrete and take you from "superstitious tuning" to "scientific tuning".

Whether you are brand new to deep learning or an experienced developer who wants to learn tuning systematically, this guide will help you build a systematic tuning mindset, understand the principles behind each hyperparameter, and master efficient experiment management. Through plenty of visual examples and real code, you will see exactly how these parameters affect model training.

1. Environment Setup and Tool Configuration

1.1 Installing the Required Libraries

Before getting started, install a few required Python libraries:

pip install torch torchvision torchaudio
pip install wandb matplotlib numpy scikit-learn
pip install tensorboard

1.2 Configuring Weights & Biases

Weights & Biases (W&B) is a powerful experiment tracking tool that helps us visualize and compare the effects of different hyperparameters.

import wandb
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_classification
from torch.utils.data import DataLoader, TensorDataset

# Log in to W&B (you need to register an account the first time)
wandb.login()

# Create a simple classification dataset for the demos
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)
X = torch.FloatTensor(X)
y = torch.LongTensor(y)
dataset = TensorDataset(X, y)

2. Core Hyperparameters in Depth

2.1 Learning Rate: The Step Size of Learning

The learning rate is the most important hyperparameter: it controls the step size of each parameter update. A simple analogy helps: think of searching for the lowest point of a valley (the minimum of the loss).

  • Learning rate too large: like a giant taking huge strides, it can overshoot the lowest point or even diverge
  • Learning rate too small: like an ant crawling, convergence is extremely slow and training may stall in a local optimum
  • Learning rate just right: the model reaches the lowest point quickly and stably (a minimal sketch of the update rule follows this list)
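
To see why the step size matters, here is a minimal, framework-free sketch of the plain gradient descent update w ← w - lr · ∇L(w); the toy one-dimensional loss and the starting point are made up purely for illustration:

# Gradient descent on a toy 1-D loss L(w) = (w - 3)**2, whose gradient is 2 * (w - 3)
def toy_gradient(w):
    return 2 * (w - 3)

for lr in [0.01, 0.1, 1.1]:            # too small, reasonable, too large
    w = 0.0                            # arbitrary starting point
    for _ in range(20):
        w = w - lr * toy_gradient(w)   # update step: learning rate times gradient
    print(f"lr={lr}: w after 20 steps = {w:.3f} (the minimum is at w = 3)")

With lr=1.1 the iterate moves further away from the minimum on every step, which is exactly the divergence described above.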
2.1.1 A Learning Rate Visualization Experiment
# Define a simple model (used throughout the rest of the article)
class SimpleModel(nn.Module):
    def __init__(self, input_size=20, hidden_size=10, output_size=2):
        super(SimpleModel, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x


def test_learning_rates():
    """Test how different learning rates affect training."""
    learning_rates = [0.0001, 0.001, 0.01, 0.1, 1.0]

    results = {}
    for lr in learning_rates:
        # Initialize a W&B run
        wandb.init(project="learning-rate-demo",
                   name=f"lr_{lr}",
                   config={"learning_rate": lr})

        model = SimpleModel()
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.SGD(model.parameters(), lr=lr)

        train_loader = DataLoader(dataset, batch_size=32, shuffle=True)

        losses = []
        for epoch in range(50):
            epoch_loss = 0
            for batch_x, batch_y in train_loader:
                optimizer.zero_grad()
                outputs = model(batch_x)
                loss = criterion(outputs, batch_y)
                loss.backward()
                optimizer.step()
                epoch_loss += loss.item()

            avg_loss = epoch_loss / len(train_loader)
            losses.append(avg_loss)

            # Log to W&B
            wandb.log({"loss": avg_loss, "epoch": epoch})

        results[lr] = losses
        wandb.finish()

    # Plot the comparison
    plt.figure(figsize=(12, 8))
    for lr, loss_values in results.items():
        plt.plot(loss_values, label=f"LR={lr}", linewidth=2)
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.title("Training loss curves for different learning rates")
    plt.legend()
    plt.grid(True)
    plt.savefig("learning_rate_comparison.png", dpi=300, bbox_inches='tight')
    plt.show()

# Run the learning rate experiment
test_learning_rates()
2.1.2 A Learning Rate Finder Trick
def find_optimal_lr():
    """Use a learning rate range test to find a good learning rate."""
    model = SimpleModel()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=1e-6)

    # Learning rate range test: from 1e-6 to 0.1 in 100 steps
    lr_multiplier = (1e-1 / 1e-6) ** (1 / 100)

    train_loader = DataLoader(dataset, batch_size=32, shuffle=True)

    losses = []
    learning_rates = []

    for batch_idx, (batch_x, batch_y) in enumerate(train_loader):
        if batch_idx >= 100:  # test up to 100 mini-batches (this loader yields about 32 per pass)
            break

        # Update the learning rate
        lr = 1e-6 * (lr_multiplier ** batch_idx)
        for param_group in optimizer.param_groups:
            param_group['lr'] = lr

        optimizer.zero_grad()
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()

        losses.append(loss.item())
        learning_rates.append(lr)

    # Plot loss vs. learning rate
    plt.figure(figsize=(12, 6))
    plt.semilogx(learning_rates, losses)
    plt.xlabel("Learning Rate")
    plt.ylabel("Loss")
    plt.title("Learning rate range test")
    plt.grid(True)
    plt.savefig("lr_range_test.png", dpi=300, bbox_inches='tight')
    plt.show()

    # Pick the learning rate with the lowest observed loss
    min_loss_idx = np.argmin(losses)
    optimal_lr = learning_rates[min_loss_idx]
    print(f"Suggested learning rate: {optimal_lr:.6f}")

    return optimal_lr

# Run the learning rate finder
optimal_lr = find_optimal_lr()

2.2 Batch Size: How Many Samples per Update

Batch size affects the accuracy of gradient estimates and the speed of training; it has to balance memory usage against training stability.

2.2.1 The Effect of Batch Size
  • Small batches: noisy gradient estimates, a mild regularizing effect, slower convergence, but possibly better final solutions
  • Large batches: accurate gradient estimates and faster training, but possibly weaker generalization
def test_batch_sizes():
    """Test how different batch sizes affect training."""
    batch_sizes = [8, 16, 32, 64, 128]
    results = {}

    for bs in batch_sizes:
        wandb.init(project="batch-size-demo",
                   name=f"bs_{bs}",
                   config={"batch_size": bs})

        model = SimpleModel()
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.SGD(model.parameters(), lr=0.01)

        train_loader = DataLoader(dataset, batch_size=bs, shuffle=True)

        losses = []
        for epoch in range(30):
            epoch_loss = 0
            for batch_x, batch_y in train_loader:
                optimizer.zero_grad()
                outputs = model(batch_x)
                loss = criterion(outputs, batch_y)
                loss.backward()
                optimizer.step()
                epoch_loss += loss.item()

            avg_loss = epoch_loss / len(train_loader)
            losses.append(avg_loss)
            wandb.log({"loss": avg_loss, "epoch": epoch})

        results[bs] = losses
        wandb.finish()

    # Visualize the results
    plt.figure(figsize=(12, 8))
    for bs, loss_values in results.items():
        plt.plot(loss_values, label=f"Batch Size={bs}", linewidth=2)
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.title("Training loss for different batch sizes")
    plt.legend()
    plt.grid(True)
    plt.savefig("batch_size_comparison.png", dpi=300, bbox_inches='tight')
    plt.show()

# Run the batch size experiment
test_batch_sizes()
2.2.2 The Relationship Between Batch Size and Learning Rate

As a rule of thumb, when the batch size increases, the learning rate should be increased roughly in proportion:

def test_batch_size_lr_relationship():
    """Test the relationship between batch size and learning rate."""
    combinations = [
        (16, 0.01),
        (32, 0.02),
        (64, 0.04),
        (128, 0.08)
    ]

    results = {}
    for bs, lr in combinations:
        wandb.init(project="bs-lr-relationship",
                   name=f"bs_{bs}_lr_{lr}",
                   config={"batch_size": bs, "learning_rate": lr})

        model = SimpleModel()
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.SGD(model.parameters(), lr=lr)

        train_loader = DataLoader(dataset, batch_size=bs, shuffle=True)

        losses = []
        for epoch in range(30):
            epoch_loss = 0
            for batch_x, batch_y in train_loader:
                optimizer.zero_grad()
                outputs = model(batch_x)
                loss = criterion(outputs, batch_y)
                loss.backward()
                optimizer.step()
                epoch_loss += loss.item()

            avg_loss = epoch_loss / len(train_loader)
            losses.append(avg_loss)
            wandb.log({"loss": avg_loss, "epoch": epoch})

        results[f"BS{bs}_LR{lr}"] = losses
        wandb.finish()

    return results

# Run the batch size vs. learning rate experiment
bs_lr_results = test_batch_size_lr_relationship()

2.3 Epoch: The Number of Full Passes over the Data

The number of epochs determines how many times the model sees the training data, and it directly drives underfitting versus overfitting.
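
To make the bookkeeping concrete, here is a small sketch of how epochs, batch size, and parameter updates relate, assuming the 1,000-sample demo dataset created in Section 1.2:

import math

n_samples = 1000   # size of the demo dataset from Section 1.2
batch_size = 32
epochs = 50

steps_per_epoch = math.ceil(n_samples / batch_size)   # mini-batches per full pass over the data
total_updates = steps_per_epoch * epochs              # total number of parameter updates

print(f"{steps_per_epoch} updates per epoch, {total_updates} updates over {epochs} epochs")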

2.3.1 Early Stopping

To prevent overfitting, we can use early stopping:

def train_with_early_stopping(patience=5):
    """Train a model with early stopping."""
    wandb.init(project="early-stopping-demo", config={"patience": patience})

    model = SimpleModel()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    # Split into training and validation sets
    train_size = int(0.8 * len(dataset))
    val_size = len(dataset) - train_size
    train_dataset, val_dataset = torch.utils.data.random_split(dataset, [train_size, val_size])

    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False)

    best_val_loss = float('inf')
    patience_counter = 0
    best_model_state = None

    for epoch in range(100):  # maximum number of epochs
        # Training phase
        model.train()
        train_loss = 0
        for batch_x, batch_y in train_loader:
            optimizer.zero_grad()
            outputs = model(batch_x)
            loss = criterion(outputs, batch_y)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()

        # Validation phase
        model.eval()
        val_loss = 0
        with torch.no_grad():
            for batch_x, batch_y in val_loader:
                outputs = model(batch_x)
                loss = criterion(outputs, batch_y)
                val_loss += loss.item()

        avg_train_loss = train_loss / len(train_loader)
        avg_val_loss = val_loss / len(val_loader)

        wandb.log({
            "epoch": epoch,
            "train_loss": avg_train_loss,
            "val_loss": avg_val_loss
        })

        # Early stopping logic
        if avg_val_loss < best_val_loss:
            best_val_loss = avg_val_loss
            patience_counter = 0
            # Snapshot the weights (clone so later updates do not overwrite the copy)
            best_model_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
        else:
            patience_counter += 1
            if patience_counter >= patience:
                print(f"Early stopping triggered at epoch {epoch}")
                break

    # Restore the best model
    model.load_state_dict(best_model_state)
    wandb.finish()
    return model

# Run the early stopping example
model_with_early_stopping = train_with_early_stopping(patience=5)

3. Advanced Hyperparameter Tuning Techniques

3.1 Learning Rate Scheduling Strategies

3.1.1 Comparing Common Learning Rate Schedulers
def compare_lr_schedulers():
    """Compare different learning rate schedulers."""
    schedulers_to_test = {
        "StepLR": {"step_size": 10, "gamma": 0.1},
        "ExponentialLR": {"gamma": 0.95},
        "CosineAnnealingLR": {"T_max": 50},
        "ReduceLROnPlateau": {"patience": 5, "factor": 0.5}
    }

    results = {}
    for sched_name, sched_params in schedulers_to_test.items():
        wandb.init(project="lr-scheduler-demo",
                   name=sched_name,
                   config=sched_params)

        model = SimpleModel()
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.SGD(model.parameters(), lr=0.1)

        # Create the scheduler
        if sched_name == "StepLR":
            scheduler = optim.lr_scheduler.StepLR(optimizer, **sched_params)
        elif sched_name == "ExponentialLR":
            scheduler = optim.lr_scheduler.ExponentialLR(optimizer, **sched_params)
        elif sched_name == "CosineAnnealingLR":
            scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, **sched_params)
        elif sched_name == "ReduceLROnPlateau":
            scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, **sched_params)

        train_loader = DataLoader(dataset, batch_size=32, shuffle=True)

        lr_history = []
        for epoch in range(50):
            # Training step omitted here; we only track how the learning rate evolves
            current_lr = optimizer.param_groups[0]['lr']
            lr_history.append(current_lr)

            wandb.log({
                "epoch": epoch,
                "learning_rate": current_lr
            })

            # Update the learning rate
            if sched_name == "ReduceLROnPlateau":
                # A made-up validation loss just to drive the scheduler
                scheduler.step(0.9 - epoch * 0.01)
            else:
                scheduler.step()

        results[sched_name] = lr_history
        wandb.finish()

    # Plot the learning rate curves
    plt.figure(figsize=(12, 8))
    for sched_name, lr_values in results.items():
        plt.plot(lr_values, label=sched_name, linewidth=2)
    plt.xlabel("Epoch")
    plt.ylabel("Learning Rate")
    plt.title("Comparison of learning rate schedulers")
    plt.legend()
    plt.grid(True)
    plt.savefig("lr_schedulers_comparison.png", dpi=300, bbox_inches='tight')
    plt.show()

# Run the scheduler comparison
compare_lr_schedulers()

3.2 Regularization Hyperparameters

3.2.1 Tuning the Dropout Rate
class ModelWithDropout(nn.Module):
    """SimpleModel with a Dropout layer between the two linear layers."""
    def __init__(self, dropout_rate):
        super(ModelWithDropout, self).__init__()
        self.fc1 = nn.Linear(20, 50)
        self.dropout = nn.Dropout(dropout_rate)
        self.fc2 = nn.Linear(50, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.dropout(x)
        x = self.fc2(x)
        return x


def test_dropout_rates():
    """Test the effect of different dropout rates."""
    dropout_rates = [0.0, 0.2, 0.4, 0.5, 0.6]

    results = {}
    for dropout_rate in dropout_rates:
        wandb.init(project="dropout-demo",
                   name=f"dropout_{dropout_rate}",
                   config={"dropout_rate": dropout_rate})

        model = ModelWithDropout(dropout_rate)
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.Adam(model.parameters(), lr=0.001)

        # Split into training and validation sets
        train_size = int(0.8 * len(dataset))
        val_size = len(dataset) - train_size
        train_dataset, val_dataset = torch.utils.data.random_split(dataset, [train_size, val_size])

        train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
        val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False)

        train_losses = []
        val_losses = []

        for epoch in range(50):
            # Training
            model.train()
            train_loss = 0
            for batch_x, batch_y in train_loader:
                optimizer.zero_grad()
                outputs = model(batch_x)
                loss = criterion(outputs, batch_y)
                loss.backward()
                optimizer.step()
                train_loss += loss.item()

            # Validation
            model.eval()
            val_loss = 0
            with torch.no_grad():
                for batch_x, batch_y in val_loader:
                    outputs = model(batch_x)
                    loss = criterion(outputs, batch_y)
                    val_loss += loss.item()

            avg_train_loss = train_loss / len(train_loader)
            avg_val_loss = val_loss / len(val_loader)
            train_losses.append(avg_train_loss)
            val_losses.append(avg_val_loss)

            wandb.log({
                "epoch": epoch,
                "train_loss": avg_train_loss,
                "val_loss": avg_val_loss
            })

        results[dropout_rate] = {
            "train_loss": train_losses,
            "val_loss": val_losses
        }
        wandb.finish()

    return results

# Run the dropout experiment
dropout_results = test_dropout_rates()
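
Dropout is not the only regularization hyperparameter; weight decay (L2 regularization), mentioned again in the summary and troubleshooting sections, is another common one. The sketch below mirrors the dropout experiment but sweeps the optimizer's weight_decay argument instead; the decay values are illustrative, not recommendations:

def test_weight_decay():
    """A minimal weight decay (L2 regularization) sweep, mirroring the dropout experiment."""
    weight_decays = [0.0, 1e-5, 1e-4, 1e-3, 1e-2]  # illustrative values

    results = {}
    for wd in weight_decays:
        model = SimpleModel()
        criterion = nn.CrossEntropyLoss()
        # Adam applies the L2 penalty through its weight_decay argument
        optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=wd)

        train_loader = DataLoader(dataset, batch_size=32, shuffle=True)
        losses = []
        for epoch in range(30):
            epoch_loss = 0
            for batch_x, batch_y in train_loader:
                optimizer.zero_grad()
                loss = criterion(model(batch_x), batch_y)
                loss.backward()
                optimizer.step()
                epoch_loss += loss.item()
            losses.append(epoch_loss / len(train_loader))
        results[wd] = losses
    return results

weight_decay_results = test_weight_decay()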

4. Best Practices for Experiment Tracking and Management

4.1 Hyperparameter Sweeps with W&B

def hyperparameter_sweep():
    """Run a hyperparameter sweep with W&B."""
    sweep_config = {
        'method': 'bayes',  # use Bayesian optimization
        'metric': {
            'name': 'train_loss',  # must match a metric that is actually logged below
            'goal': 'minimize'
        },
        'parameters': {
            'learning_rate': {
                'min': 0.0001,
                'max': 0.1
            },
            'batch_size': {
                'values': [16, 32, 64, 128]
            },
            'optimizer': {
                'values': ['adam', 'sgd', 'rmsprop']
            },
            'dropout_rate': {
                'min': 0.0,
                'max': 0.7
            }
        }
    }

    sweep_id = wandb.sweep(sweep_config, project="hyperparameter-sweep-demo")

    def train():
        # Initialize a W&B run
        wandb.init()
        config = wandb.config

        # Create the model (with dropout, so the swept dropout_rate is actually used)
        model = ModelWithDropout(config.dropout_rate)
        criterion = nn.CrossEntropyLoss()

        # Choose the optimizer
        if config.optimizer == 'adam':
            optimizer = optim.Adam(model.parameters(), lr=config.learning_rate)
        elif config.optimizer == 'sgd':
            optimizer = optim.SGD(model.parameters(), lr=config.learning_rate)
        else:  # rmsprop
            optimizer = optim.RMSprop(model.parameters(), lr=config.learning_rate)

        # Create the data loader
        train_loader = DataLoader(dataset, batch_size=config.batch_size, shuffle=True)

        # Training loop
        for epoch in range(30):
            model.train()
            train_loss = 0
            for batch_x, batch_y in train_loader:
                optimizer.zero_grad()
                outputs = model(batch_x)
                loss = criterion(outputs, batch_y)
                loss.backward()
                optimizer.step()
                train_loss += loss.item()

            avg_loss = train_loss / len(train_loader)
            wandb.log({"train_loss": avg_loss, "epoch": epoch})

    # Run the sweep
    wandb.agent(sweep_id, train, count=20)  # run 20 experiments

# Execute the hyperparameter sweep
hyperparameter_sweep()

4.2 Analyzing and Visualizing Sweep Results

def analyze_sweep_results():
    """Analyze the results of a hyperparameter sweep."""
    # Connect to the W&B API
    api = wandb.Api()

    # Fetch all runs in the project
    runs = api.runs("hyperparameter-sweep-demo")

    # Collect the results
    results = []
    for run in runs:
        if run.state == "finished":
            results.append({
                "id": run.id,
                "name": run.name,
                "config": run.config,
                "train_loss": run.summary.get("train_loss", None)
            })

    # Find the best run
    best_run = min(results, key=lambda x: x["train_loss"] if x["train_loss"] else float('inf'))

    print("Best run configuration:")
    for key, value in best_run["config"].items():
        print(f"  {key}: {value}")
    print(f"Best training loss: {best_run['train_loss']}")

    # Hyperparameter importance plots are viewed in the W&B web UI
    print("\nSee the Weights & Biases website for more detailed visual analysis:")
    print("- Hyperparameter importance plot")
    print("- Parallel coordinates plot")
    print("- Loss surface plot")

# Analyze the results
analyze_sweep_results()

5. A Practical Tuning Guide and Tips

5.1 A Tuning Priority Checklist

Based on practical experience, here is a priority checklist for hyperparameter tuning:

def hyperparameter_priority_list():
    """A priority checklist for hyperparameter tuning."""
    priorities = [
        {
            "level": "High priority",
            "parameters": ["Learning rate", "Optimizer", "Model architecture"],
            "description": "These have the largest impact on model performance and should be tuned first",
            "tips": [
                "Start with a learning rate range test",
                "Adam is usually a good default choice",
                "Choose model depth and width according to task complexity"
            ]
        },
        {
            "level": "Medium priority",
            "parameters": ["Batch size", "Regularization parameters", "Data augmentation"],
            "description": "These affect training stability and generalization",
            "tips": [
                "Batch size is usually set to a power of 2",
                "Dropout rates are typically between 0.2 and 0.5",
                "Data augmentation can significantly improve generalization"
            ]
        },
        {
            "level": "Low priority",
            "parameters": ["Learning rate schedule", "Early stopping patience", "Weight initialization"],
            "description": "These are fine-tuning knobs to adjust after the basics are in place",
            "tips": [
                "Learning rate schedules can squeeze out extra performance",
                "Early stopping patience is usually set to 5-10 epochs",
                "Modern frameworks ship with sensible default initializations"
            ]
        }
    ]

    print("Deep learning tuning priority guide:")
    print("=" * 50)
    for level_info in priorities:
        print(f"\n{level_info['level']}:")
        print(f"  Parameters: {', '.join(level_info['parameters'])}")
        print(f"  Description: {level_info['description']}")
        print("  Tips:")
        for tip in level_info['tips']:
            print(f"    • {tip}")

# Print the tuning priority guide
hyperparameter_priority_list()

5.2 Common Problems and Solutions

def troubleshooting_guide():
    """A troubleshooting guide for common tuning problems."""
    problems = [
        {
            "problem": "Training loss is not decreasing",
            "possible_causes": [
                "Learning rate is too small",
                "Model architecture is too simple",
                "Vanishing gradients",
                "Data preprocessing errors"
            ],
            "solutions": [
                "Increase the learning rate or run a learning rate range test",
                "Increase model capacity",
                "Use activations such as ReLU and add BatchNorm",
                "Check data normalization and the preprocessing pipeline"
            ]
        },
        {
            "problem": "Validation loss is rising (overfitting)",
            "possible_causes": [
                "Model is too complex",
                "Not enough training data",
                "Insufficient regularization",
                "Training for too long"
            ],
            "solutions": [
                "Simplify the model or add regularization",
                "Collect more data or use data augmentation",
                "Increase Dropout or weight decay",
                "Use early stopping"
            ]
        },
        {
            "problem": "Training is unstable",
            "possible_causes": [
                "Learning rate is too large",
                "Batch size is too small",
                "Exploding gradients",
                "Too much noise in the data"
            ],
            "solutions": [
                "Reduce the learning rate or use learning rate warmup",
                "Increase the batch size or use gradient accumulation",
                "Use gradient clipping",
                "Clean the data or improve data quality"
            ]
        }
    ]

    print("\nTroubleshooting guide for common tuning problems:")
    print("=" * 50)
    for issue in problems:
        print(f"\nProblem: {issue['problem']}")
        print("Possible causes:")
        for cause in issue['possible_causes']:
            print(f"  • {cause}")
        print("Solutions:")
        for solution in issue['solutions']:
            print(f"  • {solution}")

# Print the troubleshooting guide
troubleshooting_guide()
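
Several of the fixes listed above, such as gradient clipping, are one-liners in PyTorch. Here is a minimal sketch of a single training step with gradient clipping; the max_norm value of 1.0 is just a common starting point, not a universal recommendation:

def train_step_with_clipping(model, criterion, optimizer, batch_x, batch_y, max_norm=1.0):
    """One training step with gradient clipping to tame exploding gradients."""
    optimizer.zero_grad()
    loss = criterion(model(batch_x), batch_y)
    loss.backward()
    # Rescale gradients so their global norm is at most max_norm
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()
    return loss.item()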

6. Summary and Further Learning

6.1 Key Takeaways

In this article we took a close look at the most important hyperparameters in deep learning:

  1. Learning rate: controls the step size of parameter updates and is the single most important hyperparameter
  2. Batch size: affects the quality of gradient estimates and the speed of training
  3. Number of epochs: determines how many times the model sees the data; guard against overfitting
  4. Regularization parameters: including Dropout and weight decay, which control model complexity

6.2 Resources for Further Learning

To further improve your tuning skills, the following resources are recommended:

  1. Papers to read

    • "Adam: A Method for Stochastic Optimization"
    • "Cyclical Learning Rates for Training Neural Networks"
    • "Bag of Tricks for Image Classification with Convolutional Neural Networks"
  2. Useful tools

    • Weights & Biases: experiment tracking and visualization
    • Optuna: a hyperparameter optimization framework (a minimal sketch follows this list)
    • TensorBoard: TensorFlow's visualization toolkit
  3. Practical advice

    • Start with small experiments and increase complexity gradually
    • Build a habit of systematic experiment logging
    • Learn to read and analyze training curves
    • Join open-source projects and learn from other people's tuning experience
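
As a quick taste of Optuna, here is a minimal sketch of an objective that tunes the learning rate and batch size of the SimpleModel on the demo dataset from the earlier sections; the search ranges, epoch count, and trial count are illustrative:

import optuna

def objective(trial):
    # Search space (illustrative ranges)
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64, 128])

    model = SimpleModel()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=lr)
    train_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

    for epoch in range(10):
        epoch_loss = 0
        for batch_x, batch_y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(batch_x), batch_y)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()

    # Return the average training loss of the final epoch (Optuna minimizes this)
    return epoch_loss / len(train_loader)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print("Best parameters:", study.best_params)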

6.3 Final Advice

Remember, tuning is a craft that combines theory with practice. The best way to learn is to gain experience on real projects while maintaining a solid understanding of the underlying principles. Tools like Weights & Biases help you systematize the tuning process and move from "alchemy" to science.

Start your tuning journey! Pick a project that interests you, apply the techniques from this article, and see for yourself how hyperparameters affect model performance. As experience accumulates, you will gradually develop your own tuning intuition and methodology.



