Creating a RAID5 Array from Multiple Disks and Growing It Later

✅ Create the disks, attach them to the EC2 instance, then check:

[root@ip-127-0-0-1 data]# lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1       259:0    0  40G  0 disk 
├─nvme0n1p1   259:1    0  40G  0 part /
├─nvme0n1p127 259:2    0   1M  0 part 
└─nvme0n1p128 259:3    0  10M  0 part /boot/efi
nvme1n1       259:4    0  25G  0 disk 
nvme2n1       259:5    0  25G  0 disk 
nvme3n1       259:6    0  25G  0 disk 
nvme4n1       259:7    0  25G  0 disk 
nvme5n1       259:8    0  25G  0 disk 
nvme6n1       259:9    0  25G  0 disk 
nvme7n1       259:10   0  25G  0 disk
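The volume creation and attach step itself is not shown above. For reference, a hedged AWS CLI sketch for one of the 25 GiB EBS volumes might look like the following; the availability zone, volume ID, instance ID, and device name are placeholders rather than values from this setup:

# create one 25 GiB gp3 volume (placeholder availability zone)
aws ec2 create-volume --size 25 --volume-type gp3 --availability-zone us-east-1a
# attach it; the device name is only a hint, the guest exposes it as an nvme device
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf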

✅ Create the RAID 5 array

✅ First creation
[root@ip-127-0-0-1 data]# mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
mdadm: Defaulting to version 1.2 metadata
[1040141.720624] raid6: avx512x4 gen() 15874 MB/s
[1040141.900622] raid6: avx512x2 gen() 15900 MB/s
[1040142.080622] raid6: avx512x1 gen() 15608 MB/s
[1040142.260621] raid6: avx2x4   gen() 13282 MB/s
[1040142.440613] raid6: avx2x2   gen() 17548 MB/s
[1040142.620613] raid6: avx2x1   gen() 12584 MB/s
[1040142.634906] raid6: using algorithm avx2x2 gen() 17548 MB/s
[1040142.830616] raid6: .... xor() 17751 MB/s, rmw enabled
[1040142.848740] raid6: using avx512x2 recovery algorithm
[1040142.876973] xor: automatically using best checksumming function   avx       
[1040142.911142] async_tx: api initialized (async)
[1040142.953397] md/raid:md5: device nvme3n1 operational as raid disk 2
[1040142.977724] md/raid:md5: device nvme2n1 operational as raid disk 1
[1040143.007732] md/raid:md5: device nvme1n1 operational as raid disk 0
[1040143.038997] md/raid:md5: raid level 5 active with 3 out of 4 devices, algorithm 2
[1040143.074939] md5: detected capacity change from 0 to 157181952
[1040143.104449] md: recovery of RAID array md5
mdadm: array /dev/md5 started.
✅ Deleted and re-created after a problem
[root@ip-127-0-0-1 data]# mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
mdadm: Defaulting to version 1.2 metadata
[1040812.304463] md/raid:md5: device nvme3n1 operational as raid disk 2
[1040812.341375] md/raid:md5: device nvme2n1 operational as raid disk 1
[1040812.378576] md/raid:md5: device nvme1n1 operational as raid disk 0
[1040812.412584] md/raid:md5: raid level 5 active with 3 out of 4 devices, algorithm 2
[1040812.452558] md5: detected capacity change from 0 to 157181952
mdadm: array /dev/md5 started.
[1040812.490559] md: recovery of RAID array md5
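The deletion that preceded this re-creation was not captured. Assuming the array was unmounted, a typical teardown sequence would look roughly like this:

# stop the array, then clear the md superblocks so the member disks look blank again
mdadm --stop /dev/md5
mdadm --zero-superblock /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1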

✅ Watch the build process

✅ Check the array status
[root@ip-127-0-0-1 data]# mdadm --detail /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Tue Apr 29 08:51:21 2025
        Raid Level : raid5
        Array Size : 78590976 (74.95 GiB 80.48 GB)
     Used Dev Size : 26196992 (24.98 GiB 26.83 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Apr 29 08:52:21 2025
             State : clean, degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 24% complete

              Name : 5
              UUID : 765eb9e7:993e38a9:30e4c551:c2d3696b
            Events : 4

    Number   Major   Minor   RaidDevice State
       0     259        4        0      active sync   /dev/sdb
       1     259        5        1      active sync   /dev/sdc
       2     259        6        2      active sync   /dev/sdd
       4     259        7        3      spare rebuilding   /dev/sde
✅ Check the build progress
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [==>..................]  recovery = 13.0% (3417244/26196992) finish=4.4min speed=85522K/sec

unused devices: <none>
Wait until the progress bar reaches 100%, which takes roughly 6-8 minutes; otherwise the later steps will fail.
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [===================>.]  recovery = 98.8% (25887256/26196992) finish=0.0min speed=87390K/sec

unused devices: <none>
[root@ip-127-0-0-1 data]# [1041121.201669] md: md5: recovery done.
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
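Instead of re-running cat /proc/mdstat by hand, mdadm can block until the initial build finishes; a minimal sketch:

# blocks until the resync/recovery of md5 completes, then returns
mdadm --wait /dev/md5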

✅ Configure automatic mounting (persistence)

✅ Create the XFS filesystem
[root@ip-127-0-0-1 data]# mkfs.xfs /dev/md5
mkfs.xfs: /dev/md5 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.
✅ Inspect the RAID5 layout
[root@ip-127-0-0-1 data]# lsblk
NAME          MAJ:MIN RM SIZE RO TYPE  MOUNTPOINTS
nvme0n1       259:0    0  40G  0 disk  
├─nvme0n1p1   259:1    0  40G  0 part  /
├─nvme0n1p127 259:2    0   1M  0 part  
└─nvme0n1p128 259:3    0  10M  0 part  /boot/efi
nvme1n1       259:4    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 
nvme2n1       259:5    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 
nvme3n1       259:6    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 
nvme4n1       259:7    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 
✅ Update /etc/mdadm.conf
[root@ip-127-0-0-1 data]# mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
ARRAY /dev/md5 metadata=1.2 name=5 UUID=bf661b1a:944c5721:0250d992:e188b1b9
✅ Get the filesystem UUID of the RAID5 device
[root@ip-127-0-0-1 data]# blkid /dev/md5
/dev/md5: UUID="f8efe843-ed80-4ad0-bbc4-5c18677b257f" BLOCK_SIZE="512" TYPE="xfs"
✅ Configure automatic mounting at boot
[root@ip-127-0-0-1 data]# tail -1 /etc/fstab 
UUID=f8efe843-ed80-4ad0-bbc4-5c18677b257f /data/raid-storge/  xfs  defaults,nofail  0  0
[root@ip-127-0-0-1 data]# mount -a
[1041519.933683] XFS (md5): Mounting V5 Filesystem
[1041520.000302] XFS (md5): Ending clean mount
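One step not captured above: the mount point directory must exist before mount -a can use the fstab entry. If reproducing this, create it first:

# create the directory referenced in /etc/fstab
mkdir -p /data/raid-storge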
✅ Check the size of the mounted directory
[root@ip-127-0-0-1 data]# df -h
Filesystem        Size  Used Avail Use% Mounted on
devtmpfs          4.0M     0  4.0M   0% /dev
tmpfs             3.9G     0  3.9G   0% /dev/shm
tmpfs             1.6G  636K  1.6G   1% /run
/dev/nvme0n1p1     40G  5.4G   35G  14% /
tmpfs             3.9G     0  3.9G   0% /tmp
/dev/nvme0n1p128   10M  1.3M  8.7M  13% /boot/efi
overlay            40G  5.4G   35G  14% /var/lib/docker/overlay2/84699b7470c48b0c4a1cb8b91b868be21f96c388de173f25df9ac741be7d0d0e/merged
tmpfs             782M     0  782M   0% /run/user/1000
/dev/md5           75G  568M   75G   1% /data/raid-storge
⚠️ Notes
The following error appears when formatting is started before the RAID5 build has reached 100%:
[root@ip-127-0-0-1 data]# mount -a
[1041253.813403] XFS (md5): Mounting V5 Filesystem
[1041253.829487] XFS (md5): totally zeroed log
[1041253.849364] XFS (md5): Corruption warning: Metadata has LSN (1:352) ahead of current LSN (1:0). Please unmount and run xfs_repair (>= v4.3) to resolve.
[1041253.914765] XFS (md5): log mount/recovery failed: error -22
[1041253.942365] XFS (md5): log mount failed
mount: /data/raid-storge: wrong fs type, bad option, bad superblock on /dev/md5, missing codepage or helper program, or other error.
Solution
Force-wipe all filesystem signatures on the RAID device:
[root@ip-127-0-0-1 data]# wipefs -a /dev/md5
/dev/md5: 4 bytes were erased at offset 0x00000000 (xfs): 58 46 53 42
# Confirm the device is now clean:
[root@ip-127-0-0-1 data]# wipefs /dev/md5
# Zero the start of the device with dd (covering the log area and superblock)
[root@ip-127-0-0-1 data]# dd if=/dev/zero of=/dev/md5 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.185719 s, 565 MB/s
# Re-create the XFS filesystem
[root@ip-127-0-0-1 data]# mkfs.xfs -f /dev/md5
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md5               isize=512    agcount=16, agsize=1227904 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=19646464, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
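Wiping and reformatting is the simplest fix here because the array holds no data yet. If the filesystem had contained data, the repair path suggested in the kernel message would be the route to try instead, roughly:

# unmount first (if mounted), then attempt an XFS log/metadata repair
umount /data/raid-storge
xfs_repair /dev/md5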

✅ Grow the RAID5 array by adding a new disk

✅ Confirm the new disk has no partitions
[root@ip-127-0-0-1 data]# lsblk
NAME          MAJ:MIN RM SIZE RO TYPE  MOUNTPOINTS
nvme0n1       259:0    0  40G  0 disk  
├─nvme0n1p1   259:1    0  40G  0 part  /
├─nvme0n1p127 259:2    0   1M  0 part  
└─nvme0n1p128 259:3    0  10M  0 part  /boot/efi
nvme1n1       259:4    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 /data/raid-storge
nvme2n1       259:5    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 /data/raid-storge
nvme3n1       259:6    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 /data/raid-storge
nvme4n1       259:7    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 /data/raid-storge
nvme5n1       259:8    0  25G  0 disk 
✅ If /dev/nvme5n1 has a partition table or filesystem on it, wipe it:
[root@ip-127-0-0-1 data]# wipefs -a /dev/nvme5n1
[root@ip-127-0-0-1 data]# dd if=/dev/zero of=/dev/nvme5n1 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.207361 s, 506 MB/s
✅ Add the disk to the array

This adds it to the array as a spare disk:

[root@ip-127-0-0-1 data]# mdadm --add /dev/md5 /dev/nvme5n1
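Before growing the array it is worth confirming the new device was accepted; a quick check (output not captured here) is:

# the new disk should appear at the end of the device list in the "spare" state
mdadm --detail /dev/md5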
✅ Grow the RAID5 array (increase the device count)
[root@ip-127-0-0-1 data]# mdadm --grow /dev/md5 --raid-devices=5
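mdadm --grow also accepts a --backup-file option, which stores the reshape-critical section outside the array so an interrupted reshape can be resumed. A hedged variant of the same command (the backup path is arbitrary but must not live on the array being reshaped):

# keep reshape-critical data outside the array in case of a crash mid-reshape
mdadm --grow /dev/md5 --raid-devices=5 --backup-file=/root/md5-grow.backup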
✅ Watch the reshape progress
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 nvme5n1[5] nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      [>....................]  reshape =  4.5% (1202548/26196992) finish=12.4min speed=33539K/sec

unused devices: <none>
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 nvme5n1[5] nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      [=====>...............]  reshape = 28.3% (7431172/26196992) finish=9.4min speed=33237K/sec

unused devices: <none>
[root@ip-127-0-0-1 data]# [1042541.211177] md: md5: reshape done.
[1042541.299852] md5: detected capacity change from 157181952 to 209575936
✅ Update the mdadm configuration file
[root@ip-127-0-0-1 data]# mdadm --detail --scan >> /etc/mdadm.conf
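Note that the scan output was already appended once in the earlier section, so appending it again leaves two ARRAY lines for md5 in /etc/mdadm.conf. A hedged cleanup sketch (back the file up first):

# drop stale ARRAY entries for md5, then append the current scan output
cp /etc/mdadm.conf /etc/mdadm.conf.bak
sed -i '/ARRAY \/dev\/md5/d' /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf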
✅ Grow the filesystem
If you are using an XFS filesystem:
[root@ip-127-0-0-1 data]# xfs_growfs /data/raid-storge
meta-data=/dev/md5               isize=512    agcount=16, agsize=1227904 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=19646464, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 19646464 to 26196992
If it is ext4:
resize2fs /dev/md5

This must be done while the filesystem is mounted; otherwise it may fail.

✅ Verify the new capacity
[root@ip-127-0-0-1 data]# df -h /data/raid-storge
Filesystem      Size  Used Avail Use% Mounted on
/dev/md5        100G  747M  100G   1% /data/raid-storge
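The array-level view should agree with df; one way to check the key fields:

# Array Size should now be roughly 100 GiB and Raid Devices should read 5
mdadm --detail /dev/md5 | grep -E 'Array Size|Raid Devices'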
⚠️ Notes:

A reshape is a high-risk operation; back up your data beforehand.
Do not power off, reboot, or format during the reshape.
Reshapes are slow, especially with large disks, and can take several hours or more.
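If a reshape is crawling, the md rebuild speed limits can be raised at run time; a hedged sketch (values are in KiB/s per device, tune them to what the disks can sustain):

# raise the minimum and maximum md resync/reshape speed
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000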
