Creating RAID1 and Expanding the RAID
Creating RAID1
[root@ip-127-0-0-1 data]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 40G 0 disk
├─nvme0n1p1 259:1 0 40G 0 part /
├─nvme0n1p127 259:2 0 1M 0 part
└─nvme0n1p128 259:3 0 10M 0 part /boot/efi
nvme1n1 259:4 0 25G 0 disk
nvme2n1 259:5 0 25G 0 disk
nvme3n1 259:6 0 25G 0 disk
nvme4n1 259:7 0 25G 0 disk
nvme5n1 259:8 0 25G 0 disk
nvme6n1 259:9 0 15G 0 disk
nvme7n1 259:10 0 15G 0 disk
[root@ip-127-0-0-1 data]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme6n1 /dev/nvme7n1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
[1038776.054347] md/raid1:md1: not clean -- starting background reconstruction
[1038776.083760] md/raid1:md1: active with 2 out of 2 mirrors
[1038776.109884] md1: detected capacity change from 0 to 31438848
mdadm: array /dev/md1 started.
[1038776.146445] md: resync of RAID array md1
Note
By default, RAID array metadata is placed at the beginning of the disk (the v1.x format). Some older or specially configured bootloaders (such as GRUB) may not recognize a disk with RAID metadata at the start, which means the system could fail to boot from that array.
✅ Does this affect you?
If you only use the RAID to store data (e.g. /data, /home) rather than the /boot partition:
💡 You can ignore this warning entirely and create the array as usual.
If you plan to store /boot on the RAID array (uncommon, and more complex):
🔧 Then use the --metadata=0.90 option, which writes the metadata at the end of the disk so that most bootloaders can recognize it (see the sketch after this note).
If neither case applies, you can simply ignore the following prompt:
mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90
Continue creating array? yes
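If you really do need /boot on the array, a minimal sketch of the alternative create command (assuming the same two member disks as above):
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/nvme6n1 /dev/nvme7n1
With --metadata=0.90 the superblock is stored near the end of the device, which legacy bootloaders handle better; the 0.90 format has its own limitations (for example on component device size), so only use it when you actually need it.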
View the RAID details
[root@ip-127-0-0-1 data]# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Apr 29 08:28:36 2025
        Raid Level : raid1
        Array Size : 15719424 (14.99 GiB 16.10 GB)
     Used Dev Size : 15719424 (14.99 GiB 16.10 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Apr 29 08:29:21 2025
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 40% complete

              Name : 1
              UUID : e679c81e:f591945a:63eae942:ea89c711
            Events : 6

    Number   Major   Minor   RaidDevice State
       0     259        9        0      active sync   /dev/sdg
       1     259       10        1      active sync   /dev/sdh
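The Resync Status line shows the initial mirror synchronization is still running in the background; the array is already usable. If you want to block until it finishes (for example in a provisioning script), a minimal sketch:
mdadm --wait /dev/md1
cat /proc/mdstat
mdadm --wait returns once any resync/recovery activity on the array has completed.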
Configure automatic mounting (persistent)
Format the RAID1 array
[root@ip-127-0-0-1 data]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=16, agsize=245616 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=3929856, imaxpct=25
         =                       sunit=1      swidth=1 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Update the mdadm configuration
[root@ip-127-0-0-1 data]# mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
ARRAY /dev/md1 metadata=1.2 name=1 UUID=e679c81e:f591945a:63eae942:ea89c711
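Optionally, on dracut-based distributions (Amazon Linux, RHEL and derivatives) you can also rebuild the initramfs after updating mdadm.conf so the array is assembled consistently at early boot; this step is not in the original session and is not strictly required for a pure data array:
sudo dracut --force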
Update the /etc/fstab file
[root@ip-127-0-0-1 data]# blkid /dev/md1
/dev/md1: UUID="8eb49af3-bfe2-4fff-a70c-2c6b2bb378d5" BLOCK_SIZE="512" TYPE="xfs"
[root@ip-172-31-26-146 data]# tail -1 /etc/fstab
UUID=8eb49af3-bfe2-4fff-a70c-2c6b2bb378d5 /data/raid-storge/ xfs defaults,nofail 0 0
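Before running mount -a, make sure the mount point directory exists (create it if it has not been created yet):
mkdir -p /data/raid-storge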
Mount
[root@ip-127-0-0-1 data]# mount -a
[1038973.863330] XFS (md1): Mounting V5 Filesystem
[1038973.918334] XFS (md1): Ending clean mount
Check the size at the mounted path
[root@ip-172-31-26-146 data]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 15G 140M 15G 1% /data/raid-storge
Expanding the RAID1 size
Check the disks after they have been enlarged
[root@ip-127-0-0-1 data]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme6n1 259:9 0 25G 0 disk
└─md1 9:1 0 15G 0 raid1 /data/raid-storge
nvme7n1 259:10 0 25G 0 disk
└─md1 9:1 0 15G 0 raid1 /data/raid-storge
Create some test data and check whether the disks are partitioned
[root@ip-127-0-0-1 data]# du -sh ./*
1.7G ./raid-storge
[root@ip-127-0-0-1 data]# fdisk -l /dev/nvme6n1
Disk /dev/nvme6n1: 25 GiB, 26843545600 bytes, 52428800 sectors
Disk model: Amazon Elastic Block Store
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
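Because whole disks (not partitions) are the RAID members, there is no partition table to grow; the kernel just needs to see the new device size. As an optional extra check, the raw size in bytes can be read directly (using blockdev from util-linux):
blockdev --getsize64 /dev/nvme6n1
blockdev --getsize64 /dev/nvme7n1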
Check that both disks are online
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 nvme7n1[1] nvme6n1[0]
      15719424 blocks super 1.2 [2/2] [UU]

unused devices: <none>
md1: the name of your RAID device;
raid1: the RAID level;
nvme6n1 and nvme7n1: the two member disks, with member numbers [0] and [1];
[2/2]: 2 disks are expected and 2 are actually online;
[UU]:
Each U means a member disk is "Up" (online);
If you see [U_], one disk has dropped out or is resyncing (see the sketch below for how to recover).
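If a member ever shows as missing ([U_]), a minimal recovery sketch (the device name below is hypothetical; confirm the failed member with mdadm --detail first):
mdadm --detail /dev/md1
mdadm /dev/md1 --re-add /dev/nvme7n1   # re-add a member that temporarily dropped out
mdadm /dev/md1 --add /dev/nvme7n1      # or add a replacement disk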
Grow the RAID1 array online
[root@ip-127-0-0-1 data]# mdadm --grow /dev/md1 --size=max
[1039345.925628] md1: detected capacity change from 31438848 to 52410368
mdadm: component size of /dev/md1 has been set to 26205184K
[1039345.952526] md: resync of RAID array md1
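Growing the array triggers a resync of the newly added space; you can watch or wait for it the same way as after creation:
cat /proc/mdstat
mdadm --wait /dev/md1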
Grow the mounted filesystem to use the expanded space
[root@ip-127-0-0-1 data]# xfs_growfs /data/raid-storge/
meta-data=/dev/md1               isize=512    agcount=16, agsize=245616 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=3929856, imaxpct=25
         =                       sunit=1      swidth=1 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 3929856 to 6551296
Verify that the expansion took effect
[root@ip-127-0-0-1 data]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 25G 1.9G 24G 8% /data/raid-storge
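Note that xfs_growfs is XFS-specific (and grows the mounted filesystem online). If the array had been formatted with ext4 instead, the rough equivalent would be:
resize2fs /dev/md1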