RAID Disk Arrays
Purpose: improve disk read/write performance and improve data reliability (backup, redundancy).
Common RAID schemes: RAID0, RAID1, RAID5, RAID10
RAID0: at least two disks of the same size. Data is split into chunks and written across the member disks in parallel; with two disks, each disk stores half of the data.
RAID1: at least two disks of the same size. Data is copied onto every disk (also called mirroring), so each disk holds a complete copy of the data.
RAID10: at least four disks. The disks are first paired into RAID1 mirrors, and those mirrors are then striped together as RAID0.
RAID5: at least three disks. Parity is distributed across the disks, so when one disk fails its data can be rebuilt from the parity on the remaining disks.
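As a quick way to compare the levels, here is a rough capacity sketch (plain shell arithmetic, assuming N identical disks of S GB each; the values are illustrative only):
# Usable capacity per level for N disks of S GB each (illustrative values)
N=4; S=1
echo "RAID0 : $((N * S)) GB usable, no redundancy"
echo "RAID1 : $((S)) GB usable, every disk is a full copy"
echo "RAID5 : $(((N - 1) * S)) GB usable, one disk's worth of parity"
echo "RAID10: $((N * S / 2)) GB usable, half lost to mirroring"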
RAID0
First, add two disks to the virtual machine.
![image-20211003120948147](C:\Users\Jiang Wenbo\AppData\Roaming\Typora\typora-user-images\image-20211003120948147.png)
Then install the mdadm disk-management tool and run mdadm -Ds to check for existing RAID arrays; no output means no array is currently configured.
yum install mdadm
[root@localhost ~]# mdadm -Ds
[root@localhost ~]#
List all current disks:
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 1G 0 disk
sdc 8:32 0 1G 0 disk
sr0 11:0 1 4.2G 0 rom
[root@localhost ~]#
Then use the newly added disks (sdb, sdc) to create a RAID0 array.
# Create the RAID, format it, and mount it (persistent mount at boot / autofs)
[root@localhost ~]# mdadm -Cv /dev/md0 -l 0 -n 2 /dev/sd[b-c]
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]#
Option breakdown:
-C: create a new array device
-v: verbose output
-l 0: set the RAID level to RAID0
-n 2: number of member devices is 2
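For reference, the same create command can also be spelled with mdadm's long options, which are easier to read back later; this should be equivalent to the short form above:
# Long-option form of the create command above
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc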
Use mdadm -Ds to check whether the array was created successfully, and lsblk to view the current disks.
[root@localhost ~]# mdadm -Ds
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=9747d0e6:2e27a138:d1d9ed7a:dd519c72
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 1G 0 disk
└─md0 9:0 0 2G 0 raid0
sdc 8:32 0 1G 0 disk
└─md0 9:0 0 2G 0 raid0
sr0 11:0 1 4.2G 0 rom
[root@localhost ~]#
You can also use mdadm -D /dev/md0 to view the member disks of /dev/md0.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Oct 3 00:17:40 2021
Raid Level : raid0
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Oct 3 00:17:40 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Consistency Policy : none
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 9747d0e6:2e27a138:d1d9ed7a:dd519c72
Events : 0
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
[root@localhost ~]#
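To have the array assembled under the same name after a reboot, the scan output can be appended to mdadm's config file; /etc/mdadm.conf is the usual path on CentOS 7, but check your distribution:
# Save the array definition so it is re-assembled automatically at boot
mdadm -Ds >> /etc/mdadm.conf
cat /etc/mdadm.conf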
Format /dev/md0 and mount it on the /raid0 directory:
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0 isize=512 agcount=8, agsize=65408 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=523264, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@localhost ~]# mkdir -p /raid0
[root@localhost ~]# mount /dev/md0 /raid0/
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 1.1G 16G 7% /
devtmpfs 476M 0 476M 0% /dev
tmpfs 488M 0 488M 0% /dev/shm
tmpfs 488M 7.7M 480M 2% /run
tmpfs 488M 0 488M 0% /sys/fs/cgroup
/dev/sda1 1014M 130M 885M 13% /boot
tmpfs 98M 0 98M 0% /run/user/0
/dev/md0 2.0G 33M 2.0G 2% /raid0
[root@localhost ~]#
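To see the striping benefit mentioned at the top of this note, a very rough sequential-write test can be run against the mounted array (the file name and size here are arbitrary):
# Rough sequential-write test; oflag=direct bypasses the page cache
dd if=/dev/zero of=/raid0/testfile bs=1M count=512 oflag=direct
rm -f /raid0/testfile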
Configure automatic mounting at boot:
[root@localhost ~]# blkid /dev/md0
/dev/md0: UUID="88b47199-b888-4827-b0c4-27142f2edebd" TYPE="xfs"
[root@localhost ~]# echo "UUID=88b47199-b888-4827-b0c4-27142f2edebd /raid0 xfs defaults 0 0" >> /etc/fstab
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Oct 1 16:57:00 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=3ad51c4e-c384-4b6a-8d69-c86a5e849449 /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
UUID=88b47199-b888-4827-b0c4-27142f2edebd /raid0 xfs defaults 0 0
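Before relying on the entry at boot, it is worth confirming that it mounts cleanly; mount -a mounts everything listed in fstab that is not already mounted, so an error here means the entry is wrong:
# Verify the new fstab entry without rebooting
umount /raid0
mount -a
df -h /raid0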
Deleting the RAID array
[root@localhost ~]# umount /raid0/ // unmount /raid0
[root@localhost ~]# mdadm -S /dev/md0 // -S stops md0
mdadm: stopped /dev/md0
[root@localhost ~]# rm -rf /raid0/ // remove the mount directory
[root@localhost ~]# mdadm --zero-superblock /dev/sdb // wipe the RAID superblock from the member disk
[root@localhost ~]# mdadm --zero-superblock /dev/sdc // wipe the RAID superblock from the member disk
[root@localhost ~]# vim /etc/fstab
UUID=88b47199-b888-4827-b0c4-27142f2edebd /raid0 xfs defaults 0 0
Delete the fstab entry we just added.
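If you prefer not to open vim, the entry can also be removed non-interactively; this sed sketch assumes /raid0 only appears in the line we added:
# Remove the /raid0 entry from fstab (keep a backup first)
cp /etc/fstab /etc/fstab.bak
sed -i '/\/raid0/d' /etc/fstab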
RAID5
Add four new 1 GB disks.
![image-20211003165508032](C:\Users\Jiang Wenbo\AppData\Roaming\Typora\typora-user-images\image-20211003165508032.png)
Create the RAID5 array, using three disks as data disks and the fourth as a hot spare.
[root@localhost ~]# mdadm -Cv /dev/md5 -l 5 -n 3 /dev/sd[b-d] --spare-devices=1 /dev/sde
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 1046528K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Oct 3 05:01:05 2021
Raid Level : raid5
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Oct 3 05:01:11 2021
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : fc354fd7:c839fb99:98b2cea5:532edc0d
Events : 18
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
4 8 48 2 active sync /dev/sdd
3 8 64 - spare /dev/sde
[root@localhost ~]#
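Besides mdadm -D, the kernel's /proc/mdstat gives a compact live view of every md array, which is handy for watching a resync or rebuild:
# Compact status of all md arrays; wrap it in watch to follow a rebuild
cat /proc/mdstat
watch -n 1 cat /proc/mdstat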
Simulate a failed disk:
[root@localhost ~]# mdadm -f /dev/md5 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Oct 3 05:01:05 2021
Raid Level : raid5
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Oct 3 05:03:28 2021
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : fc354fd7:c839fb99:98b2cea5:532edc0d
Events : 37
Number Major Minor RaidDevice State
3 8 64 0 active sync /dev/sde
1 8 32 1 active sync /dev/sdc
4 8 48 2 active sync /dev/sdd
0 8 16 - faulty /dev/sdb
[root@localhost ~]# mdadm -r /dev/md5 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md5
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Oct 3 05:01:05 2021
Raid Level : raid5
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sun Oct 3 05:07:29 2021
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : fc354fd7:c839fb99:98b2cea5:532edc0d
Events : 40
Number Major Minor RaidDevice State
3 8 64 0 active sync /dev/sde
1 8 32 1 active sync /dev/sdc
- 0 0 2 removed
4 8 48 - faulty /dev/sdd
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 1G 0 disk
sdc 8:32 0 1G 0 disk
└─md5 9:5 0 2G 0 raid5
sdd 8:48 0 1G 0 disk
└─md5 9:5 0 2G 0 raid5
sde 8:64 0 1G 0 disk
└─md5 9:5 0 2G 0 raid5
sr0 11:0 1 4.2G 0 rom /opt/centos
[root@localhost ~]#
Re-add the removed disk to the array; mdadm treats it as a spare and immediately starts rebuilding its data from parity:
[root@localhost ~]# mdadm -a /dev/md5 /dev/sdb
mdadm: added /dev/sdb
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Oct 3 05:01:05 2021
Raid Level : raid5
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Oct 3 05:08:48 2021
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Rebuild Status : 80% complete
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : fc354fd7:c839fb99:98b2cea5:532edc0d
Events : 54
Number Major Minor RaidDevice State
3 8 64 0 active sync /dev/sde
1 8 32 1 active sync /dev/sdc
5 8 16 2 spare rebuilding /dev/sdb
4 8 48 - faulty /dev/sdd
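Once the rebuild reaches 100%, sdb should change from spare rebuilding to active sync and the array should leave the degraded state; the faulty sdd can then be removed and replaced the same way sdb was. Either command below confirms the final state:
# Confirm the rebuild finished and the array is clean again
cat /proc/mdstat
mdadm -D /dev/md5 | grep -E 'State|Rebuild'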