Ceph Distributed Storage Solution
Minimal lab environment for this example
Hostname | IP address | Specs (each node has an extra data disk) |
---|---|---|
ceph1 | 192.168.3.160 | 1 CPU, 1 core, 1 GiB RAM, 20 GiB + 20 GiB |
ceph2 | 192.168.3.161 | 1 CPU, 1 core, 1 GiB RAM, 20 GiB + 20 GiB |
ceph3 | 192.168.3.162 | 1 CPU, 1 core, 1 GiB RAM, 20 GiB + 20 GiB |
client | 192.168.3.50 | Ceph client (the machine that consumes Ceph) |
System setup
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -ri "s,^(SELINUX=).*,\1disabled," /etc/selinux/config # permanent
setenforce 0 # temporary
This document uses Ansible throughout.
[root@localhost ceph]# cat inventory
[ceph]
ceph1 ansible_host=192.168.3.160
ceph2 ansible_host=192.168.3.161
ceph3 ansible_host=192.168.3.162

[all:vars]
ansible_ssh_user=root
ansible_ssh_pass=root
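Before running any playbooks, it is worth confirming that Ansible can reach all three nodes. A quick ad-hoc check against the inventory shown above (password-based SSH also needs sshpass installed on the control node):

ansible -i inventory ceph -m ping   # every host should answer "pong"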
Configure name resolution
---
- name: modify hosts
  hosts: all
  tasks:
    - name: add hosts
      blockinfile:
        path: /etc/hosts
        block: |
          192.168.3.160 ceph1
          192.168.3.161 ceph2
          192.168.3.162 ceph3
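A playbook like this is applied with ansible-playbook; the filename hosts.yml is only an assumption for illustration:

ansible-playbook -i inventory hosts.yml   # append the three host entries to /etc/hosts on every node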
Configure time synchronization
---
- name: install chrony
  hosts: all
  tasks:
    - name: Install the time service
      yum:
        name: chrony
        state: present
    - name: Configure the master
      block:
        - name: Allow hosts on this subnet to sync time from me
          lineinfile:
            path: /etc/chrony.conf
            regexp: '^allow '
            line: 'allow 192.168.3.0/24'
        - name: Even if my clock is wrong, still serve time; being consistently wrong is good enough
          lineinfile:
            path: /etc/chrony.conf
            regexp: '^local stratum '
            line: 'local stratum 10'
      when: inventory_hostname == 'ceph1'
    - name: Configure the nodes
      block:
        - name: Point the pool directive at ceph1
          lineinfile:
            path: /etc/chrony.conf
            regexp: '^pool '
            line: 'pool 192.168.3.160 iburst'
      when: inventory_hostname != 'ceph1'
    - name: Start the time service
      systemd:
        name: chronyd
        enabled: yes
        state: restarted
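After the playbook has run, time sync can be checked with chronyc; for example, an ad-hoc call across the same inventory:

ansible -i inventory ceph -m shell -a "chronyc sources"   # ceph2/ceph3 should list 192.168.3.160 as their source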
Install Docker, using the Alibaba Cloud mirror
---
- name: Install Docker and related dependencies
  hosts: all
  tasks:
    - name: Install yum-utils
      ansible.builtin.yum:
        name: yum-utils
        state: present
    - name: Add the Alibaba Cloud repository
      ansible.builtin.command:
        cmd: yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
      args:
        creates: /etc/yum.repos.d/docker-ce.repo
    - name: Install packages
      ansible.builtin.yum:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-buildx-plugin
          - docker-compose-plugin
          - python3
          - lvm2
        state: present
    - name: Create the Docker configuration directory
      ansible.builtin.file:
        path: /etc/docker
        state: directory
        mode: '0755'
    - name: Create an empty daemon.json file
      ansible.builtin.file:
        path: /etc/docker/daemon.json
        state: touch
        mode: '0644'
    - name: Configure the Docker registry mirror
      ansible.builtin.copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "registry-mirrors": ["https://pacgppmp.mirror.aliyuncs.com"],
            "live-restore": true
          }
    - name: Start and enable the Docker service
      ansible.builtin.systemd:
        name: docker
        state: started
        enabled: true
    - name: Check the Docker version
      ansible.builtin.shell: docker --version
      register: docker_version
      changed_when: false
    - name: Show the Docker version information
      ansible.builtin.debug:
        var: docker_version.stdout
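As before, the play can be applied with ansible-playbook (docker.yml is a hypothetical filename), followed by a quick service check:

ansible-playbook -i inventory docker.yml
ansible -i inventory ceph -m shell -a "systemctl is-active docker"   # should print "active" on every node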
Note: blockinfile adds marker comments to the managed file; copy does not.
Install Ceph
curl -o cephadm https://download.ceph.com/rpm-18.2.1/el8/noarch/cephadm
chmod +x cephadm # make it executable
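Optionally, sanity-check the downloaded script before bootstrapping; with Docker already running, this pulls the default container image and prints the Ceph version:

./cephadm version   # expect a Reef (18.x) build to be reported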
Initialize the Ceph cluster on ceph1
./cephadm bootstrap --mon-ip 192.168.3.160 --initial-dashboard-password=123456 --dashboard-password-noupdate
Output on success
Ceph Dashboard is now available at:
    URL: https://ceph1:8443/
    User: admin
    Password: 123456
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/4a139fb0-45c0-11f0-b820-000c2989a82e/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
    sudo ./cephadm shell --fsid 4a139fb0-45c0-11f0-b820-000c2989a82e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
    sudo ./cephadm shell
Please consider enabling telemetry to help improve Ceph:
    ceph telemetry on
For more information see:
    https://docs.ceph.com/en/latest/mgr/telemetry/
Bootstrap complete.
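Before adding more nodes, the cluster state can be checked from ceph1; until OSDs are added, the status will show HEALTH_WARN:

./cephadm shell -- ceph -s   # run ceph -s inside the cephadm container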
Add nodes
Copy the key file to the other nodes
[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub ceph2
[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub ceph3
[root@ceph1 ~]# ./cephadm shell # enter the container
[ceph: root@ceph1 /]# ceph orch host add ceph2 192.168.3.161 # add another node
Added host 'ceph2' with addr '192.168.3.161'
[ceph: root@ceph1 /]# ceph orch host add ceph3 192.168.3.162 # add another node
Added host 'ceph3' with addr '192.168.3.162'
[ceph: root@ceph1 /]# ceph orch host ls # list all hosts in the cluster
HOST ADDR LABELS STATUS
ceph1 192.168.3.160 _admin
ceph2 192.168.3.161
ceph3 192.168.3.162
3 hosts in cluster
Deploy the cluster monitors (an odd number to avoid split-brain; usually at least 3)
ceph orch apply mon --placement="3 ceph1 ceph2 ceph3"
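The block-storage section below assumes each node's spare disk has already become an OSD. Assuming the spare disk appears as /dev/nvme0n2 on every node (the device name used in the command list below), the OSDs can be created from inside the cephadm shell like this:

ceph orch daemon add osd ceph1:/dev/nvme0n2
ceph orch daemon add osd ceph2:/dev/nvme0n2
ceph orch daemon add osd ceph3:/dev/nvme0n2
ceph osd tree   # all three OSDs should show up and be "up/in"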
Common commands
./cephadm shell # enter the container
ceph orch host add ceph2 192.168.3.161 # add another node
ceph orch host ls # list all hosts in the cluster
ceph orch apply mon --placement="3 ceph1 ceph2 ceph3" # deploy the monitor service
ceph mon stat # check monitor status
ceph orch ls # list all orchestrator services
ceph orch daemon add osd ceph1:/dev/nvme0n2 # create and start an OSD instance on a physical disk
ceph -s # check cluster status
Block storage
Provides a device that behaves like a hard disk (partition - format - mount).
OSD (disk) --- pool (container) --- RBD --- image --- client (partition -- format -- mount)
[ceph: root@ceph1 /]# ceph osd lspools # list the default pools
1 .mgr # used by the Ceph Manager (MGR) service to store metadata
[ceph: root@ceph1 /]# ceph df # show detailed storage usage
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 60 GiB 60 GiB 81 MiB 81 MiB 0.13
TOTAL 60 GiB 60 GiB 81 MiB 81 MiB 0.13
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 449 KiB 2 1.3 MiB 0 19 GiB
[ceph: root@ceph1 /]# ceph osd pool get .mgr size # check the replica count of the .mgr pool
size: 3 # 3 replicas by default
Create a pool named rbd (rbd is the default pool used by RBD commands)
PG count = (number of OSDs × replica count) / 3; here that is (3 × 3) / 3 = 3, which is why the pool below is created with 3 PGs
[ceph: root@ceph1 /]# ceph osd pool create rbd 3
pool 'rbd' created
[ceph: root@ceph1 /]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
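The pool settings can be confirmed with:

ceph osd pool ls detail   # shows pg_num, replica size and the enabled application of each pool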
Images
[ceph: root@ceph1 /]# rbd create img1 --size 10G # create an image named img1, 10 GB in size
[ceph: root@ceph1 /]# rbd ls # list images
img1
[ceph: root@ceph1 /]# rbd info --pool rbd img1 # show details of the img1 image in the rbd pool
rbd image 'img1':
    size 10 GiB in 2560 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 5e4cba182925
    block_name_prefix: rbd_data.5e4cba182925
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Tue Jun 10 07:20:12 2025
    access_timestamp: Tue Jun 10 07:20:12 2025
    modify_timestamp: Tue Jun 10 07:20:12 2025
Resize an image
rbd resize img1 --size 15G # resizes to 15 G total, not by an additional 15 G
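Note that resizing the image does not grow a filesystem that already exists on it. If the image is mapped and formatted on a client (as in the ext4 example later in this document), the filesystem has to be enlarged separately, roughly like this:

rbd resize img1 --size 15G   # grow the image on the cluster side
resize2fs /dev/rbd0          # then grow the ext4 filesystem on the client (use xfs_growfs for XFS)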
Ceph client
Configure yum repositories, using the Tsinghua University open-source mirror
yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
cat >> /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name = Rocky Linux - ceph
baseurl = https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-18.2.1/el8/x86_64
gpgcheck = 1
gpgkey = https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
EOF
yum -y install ceph-common
Copy the configuration file and the keyring file from ceph1 to the client
[root@ceph1 ~]# scp /etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.conf 192.168.3.50:/etc/ceph/
[root@localhost ~]# ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap
Verify on the client that Ceph can be operated
[root@localhost ~]# rbd ls
img1
[root@localhost ~]# rbd map img1 # map the Ceph image to a local block device
/dev/rbd0
[root@localhost ~]# rbd showmapped # confirm the mapping status
id pool namespace image snap device
0 rbd img1 - /dev/rbd0
Format
mkfs.ext4 /dev/rbd0
Create a mount point and mount
[root@localhost ~]# mkdir /data
[root@localhost ~]# mount /dev/rbd0 /data/
[root@localhost ~]# df -h /data/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0 4.9G 24K 4.6G 1% /data
Mount automatically at boot
cat >> /etc/fstab <<EOF
/dev/rbd0 /data ext4 defaults 0 0
EOF
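A plain /dev/rbd0 entry only mounts at boot if something maps the image first. ceph-common ships an rbdmap service for exactly that; a sketch of the usual setup (entry format as in the rbdmap man page, using the admin keyring copied earlier):

# List the image in /etc/ceph/rbdmap so it is mapped at boot
echo "rbd/img1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
systemctl enable rbdmap

# Use the persistent device path and _netdev so the mount waits for the network
echo "/dev/rbd/rbd/img1 /data ext4 defaults,_netdev 0 0" >> /etc/fstab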
Create a 2 GB file to test
dd if=/dev/zero of=/data/2G_file bs=1G count=2
Delete
rbd status img1 # check status
umount /dev/rbd0 # unmount
rbd unmap img1
rbd rm img1
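If the pool itself is no longer needed it can also be removed; pool deletion is disabled by default and must be allowed first:

ceph config set mon mon_allow_pool_delete true
ceph osd pool rm rbd rbd --yes-i-really-really-mean-it   # the pool name has to be given twice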