OpenStack Study Notes (3): Managing the Core Storage and Compute Components
OpenStack Management
OpenStack Block Storage Management: Cinder
Cinder configuration files: /etc/cinder
Cinder log files: /var/log/cinder
[root@controller ~]# cd /etc/cinder/
[root@controller cinder]# ls
api-paste.ini  cinder.conf  resource_filters.json  rootwrap.conf  rootwrap.d  volumes
[root@controller cinder]# cd /var/log/cinder/
[root@controller cinder]# ls
api.log backup.log cinder-manage.log scheduler.log volume.log
OpenStack storage types
What Cinder does
- Cinder introduces a layer of "logical storage volume" abstraction between virtual machines and the concrete storage devices. Cinder itself is not a storage technology and does not implement the actual management of, or services for, block devices.
- Cinder only provides an intermediate abstraction layer that exposes a unified interface over the different backend storage technologies.
- Block-storage vendors integrate with OpenStack by implementing this interface in Cinder in the form of drivers.
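The driver model above can be sketched in a few lines of Python. This is an illustrative simplification with hypothetical names, not the real `cinder.volume.driver` API: Cinder defines one abstract interface, and each vendor plugs in a driver that implements it.

```python
from abc import ABC, abstractmethod

class VolumeDriver(ABC):
    """Unified block-storage interface that backend drivers implement."""
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> dict: ...

class LVMDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        # A real driver would shell out to lvcreate here.
        return {"name": name, "size": size_gb, "backend": "lvm"}

class RBDDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        # A real driver would talk to Ceph through librbd here.
        return {"name": name, "size": size_gb, "backend": "rbd"}

def provision(driver: VolumeDriver, name: str, size_gb: int) -> dict:
    # Callers see only the unified interface, never the backend details.
    return driver.create_volume(name, size_gb)
```

Swapping the backend means swapping the driver object; the calling code does not change, which is exactly the point of the abstraction layer.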
Cinder Architecture
View the cinder processes on the controller node:
[root@controller ~]# ps -e | grep cinder
10529 ?        00:00:07 cinder-api
10531 ?        00:00:06 cinder-backup
10539 ?        00:00:05 cinder-schedule
10551 ?        00:00:01 cinder-backup
10558 ?        00:00:01 cinder-api
10559 ?        00:00:00 cinder-api
10560 ?        00:00:00 cinder-api
10561 ?        00:00:00 cinder-api
10582 ?        00:00:06 cinder-volume
10595 ?        00:00:02 cinder-volume
View the storage drivers shipped with cinder-volume:
[root@controller ~]# cd /usr/lib/python3.6/site-packages/cinder/volume/drivers/
[root@controller drivers]# ls
datera ibm lvm.py pure.py sandstone vmware
dell_emc infinidat.py macrosan __pycache__ solidfire.py vzstorage.py
fujitsu infortrend nec qnap.py spdk.py windows
fusionstorage __init__.py netapp quobyte.py storpool.py zadara.py
hedvig inspur nexenta rbd.py stx
hitachi kaminario nfs.py remotefs.py synology
hpe lenovo nimble.py rsd.py veritas_access
huawei linstordrv.py prophetstor san veritas_cnfs.py
Cinder architecture notes
[root@controller ~]# vgdisplay cinder-volumes
  Configuration setting "snapshot_autoextend_percent" invalid. It's not part of any section.
  Configuration setting "snapshot_autoextend_threshold" invalid. It's not part of any section.
  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <20.60 GiB
  PE Size               4.00 MiB
  Total PE              5273
  Alloc PE / Size       5020 / <19.61 GiB
  Free  PE / Size       253 / 1012.00 MiB
  VG UUID               G5HKmb-34F6-57AN-3AT5-2GwJ-2RUg-1Ow8N5
Cinder deployment: a SAN-backed example
- cinder-api, cinder-scheduler and cinder-volume can be deployed on a single node or on separate nodes.
- The API tier runs active-active, with HAProxy as the load balancer distributing requests across the cinder-api instances.
- The scheduler also runs active-active: RabbitMQ distributes tasks across the three nodes in load-balancing fashion, while the schedulers collect the capability reports that cinder-volume publishes to RabbitMQ; during scheduling, a scheduler reserves resources in the DB to guarantee consistency.
- cinder-volume runs active-active as well: the instances report the capacity and capabilities of the same backend and accept and process requests concurrently.
- RabbitMQ supports active/standby or cluster deployment.
- MySQL supports active/standby or cluster deployment.
Cinder API
- Validates the request parameters (user input, permissions, whether resources exist, and so on)
- Prepares the creation parameter dict, then reserves and commits quota
- Creates the corresponding record in the database
- Sends the request and its parameters to the scheduler via the message queue
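The four steps above can be sketched as a single function. This is a hedged, self-contained model with in-memory stand-ins for the database, the quota table and the message queue; none of the names below come from the real cinder-api code.

```python
class QuotaExceeded(Exception):
    pass

def create_volume_api(state, user, size_gb):
    # 1. Validate the request parameters.
    if size_gb <= 0:
        raise ValueError("invalid size")
    # 2. Reserve quota for the new volume.
    used = state["quota_used"].get(user, 0)
    if used + size_gb > state["quota_limit"]:
        raise QuotaExceeded(user)
    state["quota_used"][user] = used + size_gb
    # 3. Create the database record in "creating" state.
    record = {"id": len(state["db"]) + 1, "size": size_gb, "status": "creating"}
    state["db"].append(record)
    # 4. Hand the request off to the scheduler via the message queue (async).
    state["mq"].append(("scheduler", "create_volume", record["id"]))
    return record

state = {"quota_limit": 10, "quota_used": {}, "db": [], "mq": []}
```

Note that the API returns while the record is still "creating": the actual work happens asynchronously after the scheduler picks up the queued message.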
Check the cinder API service status:
[root@controller ~]# systemctl status openstack-cinder-api.service
● openstack-cinder-api.service - OpenStack Cinder API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-api.service; enabled; vendor pres>
   Active: active (running) since Wed 2025-09-17 09:44:19 CST; 10min ago
 Main PID: 10529 (cinder-api)
    Tasks: 5 (limit: 23002)
   Memory: 266.9M
   CGroup: /system.slice/openstack-cinder-api.service
           ├─10529 /usr/bin/python3 /usr/bin/cinder-api --config-file /usr/share/cinder/cinde>
           ├─10558 /usr/bin/python3 /usr/bin/cinder-api --config-file /usr/share/cinder/cinde>
           ├─10559 /usr/bin/python3 /usr/bin/cinder-api --config-file /usr/share/cinder/cinde>
           ├─10560 /usr/bin/python3 /usr/bin/cinder-api --config-file /usr/share/cinder/cinde>
           └─10561 /usr/bin/python3 /usr/bin/cinder-api --config-file /usr/share/cinder/cinde>

Sep 17 09:44:19 controller systemd[1]: Started OpenStack Cinder API Server.
Cinder-scheduler
Like the Nova scheduler, the Cinder scheduler first runs Filters to screen out backends that cannot satisfy the request, then uses Weighers to rank the remaining backends by weight, and finally picks the most suitable backend.
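This filter-then-weigh flow can be simulated in a few lines. The backend data and function names below are assumptions for illustration; the real scheduler uses the filter and weigher classes named in cinder.conf.

```python
BACKENDS = [
    {"name": "lvm-1", "free_gb": 50,  "thin": True},
    {"name": "lvm-2", "free_gb": 5,   "thin": False},
    {"name": "ceph",  "free_gb": 200, "thin": True},
]

def capacity_filter(backend, request):
    # Drop backends without enough free space (CapacityFilter's job).
    return backend["free_gb"] >= request["size_gb"]

def capability_filter(backend, request):
    # Drop backends missing the requested capabilities (CapabilitiesFilter).
    return all(backend.get(k) == v for k, v in request.get("caps", {}).items())

def capacity_weigher(backend):
    # More free space -> higher weight (CapacityWeigher).
    return backend["free_gb"]

def schedule(request):
    survivors = [b for b in BACKENDS
                 if capacity_filter(b, request) and capability_filter(b, request)]
    return max(survivors, key=capacity_weigher)["name"] if survivors else None
```

For example, a 10 GB thin-provisioned request passes both filters on lvm-1 and ceph, and the weigher then prefers ceph for its larger free capacity.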
Inspect the configuration file:
[root@controller ~]# cd /etc/cinder/
[root@controller cinder]# ls
api-paste.ini cinder.conf resource_filters.json rootwrap.conf rootwrap.d volumes
[root@controller cinder]# vim cinder.conf
592 #scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
595 #scheduler_default_weighers = CapacityWeigher
601 # Default scheduler driver to use (string value)
602 #scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
AvailabilityZoneFilter
Storage nodes and compute nodes can be assigned to different availability zones (AZs).
When creating a volume, the user may specify an availability zone; the AvailabilityZoneFilter removes the storage nodes that do not belong to the requested zone.
Experiment:
Edit the Cinder configuration file cinder.conf
[root@controller ~]# cd /etc/cinder/
[root@controller cinder]# ls
api-paste.ini cinder.conf resource_filters.json rootwrap.conf rootwrap.d volumes
[root@controller cinder]# vim cinder.conf
# Set this node's AZ to az1
395 storage_availability_zone=az1
# When a volume is created without an AZ, the nova AZ is used by default
401 default_availability_zone=nova
[root@controller cinder]# systemctl restart openstack-cinder*
Verify the configuration above
# With default_availability_zone=nova (line 401), a volume created without an AZ uses the nova AZ; this node belongs to AZ az1, so creation fails
[root@controller ~]# source keystonerc_admin
[root@controller ~(keystone_admin)]# openstack volume create --size 1 volume1
Availability zone 'nova' is invalid. (HTTP 400) (Request-ID: req-08a53975-36cb-4f90-a023-4b41c45ae660)
# With storage_availability_zone=az1 (line 395), this node belongs to AZ az1; a volume requested in AZ az2 is filtered out by the AvailabilityZoneFilter
[root@controller ~(keystone_admin)]# openstack volume create --size 1 --availability-zone az2 volume1
Availability zone 'az2' is invalid. (HTTP 400) (Request-ID: req-0e67d9ee-cdeb-451e-b39d-633ea26ae828)
# With storage_availability_zone=az1 (line 395), this node belongs to AZ az1; creating a volume in AZ az1 succeeds
[root@controller ~(keystone_admin)]# openstack volume create --size 1 --availability-zone az1 volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | az1 |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2025-09-17T01:48:02.438752 |
| description | None |
| encrypted | False |
| id | ab2ec798-5a3b-4a24-b0d9-492ed266e102 |
| migration_status | None |
| multiattach | False |
| name | volume1 |
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | iscsi |
| updated_at | None |
| user_id | af704f24dc304c09a051f19c9f4d4efe |
+---------------------+--------------------------------------+
CapacityFilter
When creating a volume, the user specifies its size. The CapacityFilter removes the storage nodes whose free space cannot satisfy the requested volume size.
CapabilitiesFilter
Different volume providers have their own capabilities, such as thin provisioning support. Cinder lets users request the required capabilities through a Volume Type when creating a volume.
OpenStack Object Storage: Swift
Swift's role in OpenStack
- Swift is not a file system or a real-time data storage system. It is object storage, intended for long-term storage of permanent, static data that can be retrieved, adjusted and, when necessary, updated.
- Typical data types to store in Swift are virtual machine images, pictures, e-mail and archive backups.
- Because there is no central unit or master node, Swift offers stronger scalability, redundancy and durability.
The command swift stat shows information about the accounts, containers and objects in Swift.
Swift defines a Ring for accounts, containers and objects respectively (the Account Ring, Container Ring and Object Ring) that maps virtual nodes (partitions) onto a set of physical storage devices.
A ring records the mapping between stored objects and their physical locations, maintaining that mapping through Zones, Devices, Partitions and Replicas.
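The partition mapping idea can be sketched as follows. The MD5-based partition lookup mirrors Swift's approach (top bits of the hash select the partition); the round-robin replica placement is a naive stand-in for the real builder's weight- and zone-aware assignment, and the device names are the two directories used in the lab.

```python
import hashlib

PART_POWER = 12            # 2**12 = 4096 partitions
REPLICAS = 2               # 2 replicas
DEVICES = ["obs1", "obs2"]

def object_partition(account, container, obj):
    key = f"/{account}/{container}/{obj}".encode()
    digest = hashlib.md5(key).digest()
    # Take the top PART_POWER bits of the hash as the partition number.
    return int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)

def partition_devices(part):
    # Place each replica on a different device.
    return [DEVICES[(part + r) % len(DEVICES)] for r in range(REPLICAS)]
```

Because the object name alone determines the partition, any proxy can locate an object's replicas without consulting a central index, which is why Swift needs no master node.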
Huawei OBS hands-on
Download OBS Browser+ from the Huawei Cloud website
Swift lab
Add a new 20 GB disk to the controller node
1. Split the new disk into two partitions and format them
Partition 1: mounted on the obs1 directory
Partition 2: mounted on the obs2 directory
Create the two partitions
[root@controller ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 2G 0 loop /srv/node/swiftloopback
loop1 7:1 0 20.6G 0 loop
├─cinder--volumes-cinder--volumes--pool_tmeta
│ 253:3 0 20M 0 lvm
│ └─cinder--volumes-cinder--volumes--pool
│ 253:5 0 19.6G 0 lvm
└─cinder--volumes-cinder--volumes--pool_tdata
  253:4 0 19.6G 0 lvm
  └─cinder--volumes-cinder--volumes--pool
    253:5 0 19.6G 0 lvm
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 199G 0 part
  ├─cs-root 253:0 0 70G 0 lvm /
  ├─cs-swap 253:1 0 3.9G 0 lvm [SWAP]
  └─cs-home 253:2 0 125.1G 0 lvm /home
sdb 8:16 0 20G 0 disk
sr0 11:0 1 12.8G 0 rom
Format both partitions as xfs
The partition below is the loopback device mounted for Swift by default; unmount it
# Unmount the original Swift loopback device
[root@controller ~]# umount /srv/node/swiftloopback
[root@controller ~]# cd /srv/node/
[root@controller node]# ls
swiftloopback
# Remove the original Swift mount directory
[root@controller node]# rm -rf swiftloopback/
# Create new mount points for sdb1 and sdb2
[root@controller node]# mkdir obs1 obs2
Edit /etc/fstab so that obs1 mounts sdb1 and obs2 mounts sdb2
[root@controller node]# vim /etc/fstab
# Configuration as shown in the figure below
[root@controller node]# mount -a
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.
[root@controller node]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 3904636 0 3904636 0% /dev
tmpfs 3924840 4 3924836 1% /dev/shm
tmpfs 3924840 17620 3907220 1% /run
tmpfs 3924840 0 3924840 0% /sys/fs/cgroup
/dev/mapper/cs-root 73364480 7047288 66317192 10% /
/dev/mapper/cs-home 131081692 946964 130134728 1% /home
/dev/sda1 1038336 234200 804136 23% /boot
tmpfs 784968 0 784968 0% /run/user/0
/dev/sdb1 10475520 106088 10369432 2% /srv/node/obs1
/dev/sdb2 10474496 106088 10368408 2% /srv/node/obs2
Change the ownership of the obs1 and obs2 directories
[root@controller node]# chown swift:swift obs1
[root@controller node]# chown swift:swift obs2
[root@controller node]# ll
total 0
drwxr-xr-x 2 swift swift 6 Sep 17 15:13 obs1
drwxr-xr-x 2 swift swift 6 Sep 17 15:13 obs2
Create the Swift rings:
[root@controller node]# cd /etc/swift/
[root@controller swift]# ls
account.builder container.ring.gz object-server
account.ring.gz container-server object-server.conf
account-server container-server.conf proxy-server
account-server.conf internal-client.conf proxy-server.conf
backups object.builder swift.conf
container.builder object-expirer.conf
container-reconciler.conf object.ring.gz
[root@controller swift]# swift-ring-builder container.builder create 12 2 1
[root@controller swift]# swift-ring-builder account.builder create 12 2 1
[root@controller swift]# swift-ring-builder object.builder create 12 2 1
#12 means the ring has 2^12 = 4096 partitions
#2 means 2 replicas
#1 means the ring configuration cannot be changed again for at least 1 hour
Create the ring device mappings:
[root@controller swift]# cat account-server.conf | grep bind_port
bind_port = 6002
[root@controller swift]# cat container-server.conf | grep bind_port
bind_port = 6001
[root@controller swift]# cat object-server.conf | grep bind_port
bind_port = 6000
[root@controller swift]# swift-ring-builder account.builder add z1-192.168.108.10:6002/obs1 100
WARNING: No region specified for z1-192.168.108.10:6002/obs1. Defaulting to region 1.
Device d0r1z1-192.168.108.10:6002R192.168.108.10:6002/obs1_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder account.builder add z2-192.168.108.10:6002/obs2 100
WARNING: No region specified for z2-192.168.108.10:6002/obs2. Defaulting to region 1.
Device d1r1z2-192.168.108.10:6002R192.168.108.10:6002/obs2_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder container.builder add z1-192.168.108.10:6001/obs1 100
WARNING: No region specified for z1-192.168.108.10:6001/obs1. Defaulting to region 1.
Device d0r1z1-192.168.108.10:6001R192.168.108.10:6001/obs1_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder container.builder add z2-192.168.108.10:6001/obs2 100
WARNING: No region specified for z2-192.168.108.10:6001/obs2. Defaulting to region 1.
Device d1r1z2-192.168.108.10:6001R192.168.108.10:6001/obs2_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder object.builder add z1-192.168.108.10:6000/obs1 100
WARNING: No region specified for z1-192.168.108.10:6000/obs1. Defaulting to region 1.
Device d0r1z1-192.168.108.10:6000R192.168.108.10:6000/obs1_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder object.builder add z2-192.168.108.10:6000/obs2 100
WARNING: No region specified for z2-192.168.108.10:6000/obs2. Defaulting to region 1.
Device d1r1z2-192.168.108.10:6000R192.168.108.10:6000/obs2_"" with 100.0 weight got id 1
[root@controller swift]#
Rebalance:
[root@controller swift]# swift-ring-builder account.builder rebalance
Reassigned 8192 (200.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@controller swift]# swift-ring-builder object.builder rebalance
Reassigned 8192 (200.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@controller swift]# swift-ring-builder container.builder rebalance
Reassigned 8192 (200.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Test:
[root@controller etc]# cd /srv/node/obs1
# The file uploaded earlier has 2 replicas: one copy under obs1 and one under obs2
[root@controller obs1]# find /srv/node/ -name *data
/srv/node/obs1/objects/3664/e7c/e503e7e65052b86208a8d3c100dc1e7c/1758094061.54153.data
/srv/node/obs2/objects/3664/e7c/e503e7e65052b86208a8d3c100dc1e7c/1758094061.54153.data
OpenStack Compute Management
Lab introduction
This lab shows how to manage hypervisors, host aggregates, flavors, key pairs and server groups through both the OpenStack Dashboard and the OpenStack CLI, and finishes with basic instance operations: launching, lifecycle management, snapshots and rebuilds.
Lab workflow
OpenStack Dashboard operations
Hypervisor and host aggregate management
Log in to the OpenStack Dashboard as the admin user. In the left navigation pane, choose "Admin > Compute > Hypervisors" to open the hypervisor summary and review the hypervisor overview and compute node information.
In the left navigation pane, choose "Admin > Compute > Host Aggregates" to open the host aggregate list, then click "Create Host Aggregate" at the top right of the page.
In the Create Host Aggregate dialog, on the "Host Aggregate Information" tab, enter the host aggregate name "HostAggr_web" and the availability zone name "nova".
Switch to the "Manage Hosts within Aggregate" tab, click the button after the available host "controller" so that it appears in the selected list on the right, then click "Create Host Aggregate" to finish creating the aggregate.
Verification:
1. Create host aggregate "HostAggr_web_test" with availability zone "AZ_web" and add host "controller". Does it succeed, i.e. can one host join different AZs?
2. In the "Actions" column of "HostAggr_web_test", click "Edit Host Aggregate", set the availability zone to "nova" and click "Submit". Then open the actions list in the "Actions" column of "HostAggr_web_test", choose "Manage Hosts" and add host "controller". Does it succeed, i.e. can one host join different host aggregates?
In the "Actions" column of host aggregate "HostAggr_web_test", open the actions list and choose "Manage Hosts".
In the Add/Remove Hosts dialog, click the button after the host on the right so that it moves back to the available hosts list on the left, then click "Save".
Back in the host aggregate list, select host aggregate "HostAggr_web_test" and click "Delete Host Aggregate".
Flavor management
In the left navigation pane, choose "Admin > Compute > Flavors" to open the flavor list, then click "Create Flavor" at the top right of the page.
If no project is selected when creating flavor "Flavor_web_test", the flavor is available to all projects by default, i.e. it is "Public".
Verification:
1. After flavor "Flavor_web" is created, does removing the selected projects make the flavor "Public"?
Conclusions:
- Flavor "Flavor_web" is still "Private". Removing all selected projects only makes the flavor unavailable to every project (users with the admin role can see but not use it; other users can neither see nor use it); it does not become "Public".
- Once created, a flavor cannot be modified (including switching between "Public" and "Private"). To change a flavor, create a new one instead.
In the row of the target flavor, choose "Delete Flavor" from the actions list to delete it.
Key pair and server group management
In the left navigation pane, choose "Project > Compute > Key Pairs" to open the key pair list, then click "Create Key Pair" at the top right of the page.
Open the downloaded key pair.
In the left navigation pane, choose "Project > Compute > Server Groups" to open the server group list, then click "Create Server Group" at the top right of the page.
There are four policy types:
Affinity: instances in the same server group must run on the same host.
Anti-affinity: instances in the same server group must not run on the same host.
Soft anti-affinity: if the hosts being scheduled have enough resources, behaves like anti-affinity; if not, the anti-affinity rule is ignored automatically.
Soft affinity: if the hosts being scheduled have enough resources, behaves like affinity; if not, the affinity rule is ignored automatically.
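The four policies above can be sketched as a host-selection function. This is a simplified model with hypothetical names, not the actual nova-scheduler server-group filters (the real soft policies are weighers rather than hard filters).

```python
def allowed_hosts(policy, hosts, group_hosts):
    """hosts: candidate host names; group_hosts: hosts already running
    members of this server group."""
    if policy == "affinity":
        # Must co-locate with existing members (any host if the group is empty).
        return [h for h in hosts if not group_hosts or h in group_hosts]
    if policy == "anti-affinity":
        # Must avoid every host that already runs a group member.
        return [h for h in hosts if h not in group_hosts]
    if policy in ("soft-affinity", "soft-anti-affinity"):
        prefer_in = policy == "soft-affinity"
        preferred = [h for h in hosts if (h in group_hosts) == prefer_in]
        # Soft policies fall back to all candidates when none match.
        return preferred or hosts
    raise ValueError(policy)
```

The difference between hard and soft shows up when the preferred set is empty: a hard policy fails the scheduling request, while a soft policy silently accepts any remaining host.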
Create a network
Create the shared network
Instance operations
Launching an instance
Log in to the OpenStack Dashboard as the admin user. In the left navigation pane, choose "Project > Compute > Instances" to open the instance list, then click "Launch Instance" at the top of the page.
Starting, stopping and rebooting an instance
Stop an instance
Start an instance
Soft reboot and hard reboot an instance
Locking and unlocking an instance
Pausing, suspending and resuming an instance
Pause an instance
Resume an instance
Suspend an instance
Resume an instance
Shelving and unshelving an instance
Shelve an instance
Unshelve an instance
Instance snapshots
Create a snapshot
OpenStack CLI operations
Hypervisor, host aggregate and availability zone management
Log in to the controller remotely.
[root@controller ~]# source keystonerc_admin
List the OpenStack hypervisors
[root@controller ~(keystone_admin)]# openstack hypervisor list --long
+----+---------------------+-----------------+----------------+-------+------------+-------+----------------+-----------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB |
+----+---------------------+-----------------+----------------+-------+------------+-------+----------------+-----------+
| 1 | controller | QEMU | 192.168.108.10 | up | 1 | 4 | 640 | 7665 |
| 2 | compute | QEMU | 192.168.108.11 | up | 0 | 4 | 512 | 3633 |
+----+---------------------+-----------------+----------------+-------+------------+-------+----------------+-----------+
List the OpenStack hosts
[root@controller ~(keystone_admin)]# openstack host list
+------------+-----------+----------+
| Host Name | Service | Zone |
+------------+-----------+----------+
| controller | conductor | internal |
| controller | scheduler | internal |
| controller | compute | nova |
| compute | compute | nova |
+------------+-----------+----------+
Create host aggregate "HostAggr_cli"
[root@controller ~(keystone_admin)]# openstack aggregate create --zone nova HostAggr_cli
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | nova |
| created_at | 2025-09-17T10:36:47.715152 |
| deleted | False |
| deleted_at | None |
| hosts | None |
| id | 4 |
| name | HostAggr_cli |
| properties | None |
| updated_at | None |
+-------------------+----------------------------+
Add host "controller" to host aggregate "HostAggr_cli"
[root@controller ~(keystone_admin)]# openstack aggregate add host HostAggr_cli controller
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | nova |
| created_at | 2025-09-17T10:36:47.000000 |
| deleted | False |
| deleted_at | None |
| hosts | controller |
| id | 4 |
| name | HostAggr_cli |
| properties | availability_zone='nova' |
| updated_at | None |
+-------------------+----------------------------+
Verify whether one host can join different AZs
[root@controller ~(keystone_admin)]# openstack aggregate create --zone AZ_cli HostAggr_cli_test
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | AZ_cli |
| created_at | 2025-09-17T10:38:38.839616 |
| deleted | False |
| deleted_at | None |
| hosts | None |
| id | 5 |
| name | HostAggr_cli_test |
| properties | None |
| updated_at | None |
+-------------------+----------------------------+
[root@controller ~(keystone_admin)]# openstack aggregate add host HostAggr_cli_test controller
Cannot add host to aggregate 5. Reason: One or more hosts already in availability zone(s) ['nova', 'nova']. (HTTP 409) (Request-ID: req-c8a80866-cc0b-4218-bbb4-c9450e6c5327)
Verify whether one host can join different host aggregates
[root@controller ~(keystone_admin)]# openstack aggregate set --zone nova HostAggr_cli_test
[root@controller ~(keystone_admin)]# openstack aggregate show HostAggr_cli_test
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | nova |
| created_at | 2025-09-17T10:38:38.000000 |
| deleted | False |
| deleted_at | None |
| hosts | |
| id | 5 |
| name | HostAggr_cli_test |
| properties | |
| updated_at | None |
+-------------------+----------------------------+
[root@controller ~(keystone_admin)]# openstack aggregate add host HostAggr_cli_test controller
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | nova |
| created_at | 2025-09-17T10:38:38.000000 |
| deleted | False |
| deleted_at | None |
| hosts | controller |
| id | 5 |
| name | HostAggr_cli_test |
| properties | availability_zone='nova' |
| updated_at | None |
+-------------------+----------------------------+
Remove host "controller" from host aggregate "HostAggr_cli_test"
[root@controller ~(keystone_admin)]# openstack aggregate remove host HostAggr_cli_test controller
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | nova |
| created_at | 2025-09-17T10:38:38.000000 |
| deleted | False |
| deleted_at | None |
| hosts | |
| id | 5 |
| name | HostAggr_cli_test |
| properties | availability_zone='nova' |
| updated_at | None |
+-------------------+----------------------------+
Delete host aggregate "HostAggr_cli_test"
[root@controller ~(keystone_admin)]# openstack aggregate delete HostAggr_cli_test
List the host aggregates
[root@controller ~(keystone_admin)]# openstack aggregate list
+----+--------------+-------------------+
| ID | Name | Availability Zone |
+----+--------------+-------------------+
| 1 | HostAggr_web | nova |
| 4 | HostAggr_cli | nova |
+----+--------------+-------------------+
Flavor management
Step 1: Create flavor "Flavor_cli" with the following settings:
VCPUs: the flavor's vCPU count, e.g. "1".
RAM (MB): the flavor's RAM size, e.g. "128".
Root Disk (GB): the flavor's root disk size, e.g. "1".
The flavor is visible only to project "Project_cli".
Keep the other settings at their defaults.
[root@controller ~(keystone_admin)]# openstack flavor create --vcpus 1 --ram 128 --disk 1 --private --project Project_cli Flavor_cli
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 9556fa00-b9b2-4567-9463-09334db17601 |
| name | Flavor_cli |
| os-flavor-access:is_public | False |
| properties | |
| ram | 128 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+
Remove flavor "Flavor_cli"'s visibility to project "Project_cli"
[root@controller ~(keystone_admin)]# openstack flavor unset --project Project_cli Flavor_cli
Show the details of flavor "Flavor_cli"
[root@controller ~(keystone_admin)]# openstack flavor show Flavor_cli
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| access_project_ids | |
| disk | 1 |
| id | 9556fa00-b9b2-4567-9463-09334db17601 |
| name | Flavor_cli |
| os-flavor-access:is_public | False |
| properties | |
| ram | 128 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+
# Flavor "Flavor_cli" is still not "Public". To make a flavor "Public", delete it and create a new one.
Delete flavor "Flavor_cli"
[root@controller ~(keystone_admin)]# openstack flavor delete Flavor_cli
Step 1: Create flavor "Flavor_cli" with the following settings:
VCPUs: the flavor's vCPU count, e.g. "1".
RAM (MB): the flavor's RAM size, e.g. "128".
Root Disk (GB): the flavor's root disk size, e.g. "1".
Keep the other settings at their defaults.
[root@controller ~(keystone_admin)]# openstack flavor create --vcpus 1 --ram 128 --disk 1 Flavor_cli
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 2a6f1df1-ecd9-4ab8-b14c-e9b8570c6e88 |
| name | Flavor_cli |
| os-flavor-access:is_public | True |
| properties | |
| ram | 128 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+
# A newly created flavor is "Public" by default
Key pair and server group management
Create key pair "KeyPair_cli"
[root@controller ~(keystone_admin)]# openstack keypair create KeyPair_cli
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAxDxBkuiyRUfEAjKPtz/fmIUYvmF3oxGXihVLu21PX6wjQCnL
o64m2ghXGuBUyI5be6UsZbOoIZo8K7iE99bsoyXnTx9392LSQ/WGuVaxgBR7uSsg
IT7U+bsqly3yRgsnwFOOl2Pj6i3h59hY/ZRvSr50Mm8HhHFNm8iJngx/mZVBuKFK
yzvBpV+8cIZLcDf5hjf0rm/n4TycGpD3pTpX/PRktygygLv2EgfGgxEdot72kb5z
kByrD/STzemHSGyyjR9zrXlrn7yfcJDVic+Vrvyr3bEQEFgK8dDtUN+O6R+LoBkJ
sm/id5pfZVIAyqlyeL24jlry/Q8J+sdQ5p4iGwIDAQABAoIBABN6S6PyVueLhQgW
zq8Itv/jjh4vfHmCIIGDNZ4n7m33nxQaUe0wNwkDNOolBCVYA/qU3YBGwdR8A6bv
TLtw6NIUzA3NeNHkTCyUrUeuNDYbUmCByFGkc+1Jx6Nz2w1axBpR8OBT+OZgoYCq
t8KLvjQ0DUKIRL2/pU1mLUqzwOKUgoPdIEOpKvjyL3OY7n5d973OPgvuB9LWleMd
YeCL7SGmMmPZOgVV7oVa5o+6/MMRX3I1nPjhcrdV0IzFkwQMIDcTQzAOxllQ17Lj
kLNvuZ9CvGDs/BCDZvoH4+VcATXELqLqfyLlFSZQyPZSA+4fU+OFFLvNEOaYyZW3
Kith4KECgYEA6qy5JNbSzDZ0llWeQZ8XPkbj2iVZ9aXVhk0WwbUaHPujPx7A10C8
ooqw/D+KzdLrZUTP19EDfxNG6GOU1AVd7gXcGJ0WvLrHD2j7TQXkGQ0cMOMFz+nh
r1dg7rd22RGvsSPF2FM1OZwJWxveXy4bKYAMrtpzicafj74ep84lx+sCgYEA1hFP
/9Qpanf2viO06dlz7BhUOj+Rm3X8NabwWk3gLJ4/ykCKYw8zKV5u6uctgk1AF2/C
IX4Rg3ryEmwEqXAxCPUpNs5ZscOAIPFgvGJfil4YOGeqiudl95Y7V19xqSOci00E
DJmBGiSmaiG+eTBwdDpV9kQvUrcY2fddnceEMpECgYEArt8ux+jdBBfAIRaD61pl
s56Xw8L5mjeNOZrQTmBpqRdKuopsIPq4llbLM+0VvfJiPwBb8PJrrJHs0NcD3Epz
iB8Nt7m3a8Oy+iS8vtSY+KHwU+2YMyqRZluye7By+6ZWSaXilCTNELTZs+68ciil
TPOCf/mBBzXfSPnfViQjuykCgYEAuhaEr8U0V5x/f1y06VCiSAwCNDyMjFMtc0py
yF3IUaEjnOMsKd7Nv9manFNoqUwUOgtp/AmGmgBnrQH/r3ea+Ml+EWmiaTilCn4q
dLkjirovXeEoTOXJK0iKv3J18O3HKQVDTtymcR6JF9vLo7grGa6YiaNObB5E2T4D
QKRvVDECgYB594lZLG5p3bU9bf7Xucyda6EHsWnkxwcMC06Mpv92lnuAxhm08knN
wo64D7MIpfkIUAah4taCpBkg1JTfgTa8otjZGjTCN2TLQG4zooepxG34bMyCvOyD
nL8DChOOw3BnoUY1K9JCZ46eawbUPrzCYF4veG4L1Q9eZULHyFddag==
-----END RSA PRIVATE KEY-----
Create server group "ServerGroup_cli" with the policy set to "affinity"
[root@controller ~(keystone_admin)]# openstack server group create --policy affinity ServerGroup_cli
+----------+--------------------------------------+
| Field | Value |
+----------+--------------------------------------+
| id | abf548a0-415a-4126-b3a4-9a6b37a0e714 |
| members | |
| name | ServerGroup_cli |
| policies | affinity |
+----------+--------------------------------------+
# Record the ID of server group "ServerGroup_cli"
Instance operations
Launching an instance
Create instance "Instance_cli_01" with the following settings:
Availability zone: nova.
Image: Img_cli.
Flavor: Flavor_cli.
Key pair: KeyPair_cli.
Server group: ServerGroup_cli.
Network: shared.
[root@controller ~(keystone_admin)]# openstack server create --availability-zone nova --image Img_cli --flavor Flavor_cli --network shared --key-name KeyPair_cli --hint group=abf548a0-415a-4126-b3a4-9a6b37a0e714 Instance_cli_01
+-------------------------------------+---------------------------------------------------+
| Field | Value |
+-------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | FUgdKB4pLaG4 |
| config_drive | |
| created | 2025-09-17T10:46:26Z |
| flavor | Flavor_cli (2a6f1df1-ecd9-4ab8-b14c-e9b8570c6e88) |
| hostId | |
| id | 1fbb2e05-41dc-4a79-9556-65a20bb45c6b |
| image | Img_cli (b8b4d0e8-4366-436c-9d5b-0eb6ad6ddeb0) |
| key_name | KeyPair_cli |
| name | Instance_cli_01 |
| progress | 0 |
| project_id | bbe6a457a15b48d792da334eb27a5d7b |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2025-09-17T10:46:26Z |
| user_id | af704f24dc304c09a051f19c9f4d4efe |
| volumes_attached | |
+-------------------------------------+---------------------------------------------------+
List the instances; once "Instance_cli_01" reaches status "ACTIVE", the instance was created successfully
[root@controller ~(keystone_admin)]# openstack server list
+--------------------------------------+-----------------+--------+-----------------------+---------+-----------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------------+--------+-----------------------+---------+-----------------+
| 1fbb2e05-41dc-4a79-9556-65a20bb45c6b | Instance_cli_01 | ACTIVE | shared=192.168.233.84 | Img_cli | Flavor_cli |
| 841b46e7-98d0-473e-a24b-9649773b220d | Instance_web_01 | ACTIVE | shared=192.168.233.67 | Img_web | Flavor_web_test |
+--------------------------------------+-----------------+--------+-----------------------+---------+-----------------+
Starting, stopping and rebooting an instance
Stop instance "Instance_cli_01" and check its status
[root@controller ~(keystone_admin)]# openstack server stop Instance_cli_01
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | SHUTOFF |
Start instance "Instance_cli_01" and check its status
[root@controller ~(keystone_admin)]# openstack server start Instance_cli_01
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | ACTIVE |
Soft reboot instance "Instance_cli_01"
[root@controller ~(keystone_admin)]# openstack server reboot Instance_cli_01
Hard reboot instance "Instance_cli_01"
[root@controller ~(keystone_admin)]# openstack server reboot --hard Instance_cli_01
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | HARD_REBOOT |
Locking and unlocking an instance
Lock instance "Instance_cli_01"
[root@controller ~(keystone_admin)]# openstack server lock Instance_cli_01
Unlock instance "Instance_cli_01"
[root@controller ~(keystone_admin)]# openstack server unlock Instance_cli_01
Pausing, suspending and resuming an instance
Pause instance "Instance_cli_01" and check its status
[root@controller ~(keystone_admin)]# openstack server pause Instance_cli_01
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | PAUSED |
Unpause instance "Instance_cli_01" and check its status
[root@controller ~(keystone_admin)]# openstack server unpause Instance_cli_01
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | ACTIVE |
Suspend instance "Instance_cli_01" and check its status
[root@controller ~(keystone_admin)]# openstack server suspend Instance_cli_01
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | SUSPENDED |
Resume instance "Instance_cli_01" and check its status
[root@controller ~(keystone_admin)]# openstack server resume Instance_cli_01
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | ACTIVE |
Shelving and unshelving an instance
Shelve instance "Instance_cli_01" and check its status
[root@controller ~(keystone_admin)]# openstack server shelve Instance_cli_01
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | SHELVED |
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | SHELVED_OFFLOADED |
Unshelve instance "Instance_cli_01" and check its status
[root@controller ~(keystone_admin)]# openstack server unshelve Instance_cli_01
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | SHELVED_OFFLOADED |
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep status
| status | ACTIVE |
Instance resize, snapshot and rebuild
Create snapshot "Instance_Snap_cli" of instance "Instance_cli_01"
[root@controller ~(keystone_admin)]# openstack server image create --name Instance_Snap_cli Instance_cli_01
+------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2025-09-17T10:57:16Z |
| file | /v2/images/4891d579-4b19-4532-bd31-c23a9401903d/file |
| id | 4891d579-4b19-4532-bd31-c23a9401903d |
| min_disk | 1 ......
List the images
[root@controller ~(keystone_admin)]# openstack image list
+--------------------------------------+-------------------+--------+
| ID | Name | Status |
+--------------------------------------+-------------------+--------+
| b8b4d0e8-4366-436c-9d5b-0eb6ad6ddeb0 | Img_cli | active |
| 3bbe74b3-62db-444b-a3ff-90f0477e811b | Img_web | active |
| 4891d579-4b19-4532-bd31-c23a9401903d | Instance_Snap_cli | active |
| 437821cd-35cf-47d6-b971-a33d70eb488b | Instance_Snap_web | active |
+--------------------------------------+-------------------+--------+
Show the flavor of instance "Instance_cli_01"
[root@controller ~(keystone_admin)]# openstack server show Instance_cli_01 | grep flavor
| flavor | Flavor_cli (2a6f1df1-ecd9-4ab8-b14c-e9b8570c6e88) |
Show the flavor details
[root@controller ~(keystone_admin)]# openstack flavor show Flavor_cli
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| access_project_ids | None |
| disk | 1 |
| id | 2a6f1df1-ecd9-4ab8-b14c-e9b8570c6e88 |
| name | Flavor_cli |
| os-flavor-access:is_public | True |
| properties | |
| ram | 128 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+
Create a new flavor "Flavor_cli_new" with RAM (MB) set to "156" and the vCPU and disk settings identical to "Flavor_cli"
[root@controller ~(keystone_admin)]# openstack flavor create --vcpus 1 --ram 156 --disk 1 Flavor_cli_new
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | d76ebab5-1119-4de3-b9a8-9d771ad3fdc1 |
| name | Flavor_cli_new |
| os-flavor-access:is_public | True |
| properties | |
| ram | 156 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+
Resize instance "Instance_cli_01"
[root@controller ~(keystone_admin)]# openstack server resize --flavor Flavor_cli_new Instance_cli_01
Check the status of instance "Instance_cli_01"
[root@controller ~(keystone_admin)]# openstack server list | grep Instance_cli_01
| 1fbb2e05-41dc-4a79-9556-65a20bb45c6b | Instance_cli_01 | ACTIVE | shared=192.168.233.84 | Img_cli | Flavor_cli |