
OpenStack Epoxy 2025.1 Installation Guide

  • OpenStack Epoxy 2025.1 Installation Guide
    • 1. Environment Preparation
      • 1.1 Configure environment variables
      • 1.2 Configure the network
      • 1.3 Set the hostnames
      • 1.4 Local hostname resolution (hosts)
      • 1.5 Regenerate machine-id
      • 1.6 Configure package sources
      • 1.7 Time server
      • 1.8 Install the client
      • 1.9 Install the database
      • 1.10 Install the message queue
      • 1.11 Install the caching service
    • 2. Keystone Deployment
      • 2.1 Create the database
      • 2.2 Install the Keystone packages
      • 2.3 Configuration file
      • 2.4 Sync the database (initialize the Keystone database)
      • 2.5 Initialize the Fernet keys
      • 2.6 Initialize the credential encryption keys
      • 2.7 Bootstrap the Identity service API
      • 2.8 Apache
      • 2.9 Create environment variables
      • 2.10 Create services and users
    • 3. Glance Deployment
      • 3.1 Create the database and user
      • 3.2 Obtain admin credentials
      • 3.3 Create the service entity
      • 3.4 Create the Image service API endpoints
      • 3.5 Install the service
      • 3.6 Configuration file
      • 3.7 Sync the database
      • 3.8 Restart the service
      • 3.9 Verify
    • 4. Placement Deployment
      • 4.1 Create the database and user
      • 4.2 Obtain admin credentials for admin-only CLI commands
      • 4.3 Create the service entity
      • 4.4 Install the service
      • 4.5 Configuration file
      • 4.6 Sync the database
      • 4.7 Restart
    • 5. Nova Deployment
      • 5.1 Controller node
        • 5.1.2 Create the databases and user
        • 5.1.3 Create the service entity
        • 5.1.4 Install the services
        • 5.1.5 Eventlet bug
        • 5.1.6 Configuration file
        • 5.1.7 Sync the databases
        • 5.1.8 nova-novncproxy configuration bug
        • 5.1.9 Restart the services
      • 5.2 Compute node
        • Install the services
        • Configuration file
        • Configuration file contents
        • Check whether the value is 0; if it is, configure it
        • Restart the services
        • Bug handling
        • Check the libvirt event mechanism configuration and permissions
        • Restart to take effect
      • Host discovery
    • 6. Neutron Deployment
      • Update the hosts file on all nodes
      • Controller node
        • Create the database
        • Create the service entity
        • Install the services
        • Configure the service files
          • neutron.conf
          • ml2
          • openvswitch_agent
            • Configuration file
            • Create the OVS bridge and bind the device
            • Enable kernel support for network bridging
            • DHCP configuration
          • metadata
          • nova.conf
        • Restart the network services
      • Compute node
        • Install the services
        • Configuration file
          • /etc/neutron/neutron.conf
          • /etc/neutron/plugins/ml2/openvswitch_agent.ini
        • Create the OVS bridge and bind the device
        • Enable kernel support for network bridging
        • Configure Neutron in nova.conf
        • Restart the Neutron services
    • 7. Dashboard Deployment
      • Install the service
      • Configuration file
      • Restart
    • 8. Usage Workflow
      • Network setup
        • 1. Create the external network
        • 2. Create the external network subnet
        • 3. Subnet details
        • 4. Create a router
        • 5. Create the internal network
        • 6. Create the internal network subnet
        • 7. Attach the internal interface to ext_router
      • Security rules
      • Create a flavor
      • Create an instance
      • Bind a floating IP
    • Other

OpenStack Epoxy 2025.1 Installation Guide

1. Environment Preparation

1.1 Configure environment variables

  • Controller node (run on the controller node; same below)
  • Create the variable file used to administer OpenStack
root@controller:~# cat ./admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
  • Load the admin variables automatically at login
root@controller:~# nano /etc/profile
...
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
root@controller:~# source /etc/profile
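After loading the file it is worth confirming that the variables actually reached the environment. A minimal sketch; the `check_openrc` helper is ours for illustration, not an OpenStack tool:

```shell
# Sanity check: verify the core OS_* variables are set after sourcing
# admin-openrc. check_openrc is a local helper, not part of OpenStack.
check_openrc() {
  for v in OS_AUTH_URL OS_USERNAME OS_PASSWORD OS_PROJECT_NAME; do
    eval "val=\${$v}"              # indirect lookup of the variable named in v
    if [ -z "$val" ]; then
      echo "missing: $v"
      return 1
    fi
  done
  echo "openrc OK"
}
```

Run `check_openrc` after `source admin-openrc`; a non-zero exit names the first variable that is missing.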

1.2 Configure the network

  • Controller node
vim /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: false
      addresses:
        - "10.0.0.10/24"
      routes:
        - to: default
          via: 10.0.0.2
      nameservers:
        addresses:
          - 223.5.5.5
    ens37:
      dhcp4: false
  • Compute node (run on the compute node; same below)
vim /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: false
      addresses:
        - "10.0.0.11/24"
      routes:
        - to: default
          via: 10.0.0.2
      nameservers:
        addresses:
          - 223.6.6.6
    ens37:
      dhcp4: false

Note: the compute node IP is set to 10.0.0.11/24 here; change it to match your own addressing plan.

  • Apply the configuration
root@compute:~# netplan apply

1.3 Set the hostnames

  • Controller node
hostnamectl set-hostname controller
  • Compute node
hostnamectl set-hostname compute

1.4 Local hostname resolution (hosts)

  • All nodes
nano /etc/hosts
10.0.0.10 controller
10.0.0.11 compute
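A quick way to confirm both entries are present is to grep the hosts file; this helper is our own, for illustration, and takes the file path as an argument so it can be pointed at any copy:

```shell
# Check that controller and compute both appear as whole words in the
# given hosts file (defaults to /etc/hosts). Illustrative helper.
check_hosts() {
  file="${1:-/etc/hosts}"
  for h in controller compute; do
    grep -qw "$h" "$file" || { echo "missing: $h"; return 1; }
  done
  echo "hosts OK"
}
```

For an end-to-end check, `getent hosts controller` exercises the actual resolver rather than just the file.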

1.5 Regenerate machine-id

If the nodes were cloned from the same VM image they will share a machine-id; regenerate it so every host has a unique one.

root@controller:~# cat /etc/machine-id
f9ebfd5adc0e4ac78709d954557e882f
root@controller:~# truncate -s 0 /etc/machine-id
root@controller:~# cat /etc/machine-id
root@controller:~#  systemd-machine-id-setup
Initializing machine ID from random generator.
root@controller:~# cat /etc/machine-id
8720c4fe8aba486da7d3414b8721f260
root@controller:~# 

1.6 Configure package sources

  • All nodes
    Switch to the Tsinghua (TUNA) mirror to speed up downloads.
mv  /etc/apt/sources.list.d/ubuntu.sources /etc/apt/sources.list.d/ubuntu.sources.bak
cp /etc/apt/sources.list /etc/apt/sources.list.bak
nano /etc/apt/sources.list
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ plucky main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ plucky main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ plucky-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ plucky-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ plucky-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ plucky-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ plucky-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ plucky-security main restricted universe multiverse
apt clean all ; apt update

1.7 Time server

⚠️
Before continuing, a few things to be aware of:

  1. If you run OpenStack in virtual machines, chrony is recommended over ntpd as the NTP service because it copes better with virtualized environments.
  2. Make sure the system time and time zone are configured correctly; this is very important for OpenStack's operation.
  3. With multiple nodes, have every node use the same NTP servers so their clocks stay in sync.
  4. If OpenStack runs behind a firewall, make sure NTP traffic is allowed through (usually UDP port 123).
  5. On cloud-provider VMs, check whether the provider offers a built-in time sync service and configure accordingly.
timedatectl set-ntp true
  • Enable the system's NTP (Network Time Protocol) synchronization so the clock is set automatically from the network, then set the time zone to Shanghai.
timedatectl set-timezone Asia/Shanghai
  • Install chrony, a lightweight, accurate NTP client/server suited to servers and virtual machines, and a better fit for modern systems than the traditional ntpd.
apt install chrony -y
  • Back up and open chrony's source list to add or change NTP servers.
cp /etc/chrony/sources.d/ubuntu-ntp-pools.sources /etc/chrony/sources.d/ubuntu-ntp-pools.sources.bak
nano /etc/chrony/sources.d/ubuntu-ntp-pools.sources
  • Point chrony at the Alibaba Cloud NTP server ntp6.aliyun.com, enable the iburst option (faster initial sync), and use at most 4 sources.
pool ntp6.aliyun.com iburst maxsources 4

Restart the chronyd service to apply the configuration.

systemctl restart chronyd
  • Check the time sources chrony is currently using and their status to verify that synchronization succeeded.
chronyc sources -v

1.8 Install the client

  • On the controller node
apt install python3-openstackclient -y

1.9 Install the database

apt install mariadb-server python3-pymysql
nano /etc/mysql/mariadb.conf.d/99-openstack.cnf
[mysqld]
bind-address = 10.0.0.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
service mysql restart
ss -tnl

1.10 Install the message queue

apt install rabbitmq-server
rabbitmqctl add_user openstack 000000
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
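The user and password created here reappear later in each service's transport_url setting (for example in nova.conf). The URL follows a fixed shape; `rabbit_url` below is just an illustrative helper that makes the pattern explicit:

```shell
# Compose an AMQP transport_url of the form used in the OpenStack
# service config files: rabbit://USER:PASSWORD@HOST:5672/
# (illustrative helper, not an OpenStack tool).
rabbit_url() {
  user="$1"; password="$2"; host="$3"
  printf 'rabbit://%s:%s@%s:5672/' "$user" "$password" "$host"
}

rabbit_url openstack 000000 controller
# rabbit://openstack:000000@controller:5672/
```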

1.11 Install the caching service

apt install -y memcached python3-memcache
nano  /etc/memcached.conf
-l 0.0.0.0
service memcached restart

Verify the listening ports:

ss -tnl
State                    Recv-Q                   Send-Q                                       Local Address:Port                                        Peer Address:Port                   
LISTEN                   0                        1024                                               0.0.0.0:11211                                            0.0.0.0:*                      
LISTEN                   0                        128                                              127.0.0.1:6010                                             0.0.0.0:*                      
LISTEN                   0                        4096                                         127.0.0.53%lo:53                                               0.0.0.0:*                      
LISTEN                   0                        128                                                0.0.0.0:25672                                            0.0.0.0:*                      
LISTEN                   0                        4096                                            127.0.0.54:53                                               0.0.0.0:*                      
LISTEN                   0                        4096                                               0.0.0.0:22                                               0.0.0.0:*                      
LISTEN                   0                        869                                              10.0.0.10:3306                                             0.0.0.0:*                      
LISTEN                   0                        128                                                      *:5672                                                   *:*                      
LISTEN                   0                        128                                                  [::1]:6010                                                [::]:*                      
LISTEN                   0                        4096                                                     *:4369                                                   *:*                      
LISTEN                   0                        1024                                                 [::1]:11211                                               [::]:*                      
LISTEN                   0                        4096                                                  [::]:22                                                  [::]:*   
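The listing above can be scanned by hand, but a small loop over the expected controller ports (3306 for MariaDB, 5672 for RabbitMQ, 11211 for memcached) catches a missing listener quickly. A sketch, assuming `ss` from iproute2 is available:

```shell
# Verify the controller's core services are listening. Reads `ss -tnl`
# once and searches for each expected port followed by whitespace.
check_ports() {
  listing="$(ss -tnl)"
  for p in 3306 5672 11211; do
    printf '%s\n' "$listing" | grep -q ":$p[[:space:]]" \
      || { echo "not listening: $p"; return 1; }
  done
  echo "ports OK"
}
```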

2. Keystone Deployment

⚠️ Note
Keystone is OpenStack's Identity Service, responsible for user authentication, the service catalog, and authorization.
Deployments distinguish the controller node and the compute node; this section involves the controller node only.

  • Controller node

2.1 Create the database

root@controller:~# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 31
Server version: 11.4.7-MariaDB-0ubuntu0.25.04.1 Ubuntu 25.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'ang';
Query OK, 0 rows affected (0.028 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'ang';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> 

2.2 Install the Keystone packages

apt install keystone -y

2.3 Configuration file

cp /etc/keystone/keystone.conf{,.bak}
grep -Ev "^$|^#" /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
vim /etc/keystone/keystone.conf
# ...
[database]
connection = mysql+pymysql://keystone:ang@controller/keystone
# Replace ang with the password you chose for the database.
# Comment out or remove any other connection options in the [database] section,
# e.g. delete: connection = sqlite:////var/lib/keystone/keystone.db

[token]
provider = fernet
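Every service configured in this guide uses the same SQLAlchemy URL shape in its connection line: mysql+pymysql://USER:PASSWORD@HOST/DBNAME. A tiny helper (ours, for illustration only) makes the pattern explicit:

```shell
# Build the SQLAlchemy connection URL used in the [database] sections:
# mysql+pymysql://USER:PASSWORD@HOST/DB (illustrative helper).
db_url() {
  user="$1"; password="$2"; host="$3"; db="$4"
  printf 'mysql+pymysql://%s:%s@%s/%s' "$user" "$password" "$host" "$db"
}

db_url keystone ang controller keystone
# mysql+pymysql://keystone:ang@controller/keystone
```

The same pattern yields the glance, placement, and nova connection lines used later.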

2.4 Sync the database (initialize the Keystone database)

su -s /bin/sh -c "keystone-manage db_sync" keystone
# This command produces no output.

2.5 Initialize the Fernet keys

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

This generates the Fernet token-encryption keys and makes the key files owned by the keystone user and group.
Fernet keys are used to securely issue and validate Keystone authentication tokens.

2.6 Initialize the credential encryption keys

keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

This generates the key used to encrypt user credentials (such as passwords), again owned by the keystone user and group.
The key protects sensitive credential data stored in Keystone.

2.7 Bootstrap the Identity service API

keystone-manage bootstrap \
  --bootstrap-password 000000 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
# Replace '000000' with a suitable password for the admin user.

2.8 Apache

vim /etc/apache2/apache2.conf
# Add the ServerName entry if it does not already exist.
ServerName controller

service apache2 restart

2.9 Create environment variables

vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

# Load the file manually for the current shell
source admin-openrc

# For global effect, copy it into /etc/profile.d/; the system sources scripts
# there at every login. Note that only files ending in .sh are sourced, hence
# the rename.
cp admin-openrc /etc/profile.d/admin-openrc.sh

2.10 Create services and users

openstack project create --domain default --description "Service Project" service

Output:

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 390f6922a43e4d6598d64444521acb3b |
| is_domain   | False                            |
| name        | service                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

Verify OpenStack:

openstack token issue
+------------+----------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                          |
+------------+----------------------------------------------------------------------------------------------------------------+
| expires    | 2025-08-02T13:14:50+0000                                                                                       |
| id         | gAAAAABojgE65dEJoptrdFt6mX4i9TzmHWzlRAYcNOOj929JBRfaM-                                                         |
|            | H7VQ4E0cJb70xuoH6bcBymMavm1uPWycLJhHN4Ri9N3fGFZV41hAR03IRryqWVxjZ855WQ4wXaoGP_mByNHboviytIY6lL3ppV6qOwr3EuJogB |
|            | ofIjS7DeRuHnVLKf6HQ                                                                                            |
| project_id | 9393e26310b643668953d2ea3fd3c323                                                                               |
| user_id    | 10a0673a87824cbe96b28a9c1da93612                                                                               |
+------------+----------------------------------------------------------------------------------------------------------------+

3. Glance Deployment

3.1 Create the database and user

mysql
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'ang';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'ang';

3.2 Obtain admin credentials

. admin-openrc

3.3 Create the service entity

  • Create the glance user:
root@controller:~# openstack user create --domain default --password glance glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | acd89a5b607642cfa0295265113d8bec |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
root@controller:~#

  • Add the admin role to the glance user and the service project:
openstack role add --project service --user glance admin

Note: this command produces no output.

  • Create the glance service entity
root@controller:~# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | bdaaeb5256b3457b82709095b18daa76 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
root@controller:~#

3.4 Create the Image service API endpoints

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

For example:

root@controller:~# openstack user create --domain default --password glance glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | None                             |
| domain_id           | default                          |
| email               | None                             |
| enabled             | True                             |
| id                  | 9b284507241e419e9cea2d963c58a3ae |
| name                | glance                           |
| description         | None                             |
| password_expires_at | None                             |
+---------------------+----------------------------------+
root@controller:~# openstack role add --project service --user glance admin
root@controller:~# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| id          | c77fb8a163b34ced86efeb4fcd727ddd |
| name        | glance                           |
| type        | image                            |
| enabled     | True                             |
| description | OpenStack Image                  |
+-------------+----------------------------------+
root@controller:~# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7f9adaf87805449783ba1da861321327 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c77fb8a163b34ced86efeb4fcd727ddd |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
root@controller:~# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 622c62b6d44e4986ae50e2d568b43642 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c77fb8a163b34ced86efeb4fcd727ddd |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
root@controller:~# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | bad502cb2ee9484fb49fd9a786c17663 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c77fb8a163b34ced86efeb4fcd727ddd |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
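Each service needs the same three endpoints (public, internal, admin) at the same URL, so the three commands above can be collapsed into a loop. `create_endpoints` is our own wrapper for illustration, not an openstack subcommand:

```shell
# Create the public, internal, and admin endpoints for a service type
# in one call. Wrapper for illustration; relies on the openstack CLI.
create_endpoints() {
  service_type="$1"; url="$2"
  for iface in public internal admin; do
    openstack endpoint create --region RegionOne "$service_type" "$iface" "$url"
  done
}

# Usage (equivalent to the three commands above):
# create_endpoints image http://controller:9292
```

The same wrapper works for the placement and compute endpoints created later.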

3.5 Install the service

apt install glance -y

3.6 Configuration file

  • Back up the file and strip comments
cp /etc/glance/glance-api.conf{,.bak}
grep -Ev "^\s*(#|$)" /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
  • Edit the configuration file
vim /etc/glance/glance-api.conf
[DEFAULT]
enabled_backends=fs:file

[database]
connection = mysql+pymysql://glance:ang@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

[glance_store]
default_backend = fs

[fs]
filesystem_store_datadir = /var/lib/glance/images/

3.7 Sync the database

su -s /bin/sh -c "glance-manage db_sync" glance
...
2025-08-05 10:11:56.495 15448 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
Database is synced successfully.

3.8 Restart the service

service glance-api restart

3.9 Verify

  • Check that the service is running
tail -f /var/log/glance/glance-api.log
...
2025-08-05 10:13:29.296 15480 INFO eventlet.wsgi.server [-] (15480) wsgi starting up on http://0.0.0.0:9292
  • As a verification example, create an image
root@controller:~# openstack image create --disk-format qcow2 --file ./cirros-0.4.0-x86_64-disk.img cirros
+------------------+------------------------------------------------------------------------+
| Field            | Value                                                                  |
+------------------+------------------------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                                       |
| container_format | bare                                                                   |
| created_at       | 2025-08-05T02:24:54Z                                                   |
| disk_format      | qcow2                                                                  |
| file             | /v2/images/a599fc71-b6de-4057-ab78-bf3abbd4f387/file                   |
| id               | a599fc71-b6de-4057-ab78-bf3abbd4f387                                   |
| min_disk         | 0                                                                      |
| min_ram          | 0                                                                      |
| name             | cirros                                                                 |
| owner            | 9393e26310b643668953d2ea3fd3c323                                       |
| properties       | os_hash_algo='sha512', os_hash_value='6513f21e44aa3da349f248188a44bc30 |
|                  | 4a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e2161b5b5186106570c17a9 |
|                  | e58b64dd39390617cd5a350f78', os_hidden='False',                        |
|                  | owner_specified.openstack.md5='',                                      |
|                  | owner_specified.openstack.object='images/cirros',                      |
|                  | owner_specified.openstack.sha256='', stores='fs'                       |
| protected        | False                                                                  |
| schema           | /v2/schemas/image                                                      |
| size             | 12716032                                                               |
| status           | active                                                                 |
| tags             |                                                                        |
| updated_at       | 2025-08-05T02:24:54Z                                                   |
| virtual_size     | 46137344                                                               |
| visibility       | shared                                                                 |
+------------------+------------------------------------------------------------------------+
root@controller:~# 
root@controller:~# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| a599fc71-b6de-4057-ab78-bf3abbd4f387 | cirros | active |
+--------------------------------------+--------+--------+
root@controller:~# ll /var/lib/glance/images/
total 12428
drwxr-x--- 2 glance glance     4096 Aug  5 10:24 ./
drwxr-x--- 7 glance glance     4096 Aug  5 10:13 ../
-rw-r----- 1 glance glance 12716032 Aug  5 10:24 a599fc71-b6de-4057-ab78-bf3abbd4f387

4. Placement Deployment

4.1 Create the database and user

  • Connect to the database server
mysql
  • Create the placement database and grant appropriate privileges
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'ang';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'ang';

4.2 Obtain admin credentials for access to admin-only CLI commands

. admin-openrc

4.3 Create the service entity

openstack user create --domain default --password placement placement
openstack role add --project service --user placement admin
# This command produces no output
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

For example:

+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | None                             |
| domain_id           | default                          |
| email               | None                             |
| enabled             | True                             |
| id                  | 38071a47f09c4e2a9614d2e2e3f38bb9 |
| name                | placement                        |
| description         | None                             |
| password_expires_at | None                             |
+---------------------+----------------------------------+
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| id          | 1aca8a8bbc3848e19a6294e89595e9ac |
| name        | placement                        |
| type        | placement                        |
| enabled     | True                             |
| description | Placement API                    |
+-------------+----------------------------------+
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 970655269a0f424b954aa6d203ad5d23 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 1aca8a8bbc3848e19a6294e89595e9ac |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 15f01a6402724755abbec59a14086d7d |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 1aca8a8bbc3848e19a6294e89595e9ac |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 57c4c615f2214159a94571c98e7617f7 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 1aca8a8bbc3848e19a6294e89595e9ac |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

4.4 Install the service

apt install placement-api -y

4.5 Configuration file

cp /etc/placement/placement.conf{,.bak}
grep -Ev "^\s*(#|$)" /etc/placement/placement.conf.bak > /etc/placement/placement.conf
vim /etc/placement/placement.conf
...
[placement_database]
connection = mysql+pymysql://placement:ang@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement

4.6 Sync the database

su -s /bin/sh -c "placement-manage db sync" placement

Note: ignore any deprecation messages in this output.

4.7 Restart

service apache2 restart

5. Nova Deployment

5.1 Controller node

5.1.2 Create the databases and user
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'ang';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'ang';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'ang';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'ang';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'ang';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'ang';
5.1.3 Create the service entity
openstack user create --domain default --password nova nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

For example:


+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | None                             |
| domain_id           | default                          |
| email               | None                             |
| enabled             | True                             |
| id                  | 9d694fd613054e73bf88057bf7a19b7b |
| name                | nova                             |
| description         | None                             |
| password_expires_at | None                             |
+---------------------+----------------------------------+
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| id          | cd7e94877f6c49c9ba5f78ab03fc6d50 |
| name        | nova                             |
| type        | compute                          |
| enabled     | True                             |
| description | OpenStack Compute                |
+-------------+----------------------------------+
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 8a76b75588034180ab6feb70584177f7 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | cd7e94877f6c49c9ba5f78ab03fc6d50 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d28e700596ae4ee4ba0f11fc373780a8 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | cd7e94877f6c49c9ba5f78ab03fc6d50 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3cd4ae47e4354c5888ee4b5b6bc18a0d |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | cd7e94877f6c49c9ba5f78ab03fc6d50 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
5.1.4 Install services
apt install nova-api nova-conductor nova-novncproxy nova-scheduler -y
5.1.5 Eventlet bug
  • Syncing the database fails with an eventlet.monkey_patch() error:
root@controller:~# su -s /bin/sh -c "nova-manage api_db sync" nova
3 RLock(s) were not greened, to fix this error make sure you run eventlet.monkey_patch() before importing any other modules.
  • Edit /usr/bin/nova-manage:
vi $(which nova-manage)
  • Add these two lines near the top:
#!/usr/bin/python3
# PBR Generated from 'console_scripts'
import eventlet           # add this line
eventlet.monkey_patch()   # add this line
import sys
from nova.cmd.manage import main
if __name__ == "__main__":
    sys.exit(main())
  • Apply the same fix to nova-scheduler, nova-conductor, and nova-novncproxy.
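Since the same two lines go into every affected console script, the edit can be done mechanically with sed. The sketch below demonstrates the edit on a temporary file that mimics the generated script layout (shebang, PBR comment, then imports); for the real files you would point f at $(which nova-scheduler) and so on, after backing each one up.

```shell
# Demonstrate inserting the eventlet lines after the 2-line PBR header.
# f is a temporary stand-in for a generated console script.
f=$(mktemp)
printf '%s\n' '#!/usr/bin/python3' "# PBR Generated from 'console_scripts'" 'import sys' > "$f"
# Append after line 2, in reverse order so the import lands first:
sed -i '2a eventlet.monkey_patch()' "$f"
sed -i '2a import eventlet' "$f"
cat "$f"
```

The result places `import eventlet` and `eventlet.monkey_patch()` on lines 3 and 4, before any other import.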
5.1.6 Configuration file
cp /etc/nova/nova.conf{,.bak}
grep -Ev "^\s*(#|$)" /etc/nova/nova.conf.bak > /etc/nova/nova.conf
vim /etc/nova/nova.conf

[DEFAULT]
transport_url = rabbit://openstack:000000@controller:5672/
my_ip = 10.0.0.10

[api_database]
connection = mysql+pymysql://nova:ang@controller/nova_api

[database]
connection = mysql+pymysql://nova:ang@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[service_user]
send_service_user_token = true
auth_url = http://controller:5000/v3
auth_strategy = keystone
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
5.1.7 Sync the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
--transport-url not provided in the command line, using the value [DEFAULT]/transport_url from the configuration file
--database_connection not provided in the command line, using the value [database]/connection from the configuration file
af7f7fd4-7739-479a-aa40-ee8cd82903a2
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
|  Name |                 UUID                 |              Transport URL               |               Database Connection               | Disabled |
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |                  none:/                  | mysql+pymysql://nova:****@controller/nova_cell0 |  False   |
| cell1 | af7f7fd4-7739-479a-aa40-ee8cd82903a2 | rabbit://openstack:****@controller:5672/ |    mysql+pymysql://nova:****@controller/nova    |  False   |
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
5.1.8 nova-novncproxy unit file bug
nano /usr/lib/systemd/system/nova-novncproxy.service
Change:
ExecStart=/etc/init.d/nova-novncproxy systemd-start
to:
ExecStart=/usr/bin/nova-novncproxy --config-file=/etc/nova/nova.conf
5.1.9 Restart services
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

Combine these four restart commands into a single bash script:

nano nova.sh
bash nova.sh
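The script might contain the following (a minimal sketch of the four restarts above; the WARN fallback is a convenience so one failed unit does not abort the rest, not something the guide requires):

```shell
#!/usr/bin/env bash
# nova.sh -- restart the controller-side Nova services in order.
services="nova-api nova-scheduler nova-conductor nova-novncproxy"
for svc in $services; do
    echo "restarting $svc"
    service "$svc" restart || echo "WARN: $svc failed to restart"
done
```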
tail -f /var/log/nova/nova-
...
# Inspect the Nova service logs
root@controller:~# ls -l  /var/log/nova/nova-*
-rw-r----- 1 nova nova 64135271 Aug  5 16:21 /var/log/nova/nova-api.log
-rw-r----- 1 nova nova   960569 Aug  5 16:21 /var/log/nova/nova-conductor.log
-rw-rw-r-- 1 nova nova     4807 Aug  5 16:10 /var/log/nova/nova-manage.log
-rw-r----- 1 nova nova     1151 Aug  5 16:21 /var/log/nova/nova-novncproxy.log
-rw-r----- 1 nova nova  2222833 Aug  5 16:21 /var/log/nova/nova-scheduler.log
root@controller:~# tail -f /var/log/nova/nova-api.log
2025-08-05 16:21:41.521 27592 INFO nova.service [-] metadata listening on 0.0.0.0:8775
2025-08-05 16:21:41.521 27592 INFO oslo_service.backend.eventlet.service [-] Starting 1 workers
2025-08-05 16:21:41.772 27592 WARNING oslo_config.cfg [-] Deprecated: Option "api_servers" from group "glance" is deprecated for removal (
Support for image service configuration via standard keystoneauth1 Adapter
options was added in the 17.0.0 Queens release. The api_servers option was
retained temporarily to allow consumers time to cut over to a real load
balancing solution.
).  Its value may be silently ignored in the future.
2025-08-05 16:21:42.781 27649 INFO nova.osapi_compute.wsgi.server [None req-86714d1e-2646-48c3-b0e9-1299be3b4d31 - - - - - -] (27649) wsgi starting up on http://0.0.0.0:8774
2025-08-05 16:21:42.909 27652 INFO nova.metadata.wsgi.server [None req-e386811e-4a6e-42dd-bb3b-0414869b57f1 - - - - - -] (27652) wsgi starting up on http://0.0.0.0:8775
^C
root@controller:~# tail -f /var/log/nova/nova-conductor.log
testing purposes only and should not be used in deployments. This option and
its middleware, NoAuthMiddleware[V2_18], will be removed in a future release.
).  Its value may be silently ignored in the future.
2025-08-05 16:21:46.857 27609 WARNING oslo_config.cfg [None req-0ee98c56-80b7-4526-bed8-7c0183ca4982 - - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal (
Support for image service configuration via standard keystoneauth1 Adapter
options was added in the 17.0.0 Queens release. The api_servers option was
retained temporarily to allow consumers time to cut over to a real load
balancing solution.
).  Its value may be silently ignored in the future.
2025-08-05 16:21:46.881 27609 INFO nova.service [-] Starting conductor node (version 31.0.0)
^C
root@controller:~# tail -f /var/log/nova/nova-manage.log
2025-08-05 16:10:03.387 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Will assume non-transactional DDL.
2025-08-05 16:10:03.406 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Running upgrade  -> 8f2f1571d55b, Initial version
2025-08-05 16:10:05.522 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Running upgrade 8f2f1571d55b -> 16f1fbcab42b, Resolve shadow table diffs
2025-08-05 16:10:05.549 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Running upgrade 16f1fbcab42b -> ccb0fa1a2252, Add encryption fields to BlockDeviceMapping
2025-08-05 16:10:05.614 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Running upgrade ccb0fa1a2252 -> 960aac0e09ea, de-duplicate_indexes_in_instances__console_auth_tokens
2025-08-05 16:10:05.635 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Running upgrade 960aac0e09ea -> 1b91788ec3a6, Drop legacy migrate_version table
2025-08-05 16:10:05.640 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Running upgrade 1b91788ec3a6 -> 1acf2c98e646, Add compute_id to instance
2025-08-05 16:10:05.697 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Running upgrade 1acf2c98e646 -> 13863f4e1612, create_share_mapping_table
2025-08-05 16:10:05.729 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
2025-08-05 16:10:05.774 25803 INFO alembic.runtime.migration [None req-7c00ae54-eda9-4413-be7c-84fc6284f6dc - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens
^C
root@controller:~# tail -f /var/log/nova/nova-novncproxy.log
2025-08-05 15:51:11.986 19823 INFO nova.console.websocketproxy [-]   - Listen on 0.0.0.0:6080
2025-08-05 15:51:11.986 19823 INFO nova.console.websocketproxy [-]   - Web server (no directory listings). Web root: /usr/share/novnc
2025-08-05 15:51:11.986 19823 INFO nova.console.websocketproxy [-]   - No SSL/TLS support (no cert file)
2025-08-05 15:51:11.986 19823 INFO nova.console.websocketproxy [-]   - proxying from 0.0.0.0:6080 to None:None
2025-08-05 16:21:31.184 19823 INFO nova.console.websocketproxy [-] In exit
2025-08-05 16:21:43.867 27636 INFO nova.console.websocketproxy [-] WebSocket server settings:
2025-08-05 16:21:43.867 27636 INFO nova.console.websocketproxy [-]   - Listen on 0.0.0.0:6080
2025-08-05 16:21:43.867 27636 INFO nova.console.websocketproxy [-]   - Web server (no directory listings). Web root: /usr/share/novnc
2025-08-05 16:21:43.867 27636 INFO nova.console.websocketproxy [-]   - No SSL/TLS support (no cert file)
2025-08-05 16:21:43.868 27636 INFO nova.console.websocketproxy [-]   - proxying from 0.0.0.0:6080 to None:None
^C
root@controller:~# tail -f /var/log/nova/nova-scheduler.log
testing purposes only and should not be used in deployments. This option and
its middleware, NoAuthMiddleware[V2_18], will be removed in a future release.
).  Its value may be silently ignored in the future.
2025-08-05 16:21:44.404 27599 WARNING oslo_config.cfg [None req-b1fd4476-fca9-4f36-a61e-bd329c517a9e - - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal (
Support for image service configuration via standard keystoneauth1 Adapter
options was added in the 17.0.0 Queens release. The api_servers option was
retained temporarily to allow consumers time to cut over to a real load
balancing solution.
).  Its value may be silently ignored in the future.
2025-08-05 16:21:44.429 27599 INFO nova.service [-] Starting scheduler node (version 31.0.0)
^C

5.2 Compute node operations

Install services
apt install nova-compute -y
Configuration file
cp /etc/nova/nova.conf{,.bak}
grep -Ev "^\s*(#|$)" /etc/nova/nova.conf.bak > /etc/nova/nova.conf
vim /etc/nova/nova.conf
Configuration file contents

⚠️ Important
my_ip = 10.0.0.11

[DEFAULT]
transport_url = rabbit://openstack:000000@controller
my_ip = 10.0.0.11

[api_database]
# Delete or comment out: connection = sqlite:////var/lib/nova/nova_api.sqlite

[database]
# Delete or comment out: connection = sqlite:////var/lib/nova/nova.sqlite

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[service_user]
send_service_user_token = true
auth_url = http://controller:5000/v3
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = nova

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://10.0.0.10:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
Check whether the CPU supports hardware virtualization; if the count is 0, configure QEMU emulation
egrep -c '(vmx|svm)' /proc/cpuinfo

Edit the configuration file:

vim /etc/nova/nova-compute.conf
[libvirt]
virt_type = qemu
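The decision can be scripted as a quick sketch of the standard check (the official install guide greps for the vmx/svm CPU flags); it simply prints the virt_type value to put into nova-compute.conf:

```shell
# Count hardware-virtualization CPU flags; 0 means no VT-x/AMD-V,
# so nova-compute must use plain QEMU emulation instead of KVM.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null)
count=${count:-0}
if [ "$count" -eq 0 ]; then
    echo "virt_type = qemu"
else
    echo "virt_type = kvm"
fi
```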
Restart the service
service nova-compute restart
Bug handling
  • Apply this change on the compute node only
cp /usr/lib/python3/dist-packages/nova/network/neutron.py{,.bak}
vim /usr/lib/python3/dist-packages/nova/network/neutron.py
174 class ClientWrapper:
175     """A Neutron client wrapper class.
176
177     Wraps the callable methods, catches Unauthorized, Forbidden from Neutron and
178     convert it to a 401,403 for Nova clients.
179     """
180
181     def __init__(self, base_client, admin):
182         self.base_client = base_client
183         self.admin = admin
184
185     def __getattr__(self, name):
186         base_attr = getattr(self.base_client, name)
187         # Each callable base client attr is wrapped so that we can translate
188         # the Unauthorized exception based on if we have an admin client or
189         # not.
190         if callable(base_attr):
191             return self.proxy(base_attr)
192
193         return base_attr
Check libvirt event mechanism configuration and permissions

libvirt's event listening relies on a Unix socket or TCP port; insufficient permissions or misconfiguration will prevent nova-compute from communicating with libvirt.

  • Check the libvirt socket permissions:
ls -l /var/run/libvirt/libvirt-sock

Normal permissions should be srwxrwxrwx or include libvirt group access (the nova user must belong to the libvirt group).

  • If permissions are insufficient, add nova to the libvirt group:
usermod -aG libvirt nova
Restart to apply
service nova-compute restart

Host discovery

openstack compute service list --service nova-compute
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

6. Neutron deployment

Modify the hosts file on all nodes

vim /etc/hosts
Comment out the following two lines:
# 127.0.0.1 localhost
# 127.0.1.1 ubuntu

Controller node operations

Create the database
mysql
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'ang';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'ang';
Create the service entities
openstack user create --domain default --password neutron neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "Openstack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Install services
apt install neutron-server neutron-plugin-ml2 neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent -y
Configure service files
neutron.conf
cp /etc/neutron/neutron.conf{,.bak}
grep -Ev "^#|^$" /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
vim /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
# Delete or comment out the original connection line, then add:
connection = mysql+pymysql://neutron:ang@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
ml2
cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev "^#|^$" /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000
openvswitch_agent
Configuration file
cp /etc/neutron/plugins/ml2/openvswitch_agent.ini{,.bak}
grep -Ev "^#|^$" /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak > /etc/neutron/plugins/ml2/openvswitch_agent.ini
vim /etc/neutron/plugins/ml2/openvswitch_agent.ini

[agent]
tunnel_types = vxlan
l2_population = true

[ovs]
bridge_mappings = provider:br_ens37
local_ip = 10.0.0.10

[securitygroup]
enable_security_group = true
firewall_driver = openvswitch
Create the OVS bridge and bind the device
  • Confirm the openvswitch-switch service is running
    root@controller:~# systemctl status openvswitch-switch
  • Confirm the /var/run/openvswitch directory exists
root@controller:~# ls -l /var/run/openvswitch/
total 8
srwxr-x--- 1 root root 0 Aug 24 20:34 br_ens19.mgmt
srwxr-x--- 1 root root 0 Aug 24 20:34 br_ens19.snoop
  • Configure the bridge and port
ovs-vsctl add-br br_ens37
ovs-vsctl add-port br_ens37 ens37
  • Verify the result
root@controller:~# ovs-vsctl show
c637de9c-97e6-4241-bc04-e7e1adc72731
    Bridge br_ens19
        Port ens19
            Interface ens19
        Port br_ens19
            Interface br_ens19
                type: internal
    ovs_version: "3.5.0"
root@controller:~#
root@controller:~# ip addr show ens19
root@controller:~# ip addr show br_ens19
Enable kernel bridge networking support
  • Make sure the br_netfilter module is loaded
root@controller:~# lsmod | grep br_netfilter
  • If the module is absent, load it
root@controller:~# modprobe br_netfilter
  • Make it permanent
root@controller:~# nano /etc/modules-load.d/br_netfilter.conf   (create the file if it does not exist)
# Add:
br_netfilter
  • Enable kernel bridge filtering
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl -p
DHCP configuration
cp /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev "^#|^$" /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
vim /etc/neutron/dhcp_agent.ini

[DEFAULT]
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
metadata
cp /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev "^#|^$" /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
vim /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = ang
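For context, this shared secret is what lets Nova trust proxied metadata requests: neutron-metadata-agent signs each request's instance UUID with HMAC-SHA256 using metadata_proxy_shared_secret and sends it as the X-Instance-ID-Signature header, and Nova's metadata API recomputes the signature to authenticate the request. A sketch of the same computation with openssl (the UUID below is made up):

```shell
secret="ang"                                        # metadata_proxy_shared_secret
instance_id="11111111-2222-3333-4444-555555555555"  # hypothetical instance UUID
# HMAC-SHA256 of the instance UUID, keyed with the shared secret:
sig=$(printf '%s' "$instance_id" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "X-Instance-ID-Signature: $sig"
```

If the secret in metadata_agent.ini and the one in nova.conf ([neutron] metadata_proxy_shared_secret) differ, the signatures will not match and metadata requests fail.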
nova.conf
  1. Configure Nova
vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = ang
  2. Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  3. Restart to apply
service nova-api restart
Restart the network services
service neutron-server restart
service neutron-openvswitch-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart

The commands above can be collected into neutron.sh:

root@controller:~# ls
admin-openrc  cirros-0.4.0-x86_64-disk.img  nova.sh
vim neutron.sh
service neutron-server restart
service neutron-openvswitch-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
bash neutron.sh
tail -f /var/log/neutron/neutron-*

Compute node operations


Install services

apt install neutron-openvswitch-agent -y

Configuration files
/etc/neutron/neutron.conf

cp /etc/neutron/neutron.conf{,.bak}
grep -Ev "^#|^$" /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller

[database]
# Delete or comment out the default connection line

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

/etc/neutron/plugins/ml2/openvswitch_agent.ini

Note: INI files are format-sensitive; keys must start at the left margin and section headers must be spelled correctly.
cp /etc/neutron/plugins/ml2/openvswitch_agent.ini{,.bak}
grep -Ev "^#|^$" /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak > /etc/neutron/plugins/ml2/openvswitch_agent.ini

vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = provider:br_ens37
local_ip = 10.0.0.11

[agent]
tunnel_types = vxlan
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = openvswitch

Create the OVS bridge and bind the device
# If ens37 is DOWN, bring it up first:
ip link set ens37 up
# Make sure openvswitch-switch is active; otherwise:
systemctl start openvswitch-switch
ovs-vsctl add-br br_ens37
ovs-vsctl add-port br_ens37 ens37
Enable kernel bridge networking support
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl -p
root@compute1:~# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Configure Neutron in nova.conf
  1. Configure the [neutron] section of nova.conf
vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Note: unlike the controller, the compute node does not need the metadata proxy settings here.
  2. Restart to apply
service nova-compute restart

Restart the Neutron service

service neutron-openvswitch-agent restart

7. Dashboard deployment

  • Controller node operation

Install the service

apt install openstack-dashboard -y

Configuration file

vim /etc/openstack-dashboard/local_settings.py
# Note: adjust "controller" as appropriate
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': 'controller:11211',
    }
}

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

OPENSTACK_HOST = "controller"

# OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
# Change to:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

TIME_ZONE = "Asia/Shanghai"
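After editing, a quick syntax check catches a stray quote or brace before Apache does. On the controller you would point py_compile at /etc/openstack-dashboard/local_settings.py; the sketch below compiles a small stand-in file so it can run anywhere:

```shell
# python3 -m py_compile parses a settings file without executing Django.
f=$(mktemp --suffix=.py)
cat > "$f" <<'EOF'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
EOF
python3 -m py_compile "$f" && echo "settings OK"
```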

Restart

root@controller:~# systemctl reload apache2
root@controller:~# systemctl status apache2

8. Usage workflow

Network setup

1. Create the external network

Admin → Network → Create Network → Network
Name: ext_net
Project: admin
Provider network type: flat
# Note: "provider" comes from bridge_mappings = provider:br_ens37 in the [ovs] section of /etc/neutron/plugins/ml2/openvswitch_agent.ini
Physical network: provider
Enable admin state: √
Shared: □
External network: √
Create subnet: √


2. Create the external network subnet

Admin → Network → Create Network → Subnet
Subnet name: ext_sub
Network address: 192.168.8.0/24
Gateway: 192.168.8.1

3. Subnet details

Admin → Network → Create Network → Subnet Details
DNS servers: 8.8.8.8

4. Create a router

Project → Network → Routers → Create Router
Router name: ext_router
External network: ext_net

5. Create the internal network

Network
Name: int_net
Project: admin
Provider network type: VXLAN
Segmentation ID: 2
Enable admin state: √
Shared: √
External network: □
Create subnet: √
Availability zone hints: nova
MTU: ?

6. Create the internal network subnet

Subnet
Subnet name: int_sub
Network address: 166.66.66.0/24
IP version: IPv4
Gateway IP: □
Disable gateway: ☑

7. Attach an internal interface to ext_router

Project → Routers → ext_router → Interfaces → Add Interface
Add Interface: select the int_net subnet
After refreshing the page, the interface status shows "Active"

Security rules

Project → Network → Security Groups → Manage Rules
Add ingress allow rules for all TCP, UDP, and ICMP traffic.

Create a flavor

Admin → Flavors → Create Flavor
Name: 1V_512MB_1
VCPUs: 1
RAM (MB): 512
Root Disk (GB): 1

Launch an instance

Project → Instances → Launch Instance
Network: int_net

Associate a floating IP

Select the instance → Actions → Associate Floating IP → "+" next to IP Address → Allocate IP
The instance is then reachable over SSH via its floating IP

Miscellaneous

Edit /etc/nova/nova.conf (on all Nova service nodes):

Make sure the [scheduler] section contains:
discover_hosts_in_cells_interval = 300 # auto-discover every 5 minutes; a negative value disables it

nova-manage cell_v2 discover_hosts --verbose
This scans all registered hosts and maps them into the default cell (cell1). After running it, repeat the list_cell_hosts check from step 1 to verify.
