iaas-openstack

Published 2024-04-15

OpenStack Cloud Infrastructure Platform Software (Private Cloud Platform)

User Manual

V1.0

December 2023

Table of Contents

[Introduction](#简介)

[1 Basic Environment Configuration](#基本环境配置)

[1.1 CentOS 7 Installation Notes](#安装centos7说明)

[1.2 Configuring the Network and Hostname](#配置网络主机名)

[1.3 Configuring the yum Repositories](#配置yum源)

[1.4 Editing Environment Variables](#编辑环境变量)

[1.5 Installing Services via Script](#特此说明以下脚本执行后对应的具体步骤无需执行已集成到脚本中执行脚本即可执行过程中切记按照节点先后顺序)

[1.6 Installing the OpenStack Packages](#安装openstack包)

[1.7 Configuring Name Resolution](#配置域名解析)

[1.8 Installing the chrony Service](#安装chrony服务)

[1.9 Installing the Database Services via Script](#通过脚本安装数据库服务)

[1.10 Installing the MySQL Database Service](#安装mysql数据库服务)

[1.11 Installing the RabbitMQ Service](#安装rabbitmq服务)

[1.12 Installing the memcached Service](#安装memcahce服务)

[1.13 Installing the etcd Service](#安装etcd服务)

[2 Installing the Keystone Identity Service](#安装keystone认证服务)

[2.1 Installing keystone via Script](#通过脚本安装keystone服务)

[2.2 Installing the keystone Packages](#安装keystone服务软件包)

[2.3 Creating the Keystone Database](#创建keystone数据库)

[2.4 Configuring the Database Connection](#配置数据库连接)

[2.5 Populating the keystone Database](#为keystone服务创建数据库表)

[2.6 Creating the Token](#创建令牌)

[2.7 Creating Signing Keys and Certificates](#创建签名密钥和证书)

[2.8 Defining Users, Tenants, and Roles](#定义用户租户和角色)

[2.9 Creating demo-openrc.sh](#创建demo-openrc.sh)

[2.10 Creating admin-openrc.sh](#创建admin-openrc.sh)

[3 Installing the Glance Image Service](#安装glance镜像服务)

[3.1 Installing glance via Script](#通过脚本安装glance服务)

[3.2 Installing the Glance Packages](#安装glance镜像服务软件包)

[3.3 Creating the Glance Database](#创建glance数据库)

[3.4 Configuring the Database Connection](#配置数据库连接-1)

[3.5 Populating the Image Service Database](#为镜像服务创建数据库表)

[3.6 Creating the User](#创建用户)

[3.7 Configuring the Image Service](#配置镜像服务)

[3.8 Creating the Service and API Endpoints](#创建endpoint和api端点)

[3.9 Starting the Services](#启动服务)

[3.10 Uploading an Image](#上传镜像)

[4 Installing the Nova Compute Service](#安装nova计算服务)

[4.1 Installing nova via Script](#通过脚本安装nova服务)

[4.2 Installing the Nova Compute Packages](#安装nova-计算服务软件包)

[4.3 Creating the Nova Database](#创建nova数据库)

[4.4 Populating the Compute Service Database](#为计算服务创建数据库表)

[4.5 Creating the User](#创建用户-1)

[4.6 Configuring the Compute Service](#配置计算服务)

[4.7 Creating the Service and API Endpoints](#创建endpoint和api端点-1)

[4.8 Adding Configuration](#添加配置)

[4.9 Starting the Services](#启动服务-1)

[4.10 Verifying That the Nova Database Was Created](#验证nova数据库是否创建成功)

[4.11 Installing the Nova Compute Packages](#安装nova计算服务软件包)

[4.12 Configuring the Nova Service](#配置nova服务)

[4.13 Checking Processor Support for Hardware Acceleration](#检查系统处理器是否支持虚拟机的硬件加速)

[4.14 Starting the Services](#启动)

[4.15 Adding the Compute Node](#添加计算节点)

[5 Installing the Neutron Networking Service](#安装neutron网络服务)

[5.1 Installing neutron via Script](#通过脚本安装neutron服务)

[5.2 Creating the Neutron Database](#创建neutron数据库)

[5.3 Creating the User](#创建用户-2)

[5.4 Creating the Service and API Endpoints](#创建endpoint和api端点-2)

[5.5 Installing the neutron Packages](#安装neutron网络服务软件包)

[5.6 Configuring the Neutron Service](#配置neutron服务)

[5.7 Creating the Database Tables](#创建数据库)

[5.8 Starting Services and Creating the Bridge](#启动服务和创建网桥)

[5.9 Installing Packages](#安装软件包)

[5.10 Configuring the Neutron Service](#配置neutron服务-1)

[5.11 Starting Services and Creating the Bridge](#启动服务进而创建网桥)

[6 Installing the Dashboard Service](#安装dashboard服务)

[6.1 Installing dashboard via Script](#通过脚本安装dashboard服务)

[6.2 Installing the Dashboard Packages](#安装dashboard服务软件包)

[6.3 Configuration](#配置)

[6.4 Starting the Services](#启动服务-2)

[6.5 Access](#访问)

[6.6 Creating an Instance](#创建云主机)

[7 Installing the Cinder Block Storage Service](#安装cinder块存储服务)

[7.1 Installing Cinder via Script](#通过脚本安装cinder服务)

[7.2 Installing the Cinder Packages](#安装cinder块存储服务软件包)

[7.3 Creating the Database](#创建数据库-1)

[7.4 Creating the User](#创建用户-3)

[7.5 Creating the Service and API Endpoints](#创建endpoint和api端点-3)

[7.6 Configuring the Cinder Service](#配置cinder服务)

[7.7 Creating the Database Tables](#创建数据库-2)

[7.8 Starting the Services](#启动服务-3)

[7.9 Installing the Block Storage Packages](#安装块存储软件)

[7.10 Creating the LVM Physical and Logical Volumes](#创建lvm物理和逻辑卷)

[7.11 Editing the Cinder Configuration File](#修改cinder配置文件)

[7.12 Restarting the Services](#重启服务)

[7.13 Verification](#验证)

[8 Installing the Swift Object Storage Service](#安装swift对象存储服务)

[8.1 Installing Swift via Script](#通过脚本安装swift服务)

[8.2 Installing the Swift Packages](#安装swift对象存储服务软件包)

[8.2 Creating the User](#创建用户-4)

[8.3 Creating the Service and API Endpoints](#创建endpoint和api端点-4)

[8.4 Editing /etc/swift/proxy-server.conf](#编辑etcswiftproxy-server.conf)

[8.5 Creating the Account, Container, and Object Rings](#创建账号容器对象)

[8.6 Editing /etc/swift/swift.conf](#编辑etcswiftswift.conf文件)

[8.7 Starting Services and Setting Permissions](#启动服务和赋予权限)

[8.8 Installing Packages](#安装软件包-1)

[8.9 Configuring rsync](#配置rsync)

[8.10 Configuring the Account, Container, and Object Rings](#配置账号容器和对象)

[8.11 Editing the Swift Configuration File](#修改swift配置文件)

[8.12 Restarting Services and Setting Permissions](#重启服务和赋予权限)

[9 Installing the Heat Orchestration Service](#安装heat编配服务)

[9.1 Installing heat via Script](#通过脚本安装heat服务)

[9.2 Installing the heat Packages](#安装heat编配服务软件包)

[9.3 Creating the Database](#创建数据库-3)

[9.4 Creating the User](#创建用户-5)

[9.5 Creating the Service and API Endpoints](#创建endpoint和api端点-5)

[9.6 Configuring the Heat Service](#配置heat服务)

[9.7 Creating the Database Tables](#创建数据库-4)

[9.8 Starting the Services](#启动服务-4)

[10 Installing the Zun Service](#安装zun服务)

[10.1 Installing Zun via Script](#通过脚本安装zun服务)

[10.2 Installing the zun Packages](#安装zun服务软件包)

[10.3 Creating the Database](#创建数据库-5)

[10.4 Creating the User](#创建用户-6)

[10.5 Creating the Service and API Endpoints](#创建endpoint和api端点-6)

[10.6 Configuring the zun Service](#配置zun服务)

[10.7 Creating the Database Tables](#创建数据库-6)

[10.8 Starting the Services](#启动服务-5)

[10.9 Installing Packages](#安装软件包-2)

[10.10 Configuring the Service](#配置服务)

[10.11 Tuning Kernel Parameters](#修改内核参数)

[10.12 Starting the Services](#启动服务-6)

[10.13 Uploading an Image](#上传镜像-1)

[10.14 Launching a Container](#启动容器)

[11 Installing the Ceilometer Monitoring Service](#安装ceilometer监控服务)

[11.1 Installing Ceilometer via Script](#通过脚本安装ceilometer服务)

[11.2 Installing the Ceilometer Packages](#安装ceilometer监控服务软件包)

[11.3 Creating the Database](#创建数据库-7)

[11.4 Creating the User](#创建用户-7)

[11.5 Creating the Service and API Endpoints](#创建endpoint和api端点-7)

[11.6 Configuring Ceilometer](#配置ceilometer)

[11.7 Creating the Listening Endpoint](#创建监听端点)

[11.8 Creating the Database Tables](#创建数据库-8)

[11.9 Starting the Services](#启动服务-7)

[11.10 Monitoring Components](#监控组件)

[11.11 Adding Variable Parameters](#添加变量参数)

[11.12 Installing Packages](#安装软件包-3)

[11.13 Configuring Ceilometer](#配置ceilometer-1)

[11.14 Starting the Services](#启动服务-8)

[12 Installing the Aodh Monitoring Service](#安装aodh监控服务)

[12.1 Installing Aodh via Script](#通过脚本安装aodh服务)

[12.2 Creating the Database](#创建数据库-9)

[12.3 Creating the keystone User](#创建keystone用户)

[12.4 Creating the Service and API Endpoints](#创建endpoint和api)

[12.5 Installing Packages](#安装软件包-4)

[12.6 Configuring aodh](#配置aodh)

[12.7 Creating the Listening Endpoint](#创建监听端点-1)

[12.8 Synchronizing the Database](#同步数据库)

[12.9 Starting the Services](#启动服务-9)

[13 Adding Controller Node Resources to the Platform](#添加控制节点资源到云平台)

[13.1 Editing openrc.sh](#修改openrc.sh)

[13.2 Running iaas-install-nova-compute.sh](#运行iaas-install-nova-compute.sh)

Introduction

The IaaS-OpenStack-x86-64_v1.0.iso image contains the software packages, dependencies, and installation scripts for building an OpenStack Train private cloud platform. It also provides CentOS 6.5, CentOS 7.2, and CentOS 7.5 qcow2 cloud images, covering platform deployment, day-to-day use of the platform, and operation of its components.

IaaS-OpenStack-x86-64_v1.0.iso contains the following:


No. Package Details

1 iaas-repo Provides the installation scripts, which can be used to deploy the OpenStack private cloud platform quickly

                   From the iaas-repo repository, installs the Keystone service; supports creating users and tenants and managing permissions on the identity service

                   From the iaas-repo repository, installs the Glance service; supports uploading images, deleting images, and creating snapshots

                   From the iaas-repo repository, installs the Nova service; supports launching instances, creating flavors, and deleting instances

                   From the iaas-repo repository, installs the Neutron service; supports creating, deleting, and editing networks

                   From the iaas-repo repository, installs the Horizon service; the OpenStack platform can then be managed through the Horizon Dashboard

                   From the iaas-repo repository, installs the Cinder service; supports creating block devices, managing block device attachments, and deleting block devices

                   From the iaas-repo repository, installs the Swift service; supports creating containers, uploading objects, and deleting objects

                   From the iaas-repo repository, installs the Heat service; orchestration is performed by editing template files

                   From the iaas-repo repository, installs the Ceilometer and Aodh monitoring services, which provide monitoring and alarming for the private cloud platform

                   From the iaas-repo repository, installs the Zun service, which provides containers on the OpenStack private cloud platform

2 images Provides CentOS7_1804.tar (a container image) for launching containers with the Zun service

                   Provides CentOS_7.5_x86_64_XD.qcow2, a CentOS 7.5 virtual machine image; CentOS 7.5 instances can be launched from it for operations and service deployment

                   Provides CentOS_7.2_x86_64_XD.qcow2, a CentOS 7.2 virtual machine image; CentOS 7.2 instances can be launched from it for operations and service deployment

                   Provides CentOS_6.5_x86_64_XD.qcow2, a CentOS 6.5 virtual machine image; CentOS 6.5 instances can be launched from it for operations and service deployment

1 Basic Environment Configuration

The cloud platform topology is shown in Figure 1, together with the IP address plan.

This deployment uses two nodes: a controller node and a compute node. eth0 is the internal management network and eth1 is the external network. When installing the operating system on the storage node, leave two blank partitions (sdb1 and sdb2 in this guide) to serve as the cinder and swift storage disks, and set up an FTP server as the yum repository for building the platform. The passwords in the configuration files must be set according to your actual environment.

1.1 CentOS 7 Installation Notes

【CentOS 7 version】

Use the CentOS 7 1804 release: CentOS-7-x86_64-DVD-1804.iso

【Creating the blank partitions】

Installing CentOS 7 differs noticeably from installing CentOS 6.5. During the CentOS 7 installation, every partition must be given a mount point, so it is not possible to create two blank disk partitions for the cinder and swift storage disks at install time.

Instead, leave enough unallocated disk space during installation. After the system is installed, create the new partitions with fdisk and format them with mkfs.xfs:

[root@compute ~]# fdisk /dev/sdb    # create two partitions, sdb1 and sdb2, 40 GB each

[root@compute ~]# mkfs.xfs /dev/sdb1

[root@compute ~]# mkfs.xfs /dev/sdb2
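The interactive fdisk session above can also be scripted by feeding fdisk its keystrokes on stdin. The sketch below only builds and prints that keystroke sequence; the device name /dev/sdb and the 40G sizes are taken from this guide and must be adjusted to your disk layout before uncommenting the destructive lines.

```shell
# Build the fdisk input for two 40 GB primary partitions:
# n=new, p=primary, partition number, default start sector, +40G size, w=write
make_fdisk_input() {
    printf 'n\np\n1\n\n+40G\nn\np\n2\n\n+40G\nw\n'
}

# Dry run: show the keystrokes that would be sent to fdisk
make_fdisk_input

# Real run (DESTRUCTIVE -- verify the device first!):
# make_fdisk_input | fdisk /dev/sdb
# mkfs.xfs /dev/sdb1 && mkfs.xfs /dev/sdb2
```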

1.2 Configuring the Network and Hostname

Edit (or create) the /etc/sysconfig/network-scripts/ifcfg-eth* file for each interface.

(1) controller node

Configure the network:

eth0: 192.168.100.10

DEVICE=eth0

TYPE=Ethernet

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=static

IPADDR=192.168.100.10

PREFIX=24

GATEWAY=192.168.100.1

eth1: 192.168.200.10

DEVICE=eth1

TYPE=Ethernet

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=static

IPADDR=192.168.200.10

PREFIX=24

Set the hostname:

# hostnamectl set-hostname controller

Press Ctrl+D to log out, then log back in.

(2) compute node

Configure the network:

eth0: 192.168.100.20

DEVICE=eth0

TYPE=Ethernet

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=static

IPADDR=192.168.100.20

PREFIX=24

GATEWAY=192.168.100.1

eth1: 192.168.200.20

DEVICE=eth1

TYPE=Ethernet

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=static

IPADDR=192.168.200.20

PREFIX=24

Set the hostname:

# hostnamectl set-hostname compute

Press Ctrl+D to log out, then log back in for the change to take effect.

(3) Configure the hostname-to-IP mapping (controller/compute)

[root@controller ~]# vi /etc/hosts

Add the following two lines:

192.168.100.10 controller

192.168.100.20 compute
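Before moving on, it is worth confirming that both names actually resolve on each node. A minimal helper sketch (the hostnames are parameters; `controller` and `compute` are the names used in this guide):

```shell
# Check that each given hostname resolves via the system resolver
# (which consults /etc/hosts). Fails on the first missing name.
check_hosts() {
    for h in "$@"; do
        if getent hosts "$h" > /dev/null; then
            echo "ok: $h"
        else
            echo "MISSING: $h" >&2
            return 1
        fi
    done
}

# On the real nodes: check_hosts controller compute
check_hosts localhost    # smoke test; localhost always resolves
```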

(4) Configure passwordless SSH login from the controller node to the compute node

[root@controller ~]# ssh-keygen    # press Enter at every prompt

[root@controller ~]# ssh-copy-id -i 192.168.100.20    # press Enter, type "yes", then enter the compute node's password

Test logging in to the compute node:

[root@controller ~]# ssh 192.168.100.20

[root@compute ~]# then press Ctrl+D to exit

1.3 Configuring the yum Repositories

# On both the controller and compute nodes

(1) Back up the existing repo files

mv /etc/yum.repos.d/* /opt/

(2) Create the repo file

【controller】

Create the centos.repo file in /etc/yum.repos.d:

vi /etc/yum.repos.d/centos.repo

[centos]

name=centos

baseurl=file:///opt/centos

gpgcheck=0

enabled=1

[iaas]

name=iaas

baseurl=file:///opt/iaas-repo

gpgcheck=0

enabled=1

【compute】

Create the centos.repo file in /etc/yum.repos.d:

vi /etc/yum.repos.d/centos.repo

[centos]

name=centos

baseurl=ftp://192.168.100.10/centos

gpgcheck=0

enabled=1

[iaas]

name=iaas

baseurl=ftp://192.168.100.10/iaas-repo

gpgcheck=0

enabled=1
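The compute node's repo file above can also be generated from a variable rather than typed by hand, which avoids typos in the IP. A sketch, assuming the controller IP from this guide; the output path is a parameter so it can be tried against a scratch file first:

```shell
REPO_HOST=192.168.100.10    # the controller / FTP server IP in this guide

# Write the two-section repo file to the given path.
write_repo() {
    cat > "$1" <<EOF
[centos]
name=centos
baseurl=ftp://$REPO_HOST/centos
gpgcheck=0
enabled=1

[iaas]
name=iaas
baseurl=ftp://$REPO_HOST/iaas-repo
gpgcheck=0
enabled=1
EOF
}

write_repo /tmp/centos.repo    # inspect it, then target /etc/yum.repos.d/centos.repo
```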

(3) Mount the ISO files (upload both images to /root on the controller node first)

【Mount CentOS-7-x86_64-DVD-1804.iso】

[root@controller ~]# ls

anaconda-ks.cfg CentOS-7-x86_64-DVD-1804.iso IaaS-OpenStack-x86-64_v1.0.iso

[root@controller ~]# mount -o loop CentOS-7-x86_64-DVD-1804.iso /mnt/

[root@controller ~]# mkdir /opt/centos

[root@controller ~]# cp -rvf /mnt/* /opt/centos/

[root@controller ~]# umount /mnt/

【Mount IaaS-OpenStack-x86-64_v1.0.iso】

[root@controller ~]# mount -o loop IaaS-OpenStack-x86-64_v1.0.iso /mnt/

[root@controller ~]# cp -rvf /mnt/* /opt/

[root@controller ~]# umount /mnt/

(4) Set up the FTP server; start it and enable it at boot

[root@controller ~]# yum install vsftpd -y

[root@controller ~]# vi /etc/vsftpd/vsftpd.conf

Add anon_root=/opt/

Save and quit.

[root@controller ~]# systemctl start vsftpd

[root@controller ~]# systemctl enable vsftpd

(5) Configure the firewall and SELinux

【controller/compute】

Edit the selinux config file:

# vi /etc/selinux/config

SELINUX=permissive

Stop the firewall and disable it at boot:

# systemctl stop firewalld.service

# systemctl disable firewalld.service

# yum remove -y NetworkManager firewalld

# yum -y install iptables-services

# systemctl enable iptables

# systemctl restart iptables

#

# iptables -X

# iptables -Z

# service iptables save

(6) Clean the cache and verify the repositories

【controller/compute】

# yum clean all

# yum list

1.4 Editing Environment Variables

# On both the controller and compute nodes

# yum install iaas-openstack -y

Edit the file /etc/iaas-openstack/openrc.sh. It holds every parameter used during the installation; set each value according to the comment on the line above it and your actual server environment.

In vi command mode, run :%s/^#//g to strip the leading # from every line,
then :%s/PASS=/PASS=000000/g to fill in all the PASS values.

HOST_IP=192.168.100.10

HOST_PASS=000000

HOST_NAME=controller

HOST_IP_NODE=192.168.100.20

HOST_PASS_NODE=000000

HOST_NAME_NODE=compute

network_segment_IP=192.168.100.0/24

RABBIT_USER=openstack

RABBIT_PASS=000000

DB_PASS=000000

DOMAIN_NAME=demo

ADMIN_PASS=000000

DEMO_PASS=000000

KEYSTONE_DBPASS=000000

GLANCE_DBPASS=000000

GLANCE_PASS=000000

NOVA_DBPASS=000000

NOVA_PASS=000000

NEUTRON_DBPASS=000000

NEUTRON_PASS=000000

METADATA_SECRET=000000

INTERFACE_IP=192.168.100.10/192.168.100.20 (controller IP / compute IP)

INTERFACE_NAME=eth1 (name of the external network NIC)

Physical_NAME=provider (name of the external provider network)

minvlan=101 (first VLAN ID of the VLAN range)

maxvlan=200 (last VLAN ID of the VLAN range)

CINDER_DBPASS=000000

CINDER_PASS=000000

BLOCK_DISK=sdb1 (blank partition)

SWIFT_PASS=000000

OBJECT_DISK=sdb2 (blank partition)

STORAGE_LOCAL_NET_IP=192.168.100.20

HEAT_DBPASS=000000

HEAT_PASS=000000

ZUN_DBPASS=000000

ZUN_PASS=000000

KURYR_DBPASS=000000

KURYR_PASS=000000

CEILOMETER_DBPASS=000000

CEILOMETER_PASS=000000

AODH_DBPASS=000000

AODH_PASS=000000
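After editing openrc.sh it is easy to leave one value empty, which only surfaces midway through an install script. The sketch below checks a representative subset of the variables above (the file path is a parameter so it can be exercised against a test copy; extend the list to taste):

```shell
# Source the given openrc file in a subshell and report any variable
# from the list that is still empty. Returns non-zero if any is unset.
check_openrc() (
    . "$1" || return 1
    rc=0
    for v in HOST_IP HOST_PASS HOST_NAME HOST_IP_NODE RABBIT_PASS \
             DB_PASS ADMIN_PASS KEYSTONE_DBPASS GLANCE_DBPASS NOVA_DBPASS \
             NEUTRON_DBPASS CINDER_DBPASS SWIFT_PASS; do
        eval "val=\$$v"
        if [ -z "$val" ]; then
            echo "unset: $v" >&2
            rc=1
        fi
    done
    return $rc
)

# On a real node: check_openrc /etc/iaas-openstack/openrc.sh
```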

Note: once one of the scripts below has been executed, the corresponding manual steps do not need to be performed -- they are built into the script. During execution, be sure to follow the node order!

1.5 Installing Services via Script

The basic configuration steps in 1.6-1.8 have been collected into a shell script for one-step installation:

# On both the Controller and Compute nodes

Run the script iaas-pre-host.sh:

[root@controller ~]# iaas-pre-host.sh

# After it finishes, press Ctrl+D and log back in for the changes to take effect.

1.6 Installing the OpenStack Packages

# On both the controller and compute nodes

# yum -y install openstack-utils openstack-selinux python-openstackclient

# yum upgrade

1.7 Configuring Name Resolution

Add the following entries to /etc/hosts.

(1) controller node

192.168.100.10 controller

192.168.100.20 compute

(2) compute node

192.168.100.10 controller

192.168.100.20 compute

1.8 Installing the chrony Service

(1) On both the controller and compute nodes

# yum install -y chrony

(2) Configure the controller node

Edit the /etc/chrony.conf file.

Add the following (and delete the default server entries):

server controller iburst

allow 192.168.100.0/24

local stratum 10

Start the NTP server:

# systemctl restart chronyd

# systemctl enable chronyd

(3) Configure the compute node

Edit the /etc/chrony.conf file.

Add the following (and delete the default server entries):

server controller iburst

Start the NTP server:

# systemctl restart chronyd

# systemctl enable chronyd

1.9 Installing the Database Services via Script

The basic-service steps in 1.10-1.13 have been collected into a shell script for one-step installation:

# Controller node

Run the script iaas-install-mysql.sh.

1.10 Installing the MySQL Database Service

(1) Install the mysql packages

# yum install -y mariadb mariadb-server python2-PyMySQL

(2) Edit the mysql configuration parameters

In the /etc/my.cnf file, add under [mysqld]:

max_connections=10000

default-storage-engine = innodb

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

(3) Start the service

systemctl enable mariadb.service

systemctl start mariadb.service

(4) Edit the /usr/lib/systemd/system/mariadb.service file

Under [Service], add the following two lines:

LimitNOFILE=10000

LimitNPROC=10000

(5) Edit the /etc/my.cnf.d/auth_gssapi.cnf file

Under [mariadb], comment out this line:

#plugin-load-add=auth_gssapi.so

(6) Reload systemd and restart the mariadb service

# systemctl daemon-reload

# service mariadb restart

(7) Configure MySQL

# mysql_secure_installation

Press Enter to confirm, then set the database root password.

Remove anonymous users? [Y/n] y

Disallow root login remotely? [Y/n] n

Remove test database and access to it? [Y/n] y

Reload privilege tables now? [Y/n] y

(8) compute node

yum -y install MySQL-python

1.11 Installing the RabbitMQ Service

# yum install -y rabbitmq-server

# systemctl enable rabbitmq-server.service

# systemctl restart rabbitmq-server.service

# rabbitmqctl add_user $RABBIT_USER $RABBIT_PASS

# rabbitmqctl set_permissions $RABBIT_USER ".*" ".*" ".*"

1.12 Installing the memcached Service

# yum install memcached python-memcached

# systemctl enable memcached.service

# systemctl restart memcached.service

1.13 Installing the etcd Service

# yum install etcd -y

(1) Edit the /etc/etcd/etcd.conf configuration file and add the following:

ETCD_LISTEN_PEER_URLS="http://192.168.100.10:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.100.10:2379"

ETCD_NAME="controller"

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.10:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.10:2379"

ETCD_INITIAL_CLUSTER="controller=http://192.168.100.10:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"

ETCD_INITIAL_CLUSTER_STATE="new"
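A missing key in etcd.conf typically shows up only as a failed service start. The sketch below checks that every key listed above is present in a given config file (the path is a parameter so the check can be tried on a copy first):

```shell
# Verify that each required ETCD_* key appears at the start of a line
# in the given file; print the first missing key and fail.
check_etcd_conf() {
    for key in ETCD_LISTEN_PEER_URLS ETCD_LISTEN_CLIENT_URLS ETCD_NAME \
               ETCD_INITIAL_ADVERTISE_PEER_URLS ETCD_ADVERTISE_CLIENT_URLS \
               ETCD_INITIAL_CLUSTER ETCD_INITIAL_CLUSTER_TOKEN \
               ETCD_INITIAL_CLUSTER_STATE; do
        grep -q "^$key=" "$1" || { echo "missing: $key" >&2; return 1; }
    done
    echo "etcd.conf ok"
}

# On a real node: check_etcd_conf /etc/etcd/etcd.conf
```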

(2) Start the service

# systemctl start etcd

# systemctl enable etcd

2 Installing the Keystone Identity Service

# Controller node

2.1 Installing keystone via Script

The identity-service steps in 2.2-2.10 have been collected into a shell script for one-step installation:

# Controller node

Run the script iaas-install-keystone.sh.

2.2 Installing the keystone Packages

yum install -y openstack-keystone httpd mod_wsgi

2.3 Creating the Keystone Database

# mysql -u root -p    (the database password set during the MySQL installation)

mysql> CREATE DATABASE keystone;

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

mysql> exit

2.4 Configuring the Database Connection

# crudini --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:$KEYSTONE_DBPASS@$HOST_NAME/keystone
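crudini --set FILE SECTION KEY VALUE, used throughout this guide, rewrites one key in an INI-style configuration file, creating the section or key if needed. For environments without crudini, here is a hypothetical minimal stand-in with the same argument order -- a sketch, not a full reimplementation (it does not treat the DEFAULT section specially and does not preserve a comment attached to the changed key):

```shell
# ini_set FILE SECTION KEY VALUE: set KEY = VALUE inside [SECTION].
ini_set() (
    file=$1 section=$2 key=$3 value=$4
    [ -f "$file" ] || touch "$file"
    awk -v s="$section" -v k="$key" -v v="$value" '
        BEGIN { done = 0 }
        /^\[/ {
            # leaving the target section without having written the key?
            if (insec && !done) { print k " = " v; done = 1 }
            insec = ($0 == "[" s "]")
            print; next
        }
        insec && $0 ~ ("^" k "[ \t]*=") {
            if (!done) { print k " = " v; done = 1 }
            next
        }
        { print }
        END {
            if (!done) {
                if (!insec) print "[" s "]"
                print k " = " v
            }
        }
    ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
)
```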

2.5 Populating the keystone Database

# su -s /bin/sh -c "keystone-manage db_sync" keystone

2.6 Creating the Token

ADMIN_TOKEN=$(openssl rand -hex 10)

# crudini --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN

# crudini --set /etc/keystone/keystone.conf token provider fernet

2.7 Creating Signing Keys and Certificates

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Edit /etc/httpd/conf/httpd.conf and replace ServerName www.example.com:80 with ServerName controller.

Create the /etc/httpd/conf.d/wsgi-keystone.conf file with the following content:

Listen 5000

Listen 35357

<VirtualHost *:5000>

WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-public

WSGIScriptAlias / /usr/bin/keystone-wsgi-public

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

LimitRequestBody 114688

<IfVersion >= 2.4>

ErrorLogFormat "%{cu}t %M"

</IfVersion>

ErrorLog /var/log/httpd/keystone.log

CustomLog /var/log/httpd/keystone_access.log combined

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

</VirtualHost>

<VirtualHost *:35357>

WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-admin

WSGIScriptAlias / /usr/bin/keystone-wsgi-admin

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

LimitRequestBody 114688

<IfVersion >= 2.4>

ErrorLogFormat "%{cu}t %M"

</IfVersion>

ErrorLog /var/log/httpd/keystone.log

CustomLog /var/log/httpd/keystone_access.log combined

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

</VirtualHost>

Alias /identity /usr/bin/keystone-wsgi-public

<Location /identity>

SetHandler wsgi-script

Options +ExecCGI

WSGIProcessGroup keystone-public

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

</Location>

Alias /identity_admin /usr/bin/keystone-wsgi-admin

<Location /identity_admin>

SetHandler wsgi-script

Options +ExecCGI

WSGIProcessGroup keystone-admin

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

</Location>

systemctl enable httpd.service

systemctl start httpd.service

2.8 Defining Users, Tenants, and Roles

(1) Set the environment variables

export OS_TOKEN=$ADMIN_TOKEN

export OS_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

(2) Create the keystone objects

openstack service create --name keystone --description "OpenStack Identity" identity

openstack endpoint create --region RegionOne identity public http://$HOST_NAME:5000/v3

openstack endpoint create --region RegionOne identity internal http://$HOST_NAME:5000/v3

openstack endpoint create --region RegionOne identity admin http://$HOST_NAME:35357/v3

openstack domain create --description "Default Domain" $DOMAIN_NAME

openstack project create --domain $DOMAIN_NAME --description "Admin Project" admin

openstack user create --domain $DOMAIN_NAME --password $ADMIN_PASS admin

openstack role create admin

openstack role add --project admin --user admin admin

openstack project create --domain $DOMAIN_NAME --description "Service Project" service

openstack project create --domain $DOMAIN_NAME --description "Demo Project" demo

openstack user create --domain $DOMAIN_NAME --password $DEMO_PASS demo

openstack role create user

openstack role add --project demo --user demo user

(3) Unset the environment variables

unset OS_TOKEN OS_URL

2.9 Creating demo-openrc.sh

Create the demo credentials file demo-openrc.sh:

export OS_PROJECT_DOMAIN_NAME=$DOMAIN_NAME

export OS_USER_DOMAIN_NAME=$DOMAIN_NAME

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=$DEMO_PASS

export OS_AUTH_URL=http://$HOST_NAME:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

2.10 Creating admin-openrc.sh

Create the admin credentials file admin-openrc.sh:

export OS_PROJECT_DOMAIN_NAME=$DOMAIN_NAME

export OS_USER_DOMAIN_NAME=$DOMAIN_NAME

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=$ADMIN_PASS

export OS_AUTH_URL=http://$HOST_NAME:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

Load the environment variables:

source admin-openrc.sh
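The credentials file above can be generated from the openrc.sh parameters instead of being written by hand. A sketch; DOMAIN_NAME, ADMIN_PASS, and HOST_NAME come from /etc/iaas-openstack/openrc.sh in this guide but are set inline here so the sketch is self-contained:

```shell
DOMAIN_NAME=demo ADMIN_PASS=000000 HOST_NAME=controller

# Write an admin-openrc.sh to the given path, expanding the variables now.
write_admin_openrc() {
    cat > "$1" <<EOF
export OS_PROJECT_DOMAIN_NAME=$DOMAIN_NAME
export OS_USER_DOMAIN_NAME=$DOMAIN_NAME
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_AUTH_URL=http://$HOST_NAME:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}

write_admin_openrc /tmp/admin-openrc.sh
. /tmp/admin-openrc.sh && echo "loaded as $OS_USERNAME @ $OS_AUTH_URL"
```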

3 Installing the Glance Image Service

# Controller node

3.1 Installing glance via Script

The image-service steps in 3.2-3.9 have been collected into a shell script for one-step installation:

# Controller node

Run the script iaas-install-glance.sh.

3.2 安装Glance镜像服务软件包

# yum install -y openstack-glance

3.3 Creating the Glance Database

mysql -u root -p

mysql> CREATE DATABASE glance;

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

3.4 Configuring the Database Connection

# crudini --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance

# crudini --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance

3.5 Populating the Image Service Database

# su -s /bin/sh -c "glance-manage db_sync" glance

3.6 Creating the User

# openstack user create --domain $DOMAIN_NAME --password $GLANCE_PASS glance

# openstack role add --project service --user glance admin

3.7 Configuring the Image Service

# crudini --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance

# crudini --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

# crudini --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://$HOST_NAME:5000

# crudini --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/glance/glance-api.conf keystone_authtoken auth_type password

# crudini --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/glance/glance-api.conf keystone_authtoken project_name service

# crudini --set /etc/glance/glance-api.conf keystone_authtoken username glance

# crudini --set /etc/glance/glance-api.conf keystone_authtoken password $GLANCE_PASS

# crudini --set /etc/glance/glance-api.conf paste_deploy flavor keystone

# crudini --set /etc/glance/glance-api.conf glance_store stores file,http

# crudini --set /etc/glance/glance-api.conf glance_store default_store file

# crudini --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

# crudini --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance

# crudini --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

# crudini --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://$HOST_NAME:5000

# crudini --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password

# crudini --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/glance/glance-registry.conf keystone_authtoken project_name service

# crudini --set /etc/glance/glance-registry.conf keystone_authtoken username glance

# crudini --set /etc/glance/glance-registry.conf keystone_authtoken password $GLANCE_PASS

# crudini --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

3.8 Creating the Service and API Endpoints

# openstack service create --name glance --description "OpenStack Image" image

# openstack endpoint create --region RegionOne image public http://$HOST_NAME:9292

# openstack endpoint create --region RegionOne image internal http://$HOST_NAME:9292

# openstack endpoint create --region RegionOne image admin http://$HOST_NAME:9292

3.9 Starting the Services

systemctl enable openstack-glance-api.service openstack-glance-registry.service

systemctl start openstack-glance-api.service openstack-glance-registry.service

3.10 Uploading an Image

On the controller node:

First download the provided system image to the local host; this example uploads the CentOS_7.5_x86_64 image. You can install wget and fetch the image from the FTP server -- note that in the current environment the image is already present on the system.

[root@controller ~]# source /etc/keystone/admin-openrc.sh

[root@controller ~]# glance image-create --name "CentOS7.5" --disk-format qcow2 --container-format bare --progress < /opt/images/CentOS_7.5_x86_64_XD.qcow2
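Uploading the wrong file with --disk-format qcow2 fails only later, at boot time. A quick pre-check sketch: qcow2 files start with the 4-byte magic "QFI\xfb", so the first three printable bytes identify the format (the image path is the one used above; any path can be passed):

```shell
# Return success if the file starts with the qcow2 magic bytes.
is_qcow2() {
    [ "$(head -c 3 "$1" 2>/dev/null)" = "QFI" ]
}

# On the controller:
# is_qcow2 /opt/images/CentOS_7.5_x86_64_XD.qcow2 && echo "qcow2 image, ok to upload"
```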

4 Installing the Nova Compute Service

# Controller node

4.1 Installing nova via Script

The compute-service steps in 4.2-4.15 have been collected into a shell script for one-step installation:

# Controller node

Run the script iaas-install-nova-controller.sh.

# Compute node

Run the script iaas-install-nova-compute.sh; note that the controller node's password must be entered during installation.

4.2 Installing the Nova Compute Packages

# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

4.3 Creating the Nova Database

# mysql -u root -p

mysql> CREATE DATABASE nova;

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

mysql> CREATE DATABASE IF NOT EXISTS nova_api;

mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

mysql> CREATE DATABASE IF NOT EXISTS nova_cell0;

mysql> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

mysql> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

Edit the database connections:

# crudini --set /etc/nova/nova.conf database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova

# crudini --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova_api

4.4 Populating the Compute Service Database

# su -s /bin/sh -c "nova-manage api_db sync" nova

# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

# su -s /bin/sh -c "nova-manage db sync" nova

4.5 Creating the User

# openstack user create --domain $DOMAIN_NAME --password $NOVA_PASS nova

# openstack role add --project service --user nova admin

4.6 Configuring the Compute Service

# crudini --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

# crudini --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/nova/nova.conf DEFAULT my_ip $HOST_IP

# crudini --set /etc/nova/nova.conf DEFAULT use_neutron True

# crudini --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

#

# crudini --set /etc/nova/nova.conf api auth_strategy keystone

#

# crudini --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova_api

#

# crudini --set /etc/nova/nova.conf database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova

#

# crudini --set /etc/nova/nova.conf keystone_authtoken auth_url http://$HOST_NAME:5000/v3

# crudini --set /etc/nova/nova.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/nova/nova.conf keystone_authtoken auth_type password

# crudini --set /etc/nova/nova.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf keystone_authtoken project_name service

# crudini --set /etc/nova/nova.conf keystone_authtoken username nova

# crudini --set /etc/nova/nova.conf keystone_authtoken password $NOVA_PASS

#

# crudini --set /etc/nova/nova.conf vnc enabled true

# crudini --set /etc/nova/nova.conf vnc server_listen $HOST_IP

# crudini --set /etc/nova/nova.conf vnc server_proxyclient_address $HOST_IP

#

# crudini --set /etc/nova/nova.conf glance api_servers http://$HOST_NAME:9292

#

# crudini --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

#

# crudini --set /etc/nova/nova.conf placement os_region_name RegionOne

# crudini --set /etc/nova/nova.conf placement project_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf placement project_name service

# crudini --set /etc/nova/nova.conf placement auth_type password

# crudini --set /etc/nova/nova.conf placement user_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf placement auth_url http://$HOST_NAME:5000/v3

# crudini --set /etc/nova/nova.conf placement username placement

# crudini --set /etc/nova/nova.conf placement password $NOVA_PASS

4.7 Creating the Service and API Endpoints

# openstack service create --name nova --description "OpenStack Compute" compute

# openstack endpoint create --region RegionOne compute public http://$HOST_NAME:8774/v2.1

# openstack endpoint create --region RegionOne compute internal http://$HOST_NAME:8774/v2.1

# openstack endpoint create --region RegionOne compute admin http://$HOST_NAME:8774/v2.1

# openstack user create --domain $DOMAIN_NAME --password $NOVA_PASS placement

# openstack role add --project service --user placement admin

# openstack service create --name placement --description "Placement API" placement

# openstack endpoint create --region RegionOne placement public http://$HOST_NAME:8778

# openstack endpoint create --region RegionOne placement internal http://$HOST_NAME:8778

# openstack endpoint create --region RegionOne placement admin http://$HOST_NAME:8778

4.8 Adding Configuration

Add the following to the /etc/httpd/conf.d/00-nova-placement-api.conf file:

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

4.9 Starting the Services

# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl restart httpd memcached

4.10 Verifying That the Nova Database Was Created

# nova-manage cell_v2 list_cells

# Compute node

4.11 Installing the Nova Compute Packages

# yum install openstack-nova-compute -y

4.12 Configuring the Nova Service

# crudini --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

# crudini --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/nova/nova.conf DEFAULT my_ip $HOST_IP_NODE

# crudini --set /etc/nova/nova.conf DEFAULT use_neutron True

# crudini --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

# crudini --set /etc/nova/nova.conf api auth_strategy keystone

# crudini --set /etc/nova/nova.conf keystone_authtoken auth_url http://$HOST_NAME:5000/v3

# crudini --set /etc/nova/nova.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/nova/nova.conf keystone_authtoken auth_type password

# crudini --set /etc/nova/nova.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf keystone_authtoken project_name service

# crudini --set /etc/nova/nova.conf keystone_authtoken username nova

# crudini --set /etc/nova/nova.conf keystone_authtoken password $NOVA_PASS

# crudini --set /etc/nova/nova.conf vnc enabled True

# crudini --set /etc/nova/nova.conf vnc server_listen 0.0.0.0

# crudini --set /etc/nova/nova.conf vnc server_proxyclient_address $HOST_IP_NODE

# crudini --set /etc/nova/nova.conf vnc novncproxy_base_url http://$HOST_IP:6080/vnc_auto.html

# crudini --set /etc/nova/nova.conf glance api_servers http://$HOST_NAME:9292

# crudini --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

# crudini --set /etc/nova/nova.conf placement os_region_name RegionOne

# crudini --set /etc/nova/nova.conf placement project_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf placement project_name service

# crudini --set /etc/nova/nova.conf placement auth_type password

# crudini --set /etc/nova/nova.conf placement user_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf placement auth_url http://$HOST_NAME:5000/v3

# crudini --set /etc/nova/nova.conf placement username placement

# crudini --set /etc/nova/nova.conf placement password $NOVA_PASS

4.13 Checking Processor Support for Hardware Acceleration

Run:

egrep -c '(vmx|svm)' /proc/cpuinfo

(1) If this returns 1 or greater, your system supports hardware acceleration and usually no extra configuration is needed.

(2) If it returns 0, your system does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM:

# crudini --set /etc/nova/nova.conf libvirt virt_type qemu
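The two-way decision above can be folded into one line. A sketch; the cpuinfo path is a parameter so the logic can be exercised against a sample file, and on a real compute node you would pass /proc/cpuinfo:

```shell
# Echo "kvm" when the CPU advertises vmx (Intel) or svm (AMD)
# virtualization flags in the given cpuinfo file, else "qemu".
pick_virt_type() {
    if [ "$(grep -Ec '(vmx|svm)' "$1")" -ge 1 ]; then
        echo kvm
    else
        echo qemu
    fi
}

# On the compute node:
# crudini --set /etc/nova/nova.conf libvirt virt_type "$(pick_virt_type /proc/cpuinfo)"
```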

4.14 Starting the Services

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service

4.15 添加计算节点

#controller

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
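discover_hosts需要在每次新增计算节点后手动执行;若希望控制节点定期自动发现新节点,也可以在nova.conf的[scheduler]段设置发现周期(示意片段,单位为秒):

```
[scheduler]
discover_hosts_in_cells_interval = 300
```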

5 安装Neutron网络服务

#Controller节点

5.1通过脚本安装neutron服务

5.2-5.11网络服务的操作命令已经编写成shell脚本,通过脚本进行一键安装。如下:

#Controller节点

执行脚本iaas-install-neutron-controller.sh进行安装

#Compute节点

执行脚本iaas-install-neutron-compute.sh进行安装

5.2创建Neutron数据库

mysql -u root -p

mysql> CREATE DATABASE neutron;

mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$NEUTRON_DBPASS';

mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$NEUTRON_DBPASS';
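上面的建库与授权语句也可以先拼成SQL文本再非交互执行,便于写入脚本(示意,函数名gen_neutron_sql为本文假设):

```shell
# 示意:生成建库授权SQL文本(函数名 gen_neutron_sql 为本文假设)
gen_neutron_sql() {
    local pass="$1"
    cat <<EOF
CREATE DATABASE IF NOT EXISTS neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '${pass}';
EOF
}
```

实际执行时可将输出通过管道交给mysql,例如 gen_neutron_sql "$NEUTRON_DBPASS" | mysql -u root -p。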

5.3创建用户

# openstack user create --domain $DOMAIN_NAME --password $NEUTRON_PASS neutron

# openstack role add --project service --user neutron admin

5.4创建Endpoint和API端点

# openstack service create --name neutron --description "OpenStack Networking" network

# openstack endpoint create --region RegionOne network public http://$HOST_NAME:9696

# openstack endpoint create --region RegionOne network internal http://$HOST_NAME:9696

# openstack endpoint create --region RegionOne network admin http://$HOST_NAME:9696

5.5安装neutron网络服务软件包

# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

5.6配置Neutron服务

# crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2

# crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins router

# crudini --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true

# crudini --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

# crudini --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true

# crudini --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true

# crudini --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:$NEUTRON_DBPASS@$HOST_NAME/neutron

# crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

# crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://$HOST_NAME:35357

# crudini --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_type password

# crudini --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/neutron/neutron.conf keystone_authtoken project_name service

# crudini --set /etc/neutron/neutron.conf keystone_authtoken username neutron

# crudini --set /etc/neutron/neutron.conf keystone_authtoken password $NEUTRON_PASS

# crudini --set /etc/neutron/neutron.conf nova auth_url http://$HOST_NAME:35357

# crudini --set /etc/neutron/neutron.conf nova auth_type password

# crudini --set /etc/neutron/neutron.conf nova project_domain_name $DOMAIN_NAME

# crudini --set /etc/neutron/neutron.conf nova user_domain_name $DOMAIN_NAME

# crudini --set /etc/neutron/neutron.conf nova region_name RegionOne

# crudini --set /etc/neutron/neutron.conf nova project_name service

# crudini --set /etc/neutron/neutron.conf nova username nova

# crudini --set /etc/neutron/neutron.conf nova password $NOVA_PASS

# crudini --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

# crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan

# crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan

# crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population

# crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security

# crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks $Physical_NAME

# crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges $Physical_NAME:$minvlan:$maxvlan

# crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges $minvlan:$maxvlan

# crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings $Physical_NAME:$INTERFACE_NAME

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $INTERFACE_IP

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

# crudini --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge

# crudini --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge

# crudini --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq

# crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

# #/etc/neutron/metadata_agent.ini

# crudini --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host $HOST_NAME

# crudini --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret $METADATA_SECRET

# crudini --set /etc/nova/nova.conf neutron url http://$HOST_NAME:9696

# crudini --set /etc/nova/nova.conf neutron auth_url http://$HOST_NAME:35357

# crudini --set /etc/nova/nova.conf neutron auth_type password

# crudini --set /etc/nova/nova.conf neutron project_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf neutron user_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf neutron region_name RegionOne

# crudini --set /etc/nova/nova.conf neutron project_name service

# crudini --set /etc/nova/nova.conf neutron username neutron

# crudini --set /etc/nova/nova.conf neutron password $NEUTRON_PASS

# crudini --set /etc/nova/nova.conf neutron service_metadata_proxy true

# crudini --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret $METADATA_SECRET
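以transport_url为例,上述crudini命令中的变量最终会展开为具体的连接串(以下取值仅为演示假设,请以实际环境变量文件为准):

```shell
# 示意:transport_url 的拼装方式(取值为演示假设)
RABBIT_USER=openstack
RABBIT_PASS=000000
HOST_NAME=controller
transport_url="rabbit://${RABBIT_USER}:${RABBIT_PASS}@${HOST_NAME}"
echo "$transport_url"
```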

5.7 创建数据库

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

5.8 启动服务和创建网桥

systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

#Compute节点

5.9 安装软件包

# yum install openstack-neutron-linuxbridge ebtables ipset net-tools -y

5.10 配置Neutron服务

# crudini --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

# crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

# crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://$HOST_NAME:35357

# crudini --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_type password

# crudini --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/neutron/neutron.conf keystone_authtoken project_name service

# crudini --set /etc/neutron/neutron.conf keystone_authtoken username neutron

# crudini --set /etc/neutron/neutron.conf keystone_authtoken password $NEUTRON_PASS

# crudini --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings $Physical_NAME:$INTERFACE_NAME

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $INTERFACE_IP

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true

# crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

# crudini --set /etc/nova/nova.conf neutron url http://$HOST_NAME:9696

# crudini --set /etc/nova/nova.conf neutron auth_url http://$HOST_NAME:35357

# crudini --set /etc/nova/nova.conf neutron auth_type password

# crudini --set /etc/nova/nova.conf neutron project_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf neutron user_domain_name $DOMAIN_NAME

# crudini --set /etc/nova/nova.conf neutron region_name RegionOne

# crudini --set /etc/nova/nova.conf neutron project_name service

# crudini --set /etc/nova/nova.conf neutron username neutron

# crudini --set /etc/nova/nova.conf neutron password $NEUTRON_PASS

5.11 启动服务和创建网桥

# systemctl restart openstack-nova-compute.service

# systemctl start neutron-linuxbridge-agent.service

# systemctl enable neutron-linuxbridge-agent.service

6 安装Dashboard服务

6.1通过脚本安装dashboard服务

6.2-6.4dashboard的操作命令已经编写成shell脚本,通过脚本进行一键安装。如下:

#Controller

执行脚本iaas-install-dashboard.sh进行安装

6.2安装Dashboard服务软件包

# yum install openstack-dashboard -y

6.3配置

修改/etc/openstack-dashboard/local_settings,内容如下:

import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*', 'two.example.com']
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
LOCAL_PATH = '/tmp'
SECRET_KEY='31880d3983dd796f54c8'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    },
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': False,
    'requires_keypair': False,
    'enable_quotas': True
}
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': True,
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
OPENSTACK_HEAT_STACK = {
    'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "kernel_id": _("Kernel ID"),
    "ramdisk_id": _("Ramdisk ID"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
INSTANCE_LOG_LENGTH = 35
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "UTC"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {
            'format': '%(levelname)s %(name)s %(message)s'
        },
        'operation': {
            'format': '%(message)s'
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'console',
        },
        'operation': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'operation',
        },
    },
    'loggers': {
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'horizon.operation_log': {
            'handlers': ['operation'],
            'level': 'INFO',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneauth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'oslo_policy': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'urllib3': {
            'handlers': ['null'],
            'propagate': False,
        },
        'chardet.charsetprober': {
            'handlers': ['null'],
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
                              'LAUNCH_INSTANCE_DEFAULTS',
                              'OPENSTACK_IMAGE_FORMATS',
                              'OPENSTACK_KEYSTONE_DEFAULT_DOMAIN',
                              'CREATE_IMAGE_DEFAULTS',
                              'ENFORCE_PASSWORD_CHECK']
ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

6.4启动服务

# systemctl restart httpd.service memcached.service

6.5访问

查看openstack版本

[root@controller ~]# openstack --version

openstack 3.14.3

打开浏览器访问Dashboard

http://controller(或本机内网ip)/dashboard

注:检查防火墙规则,确保允许http服务相关端口通行,或者关闭防火墙。

6.6创建云主机

(1)管理员->资源管理->云主机类型->创建云主机类型

(2)管理员->网络->网络->创建网络

(3)项目->网络->安全组->管理规则->添加规则(ICMP、TCP、UDP)

(4)项目->计算->实例->创建实例


排障:云主机无法正常启动时,问题通常出在上传的镜像与Nova服务的配置上。

compute节点执行:

[root@compute ~]# crudini --set /etc/nova/nova.conf libvirt virt_type qemu

[root@compute ~]# vi /etc/nova/nova.conf

在[libvirt]段中添加下面一行

cpu_mode = none

[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service

[root@compute ~]# systemctl restart libvirtd.service openstack-nova-compute.service

controller节点执行:

[root@controller ~]# ls /var/lib/glance/images/

af255d84-985c-40a3-b3d6-cde2effa77be

[root@controller ~]# openstack image set --property hw_disk_bus=ide --property hw_vif_model=e1000 af255d84-985c-40a3-b3d6-cde2effa77be

新建一个实例


账号:root

密码:000000


通过xshell连接


参考:https://chenx.top/1713586711-cankao/

7 安装Cinder块存储服务

7.1 通过脚本安装Cinder服务

7.2-7.12块存储服务的操作命令已经编写成shell脚本,通过脚本进行一键安装。如下:

#Controller

执行脚本iaas-install-cinder-controller.sh进行安装

#Compute节点

执行脚本iaas-install-cinder-compute.sh进行安装

7.2 安装Cinder块存储服务软件包

# yum install openstack-cinder -y

7.3 创建数据库

# mysql -u root -p

mysql> CREATE DATABASE cinder;

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '$CINDER_DBPASS';

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '$CINDER_DBPASS';

7.4 创建用户

# openstack user create --domain $DOMAIN_NAME --password $CINDER_PASS cinder

# openstack role add --project service --user cinder admin

7.5 创建Endpoint和API端点

# openstack service create --name cinder --description "OpenStack Block Storage" volume

# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

# openstack endpoint create --region RegionOne volume public http://$HOST_NAME:8776/v1/%\(tenant_id\)s

# openstack endpoint create --region RegionOne volume internal http://$HOST_NAME:8776/v1/%\(tenant_id\)s

# openstack endpoint create --region RegionOne volume admin http://$HOST_NAME:8776/v1/%\(tenant_id\)s

# openstack endpoint create --region RegionOne volumev2 public http://$HOST_NAME:8776/v2/%\(tenant_id\)s

# openstack endpoint create --region RegionOne volumev2 internal http://$HOST_NAME:8776/v2/%\(tenant_id\)s

# openstack endpoint create --region RegionOne volumev2 admin http://$HOST_NAME:8776/v2/%\(tenant_id\)s

# openstack endpoint create --region RegionOne volumev3 public http://$HOST_NAME:8776/v3/%\(tenant_id\)s

# openstack endpoint create --region RegionOne volumev3 internal http://$HOST_NAME:8776/v3/%\(tenant_id\)s

# openstack endpoint create --region RegionOne volumev3 admin http://$HOST_NAME:8776/v3/%\(tenant_id\)s

7.6 配置Cinder服务

# crudini --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:$CINDER_DBPASS@$HOST_NAME/cinder

# crudini --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit

# crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_host $HOST_NAME

# crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid $RABBIT_USER

# crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password $RABBIT_PASS

# crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone

# crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

# crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://$HOST_NAME:35357

# crudini --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_type password

# crudini --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/cinder/cinder.conf keystone_authtoken project_name service

# crudini --set /etc/cinder/cinder.conf keystone_authtoken username cinder

# crudini --set /etc/cinder/cinder.conf keystone_authtoken password $CINDER_PASS

# crudini --set /etc/cinder/cinder.conf DEFAULT my_ip $HOST_IP

# crudini --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

# crudini --set /etc/nova/nova.conf cinder os_region_name RegionOne

7.7 创建数据库

# su -s /bin/sh -c "cinder-manage db sync" cinder

7.8 启动服务

# systemctl restart openstack-nova-api.service

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

7.9 安装块存储软件

#compute

# yum install lvm2 device-mapper-persistent-data openstack-cinder targetcli python-keystone -y

# systemctl enable lvm2-lvmetad.service

# systemctl restart lvm2-lvmetad.service

7.10 创建LVM物理和逻辑卷

以磁盘/dev/sda为例

# pvcreate -f /dev/sda

# vgcreate cinder-volumes /dev/sda
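官方安装指南中通常还建议在存储节点的/etc/lvm/lvm.conf的devices段配置过滤器,只允许LVM扫描cinder卷组所在的磁盘(示意片段,设备名/dev/sda沿用上文示例,请按实际环境调整):

```
devices {
        filter = [ "a/sda/", "r/.*/" ]
}
```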

7.11 修改Cinder配置文件

# crudini --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:$CINDER_DBPASS@$HOST_NAME/cinder

# crudini --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone

# crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm

# crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

# crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://$HOST_NAME:35357

# crudini --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_type password

# crudini --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/cinder/cinder.conf keystone_authtoken project_name service

# crudini --set /etc/cinder/cinder.conf keystone_authtoken username cinder

# crudini --set /etc/cinder/cinder.conf keystone_authtoken password $CINDER_PASS

# crudini --set /etc/cinder/cinder.conf DEFAULT my_ip $HOST_IP_NODE

# crudini --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver

# crudini --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes

# crudini --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi

# crudini --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm

# crudini --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://$HOST_NAME:9292

# crudini --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

7.12 重启服务

# systemctl enable openstack-cinder-volume.service target.service

# systemctl restart openstack-cinder-volume.service target.service

7.13 验证

#Controller

使用cinder create 创建一个新的卷

# cinder create --display-name myVolume 1

通过cinder list 命令查看是否正确创建

# cinder list

8 安装Swift对象存储服务

8.1通过脚本安装Swift服务

8.2-8.13对象存储服务的操作命令已经编写成shell脚本,通过脚本进行一键安装。如下:

#Controller

执行脚本iaas-install-swift-controller.sh进行安装

#Compute节点

执行脚本iaas-install-swift-compute.sh进行安装,注意安装过程中提示输入控制节点密码

8.2 安装Swift对象存储服务软件包

# yum install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached -y

8.3创建用户

# openstack user create --domain $DOMAIN_NAME --password $SWIFT_PASS swift

# openstack role add --project service --user swift admin

8.4创建Endpoint和API端点

# openstack service create --name swift --description "OpenStack Object Storage" object-store

# openstack endpoint create --region RegionOne object-store public http://$HOST_NAME:8080/v1/AUTH_%\(tenant_id\)s

# openstack endpoint create --region RegionOne object-store internal http://$HOST_NAME:8080/v1/AUTH_%\(tenant_id\)s

# openstack endpoint create --region RegionOne object-store admin http://$HOST_NAME:8080/v1

8.5 编辑/etc/swift/proxy-server.conf

编辑配置文件如下

[DEFAULT]

bind_port = 8080

swift_dir = /etc/swift

user = swift

[pipeline:main]

pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]

use = egg:swift#proxy

account_autocreate = True

[filter:tempauth]

use = egg:swift#tempauth

user_admin_admin = admin .admin .reseller_admin

user_test_tester = testing .admin

user_test2_tester2 = testing2 .admin

user_test_tester3 = testing3

user_test5_tester5 = testing5 service

[filter:authtoken]

paste.filter_factory = keystonemiddleware.auth_token:filter_factory

auth_uri = http://$HOST_NAME:5000

auth_url = http://$HOST_NAME:35357

memcached_servers = $HOST_NAME:11211

auth_type = password

project_domain_name = $DOMAIN_NAME

user_domain_name = $DOMAIN_NAME

project_name = service

username = swift

password = $SWIFT_PASS

delay_auth_decision = True

[filter:keystoneauth]

use = egg:swift#keystoneauth

operator_roles = admin,user

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:cache]

memcache_servers = $HOST_NAME:11211

use = egg:swift#memcache

[filter:ratelimit]

use = egg:swift#ratelimit

[filter:domain_remap]

use = egg:swift#domain_remap

[filter:catch_errors]

use = egg:swift#catch_errors

[filter:cname_lookup]

use = egg:swift#cname_lookup

[filter:staticweb]

use = egg:swift#staticweb

[filter:tempurl]

use = egg:swift#tempurl

[filter:formpost]

use = egg:swift#formpost

[filter:name_check]

use = egg:swift#name_check

[filter:list-endpoints]

use = egg:swift#list_endpoints

[filter:proxy-logging]

use = egg:swift#proxy_logging

[filter:bulk]

use = egg:swift#bulk

[filter:slo]

use = egg:swift#slo

[filter:dlo]

use = egg:swift#dlo

[filter:container-quotas]

use = egg:swift#container_quotas

[filter:account-quotas]

use = egg:swift#account_quotas

[filter:gatekeeper]

use = egg:swift#gatekeeper

[filter:container_sync]

use = egg:swift#container_sync

[filter:xprofile]

use = egg:swift#xprofile

[filter:versioned_writes]

use = egg:swift#versioned_writes

8.6 创建账号、容器、对象

存储节点存储磁盘名称以sdb为例

swift-ring-builder account.builder create 10 1 1

swift-ring-builder account.builder add --region 1 --zone 1 --ip $STORAGE_LOCAL_NET_IP --port 6002 --device $OBJECT_DISK --weight 100

swift-ring-builder account.builder

swift-ring-builder account.builder rebalance

swift-ring-builder container.builder create 10 1 1

swift-ring-builder container.builder add --region 1 --zone 1 --ip $STORAGE_LOCAL_NET_IP --port 6001 --device $OBJECT_DISK --weight 100

swift-ring-builder container.builder

swift-ring-builder container.builder rebalance

swift-ring-builder object.builder create 10 1 1

swift-ring-builder object.builder add --region 1 --zone 1 --ip $STORAGE_LOCAL_NET_IP --port 6000 --device $OBJECT_DISK --weight 100

swift-ring-builder object.builder

swift-ring-builder object.builder rebalance
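create后的三个参数依次为分区幂(part_power)、副本数(replicas)和分区最小迁移间隔(min_part_hours,单位小时),分区总数等于2的part_power次方。可用下面的小例子验证(示意):

```shell
# 示意:swift-ring-builder create <part_power> <replicas> <min_part_hours>
part_power=10        # 分区幂
replicas=1           # 副本数
min_part_hours=1     # 分区最小迁移间隔(小时)
partitions=$((1 << part_power))   # 分区总数 = 2^part_power
echo "partitions=${partitions} replicas=${replicas}"
```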

8.7 编辑/etc/swift/swift.conf文件

编辑如下

[swift-hash]

swift_hash_path_suffix = changeme

swift_hash_path_prefix = changeme

[storage-policy:0]

name = Policy-0

default = yes

aliases = yellow, orange

[swift-constraints]

8.8 启动服务和赋予权限

chown -R root:swift /etc/swift

systemctl enable openstack-swift-proxy.service memcached.service

systemctl restart openstack-swift-proxy.service memcached.service

8.9 安装软件包

#Compute节点

存储节点存储磁盘名称以sdb为例

# yum install xfsprogs rsync openstack-swift-account openstack-swift-container openstack-swift-object -y

# mkfs.xfs -i size=1024 -f /dev/sdb

# echo "/dev/sdb /swift/node/sdb xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab

# mkdir -p /swift/node/sdb

# mount /dev/sdb /swift/node/sdb

# scp controller:/etc/swift/*.ring.gz /etc/swift/
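fstab条目固定为6个字段(设备、挂载点、文件系统类型、挂载选项、dump、fsck顺序),可以用一个小函数自检上面追加的条目(示意,函数名check_fstab_entry为本文假设):

```shell
# 示意:检查一条fstab条目是否为6个字段,且文件系统类型为xfs
check_fstab_entry() {
    # 有意按空白把条目拆分为位置参数
    set -- $1
    [ $# -eq 6 ] && [ "$3" = "xfs" ]
}
```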

8.10 配置rsync

(1)编辑/etc/rsyncd.conf文件如下

pid file = /var/run/rsyncd.pid

log file = /var/log/rsyncd.log

uid = swift

gid = swift

address = $STORAGE_LOCAL_NET_IP

[account]

path = /swift/node

read only = false

write only = no

list = yes

incoming chmod = 0644

outgoing chmod = 0644

max connections = 25

lock file = /var/lock/account.lock

[container]

path = /swift/node

read only = false

write only = no

list = yes

incoming chmod = 0644

outgoing chmod = 0644

max connections = 25

lock file = /var/lock/container.lock

[object]

path = /swift/node

read only = false

write only = no

list = yes

incoming chmod = 0644

outgoing chmod = 0644

max connections = 25

lock file = /var/lock/object.lock

[swift_server]

path = /etc/swift

read only = true

write only = no

list = yes

incoming chmod = 0644

outgoing chmod = 0644

max connections = 5

lock file = /var/lock/swift_server.lock

(2)启动服务

# systemctl enable rsyncd.service

# systemctl restart rsyncd.service

8.11 配置账号、容器和对象

(1)修改/etc/swift/account-server.conf配置文件

[DEFAULT]

bind_port = 6002

user = swift

swift_dir = /etc/swift

devices = /swift/node

mount_check = false

[pipeline:main]

pipeline = healthcheck recon account-server

[app:account-server]

use = egg:swift#account

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:recon]

use = egg:swift#recon

recon_cache_path = /var/cache/swift

[account-replicator]

[account-auditor]

[account-reaper]

[filter:xprofile]

use = egg:swift#xprofile

(2)修改/etc/swift/container-server.conf配置文件

[DEFAULT]

bind_port = 6001

user = swift

swift_dir = /etc/swift

devices = /swift/node

mount_check = false

[pipeline:main]

pipeline = healthcheck recon container-server

[app:container-server]

use = egg:swift#container

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:recon]

use = egg:swift#recon

recon_cache_path = /var/cache/swift

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]

[filter:xprofile]

use = egg:swift#xprofile

(3)修改/etc/swift/object-server.conf配置文件

[DEFAULT]

bind_port = 6000

user = swift

swift_dir = /etc/swift

devices = /swift/node

mount_check = false

[pipeline:main]

pipeline = healthcheck recon object-server

[app:object-server]

use = egg:swift#object

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:recon]

use = egg:swift#recon

recon_cache_path = /var/cache/swift

recon_lock_path = /var/lock

[object-replicator]

[object-reconstructor]

[object-updater]

[object-auditor]

[filter:xprofile]

use = egg:swift#xprofile

8.11 Edit the Swift configuration file

Edit /etc/swift/swift.conf:

[swift-hash]

swift_hash_path_suffix = changeme

swift_hash_path_prefix = changeme

[storage-policy:0]

name = Policy-0

default = yes

aliases = yellow, orange

[swift-constraints]
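The swift_hash_path_suffix and swift_hash_path_prefix values seed the hashes Swift uses to place data on the ring, so they must be identical on every node and must never change after deployment. The sketch below is illustrative only (the exact scheme is an assumption for this example): conceptually, the placement hash mixes the prefix, the /account/container/object path, and the suffix into an MD5 digest, so changing either value moves every object.

```shell
# Illustrative sketch (assumed scheme): placement hash = md5(prefix + path + suffix).
# Changing prefix or suffix yields different digests, i.e. different ring placement.
prefix="changeme"
suffix="changeme"
path="/AUTH_demo/container1/object1"   # hypothetical object path

h=$(printf '%s%s%s' "$prefix" "$path" "$suffix" | md5sum | awk '{print $1}')
echo "$h"   # a 32-character hex digest
```

This is why the manual sets both values before building the rings: once data has been stored, altering them makes every existing object unreachable at its hashed location.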

8.12 Set permissions and restart the services

# chown -R swift:swift /swift/node

# mkdir -p /var/cache/swift

# chown -R root:swift /var/cache/swift

# chmod -R 775 /var/cache/swift

# chown -R root:swift /etc/swift

# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

# systemctl restart openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

# systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

# systemctl restart openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

# systemctl restart openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

9 Install the Heat Orchestration Service

# Controller node

9.1 Install the Heat service via script

The commands in sections 9.2-9.8 for the orchestration service have been packaged into a shell script for one-step installation, as follows:

# Controller node

Run the script iaas-install-heat.sh to perform the installation.

9.2 Install the Heat orchestration service packages

# yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine openstack-heat-ui -y

9.3 Create the database

# mysql -u root -p

mysql> CREATE DATABASE heat;

mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY '$HEAT_DBPASS';

mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY '$HEAT_DBPASS';

9.4 Create the users and roles

# openstack user create --domain $DOMAIN_NAME --password $HEAT_PASS heat

# openstack role add --project service --user heat admin

# openstack domain create --description "Stack projects and users" heat

# openstack user create --domain heat --password $HEAT_PASS heat_domain_admin

# openstack role add --domain heat --user-domain heat --user heat_domain_admin admin

# openstack role create heat_stack_owner

# openstack role add --project demo --user demo heat_stack_owner

# openstack role create heat_stack_user

9.5 Create the services and API endpoints

# openstack service create --name heat --description "Orchestration" orchestration

# openstack service create --name heat-cfn --description "Orchestration" cloudformation

# openstack endpoint create --region RegionOne orchestration public http://$HOST_NAME:8004/v1/%\(tenant_id\)s

# openstack endpoint create --region RegionOne orchestration internal http://$HOST_NAME:8004/v1/%\(tenant_id\)s

# openstack endpoint create --region RegionOne orchestration admin http://$HOST_NAME:8004/v1/%\(tenant_id\)s

# openstack endpoint create --region RegionOne cloudformation public http://$HOST_NAME:8000/v1

# openstack endpoint create --region RegionOne cloudformation internal http://$HOST_NAME:8000/v1

# openstack endpoint create --region RegionOne cloudformation admin http://$HOST_NAME:8000/v1

9.6 Configure the Heat service

# crudini --set /etc/heat/heat.conf database connection mysql+pymysql://heat:$HEAT_DBPASS@$HOST_NAME/heat

# crudini --set /etc/heat/heat.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/heat/heat.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

# crudini --set /etc/heat/heat.conf keystone_authtoken auth_url http://$HOST_NAME:35357

# crudini --set /etc/heat/heat.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/heat/heat.conf keystone_authtoken auth_type password

# crudini --set /etc/heat/heat.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/heat/heat.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/heat/heat.conf keystone_authtoken project_name service

# crudini --set /etc/heat/heat.conf keystone_authtoken username heat

# crudini --set /etc/heat/heat.conf keystone_authtoken password $HEAT_PASS

# crudini --set /etc/heat/heat.conf trustee auth_plugin password

# crudini --set /etc/heat/heat.conf trustee auth_url http://$HOST_NAME:35357

# crudini --set /etc/heat/heat.conf trustee username heat

# crudini --set /etc/heat/heat.conf trustee password $HEAT_PASS

# crudini --set /etc/heat/heat.conf trustee user_domain_name $DOMAIN_NAME

# crudini --set /etc/heat/heat.conf clients_keystone auth_uri http://$HOST_NAME:35357

# crudini --set /etc/heat/heat.conf DEFAULT heat_metadata_server_url http://$HOST_NAME:8000

# crudini --set /etc/heat/heat.conf DEFAULT heat_waitcondition_server_url http://$HOST_NAME:8000/v1/waitcondition

# crudini --set /etc/heat/heat.conf DEFAULT stack_domain_admin heat_domain_admin

# crudini --set /etc/heat/heat.conf DEFAULT stack_domain_admin_password $HEAT_PASS

# crudini --set /etc/heat/heat.conf DEFAULT stack_user_domain_name heat
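For reference, after the crudini commands above the relevant sections of /etc/heat/heat.conf take roughly this shape (the $... placeholders are expanded from the environment variables in openrc.sh when the commands run, so the file itself contains the concrete values):

```ini
[database]
connection = mysql+pymysql://heat:$HEAT_DBPASS@$HOST_NAME/heat

[DEFAULT]
transport_url = rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME
heat_metadata_server_url = http://$HOST_NAME:8000
heat_waitcondition_server_url = http://$HOST_NAME:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = $HEAT_PASS
stack_user_domain_name = heat

[trustee]
auth_plugin = password
auth_url = http://$HOST_NAME:35357
username = heat
password = $HEAT_PASS
user_domain_name = $DOMAIN_NAME
```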

9.7 Populate the database

# su -s /bin/sh -c "heat-manage db_sync" heat

9.8 Start the services

# systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service

# systemctl restart openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
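Once the services are running, a minimal stack can confirm that heat-api and heat-engine cooperate. The template below is an illustrative sketch (the file name minimal.yaml and stack name demo-stack are arbitrary, and 2016-10-14 is just one commonly supported template version):

```yaml
heat_template_version: 2016-10-14
description: Minimal stack used only to verify the Heat engine
resources: {}
outputs:
  status:
    description: Static output shown once the stack reaches CREATE_COMPLETE
    value: heat is working
```

Create and inspect it with `openstack stack create -t minimal.yaml demo-stack` followed by `openstack stack list`, then clean up with `openstack stack delete demo-stack`.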

10 Install the Zun Service

10.1 Install the Zun service via script

The commands in sections 10.2-10.12 for the Zun service have been packaged into shell scripts for one-step installation, as follows:

# Controller node

Run the script iaas-install-zun-controller.sh to perform the installation.

# Compute node

Run the script iaas-install-zun-compute.sh to perform the installation.

10.2 Install the Zun service packages

# Controller node

# yum install python-pip git openstack-zun openstack-zun-ui -y

10.3 Create the database

# mysql -u root -p

mysql> CREATE DATABASE zun;

mysql> GRANT ALL PRIVILEGES ON zun.* TO zun@'localhost' IDENTIFIED BY '$ZUN_DBPASS';

mysql> GRANT ALL PRIVILEGES ON zun.* TO zun@'%' IDENTIFIED BY '$ZUN_DBPASS';

10.4 Create the users

# openstack user create --domain $DOMAIN_NAME --password $ZUN_PASS zun

# openstack role add --project service --user zun admin

# openstack user create --domain $DOMAIN_NAME --password $KURYR_PASS kuryr

# openstack role add --project service --user kuryr admin

10.5 Create the service and API endpoints

# openstack service create --name zun --description "Container Service" container

# openstack endpoint create --region RegionOne container public http://$HOST_NAME:9517/v1

# openstack endpoint create --region RegionOne container internal http://$HOST_NAME:9517/v1

# openstack endpoint create --region RegionOne container admin http://$HOST_NAME:9517/v1

10.6 Configure the Zun service

# crudini --set /etc/zun/zun.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/zun/zun.conf DEFAULT log_file /var/log/zun

# crudini --set /etc/zun/zun.conf api host_ip $HOST_IP

# crudini --set /etc/zun/zun.conf api port 9517

# crudini --set /etc/zun/zun.conf database connection mysql+pymysql://zun:$ZUN_DBPASS@$HOST_NAME/zun

# crudini --set /etc/zun/zun.conf keystone_auth memcached_servers $HOST_NAME:11211

# crudini --set /etc/zun/zun.conf keystone_auth auth_uri http://$HOST_NAME:5000

# crudini --set /etc/zun/zun.conf keystone_auth project_domain_name $DOMAIN_NAME

# crudini --set /etc/zun/zun.conf keystone_auth project_name service

# crudini --set /etc/zun/zun.conf keystone_auth user_domain_name $DOMAIN_NAME

# crudini --set /etc/zun/zun.conf keystone_auth password $ZUN_PASS

# crudini --set /etc/zun/zun.conf keystone_auth username zun

# crudini --set /etc/zun/zun.conf keystone_auth auth_url http://$HOST_NAME:5000

# crudini --set /etc/zun/zun.conf keystone_auth auth_type password

# crudini --set /etc/zun/zun.conf keystone_auth auth_version v3

# crudini --set /etc/zun/zun.conf keystone_auth auth_protocol http

# crudini --set /etc/zun/zun.conf keystone_auth service_token_roles_required True

# crudini --set /etc/zun/zun.conf keystone_auth endpoint_type internalURL

# crudini --set /etc/zun/zun.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/zun/zun.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

# crudini --set /etc/zun/zun.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/zun/zun.conf keystone_authtoken project_name service

# crudini --set /etc/zun/zun.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/zun/zun.conf keystone_authtoken password $ZUN_PASS

# crudini --set /etc/zun/zun.conf keystone_authtoken username zun

# crudini --set /etc/zun/zun.conf keystone_authtoken auth_url http://$HOST_NAME:5000

# crudini --set /etc/zun/zun.conf keystone_authtoken auth_type password

# crudini --set /etc/zun/zun.conf keystone_authtoken auth_version v3

# crudini --set /etc/zun/zun.conf keystone_authtoken auth_protocol http

# crudini --set /etc/zun/zun.conf keystone_authtoken service_token_roles_required True

# crudini --set /etc/zun/zun.conf keystone_authtoken endpoint_type internalURL

# crudini --set /etc/zun/zun.conf oslo_concurrency lock_path /var/lib/zun/tmp

# crudini --set /etc/zun/zun.conf oslo_messaging_notifications driver messaging

# crudini --set /etc/zun/zun.conf websocket_proxy wsproxy_host $HOST_IP

# crudini --set /etc/zun/zun.conf websocket_proxy wsproxy_port 6784
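As a reference for the result, the API and websocket-proxy sections written into /etc/zun/zun.conf by the commands above look like this ($HOST_IP is expanded to the controller's address when the commands run):

```ini
[api]
host_ip = $HOST_IP
port = 9517

[websocket_proxy]
wsproxy_host = $HOST_IP
wsproxy_port = 6784
```

Port 9517 is the zun-api endpoint registered in section 10.5, and 6784 is the websocket proxy used for interactive container consoles.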

10.7 Populate the database

# su -s /bin/sh -c "zun-db-manage upgrade" zun

10.8 Start the services

# systemctl enable zun-api zun-wsproxy

# systemctl restart zun-api zun-wsproxy

# systemctl restart httpd memcached

10.9 Install the packages

# Compute node

# yum install -y yum-utils device-mapper-persistent-data lvm2

# yum install docker-ce python-pip git kuryr-libnetwork openstack-zun-compute -y

10.10 Configure the services

# crudini --set /etc/kuryr/kuryr.conf DEFAULT bindir /usr/libexec/kuryr

# crudini --set /etc/kuryr/kuryr.conf neutron auth_uri http://$HOST_NAME:5000

# crudini --set /etc/kuryr/kuryr.conf neutron auth_url http://$HOST_NAME:35357

# crudini --set /etc/kuryr/kuryr.conf neutron username kuryr

# crudini --set /etc/kuryr/kuryr.conf neutron user_domain_name $DOMAIN_NAME

# crudini --set /etc/kuryr/kuryr.conf neutron password $KURYR_PASS

# crudini --set /etc/kuryr/kuryr.conf neutron project_name service

# crudini --set /etc/kuryr/kuryr.conf neutron project_domain_name $DOMAIN_NAME

# crudini --set /etc/kuryr/kuryr.conf neutron auth_type password

# crudini --set /etc/zun/zun.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/zun/zun.conf DEFAULT state_path /var/lib/zun

# crudini --set /etc/zun/zun.conf DEFAULT log_file /var/log/zun

# crudini --set /etc/zun/zun.conf database connection mysql+pymysql://zun:$ZUN_DBPASS@$HOST_NAME/zun

# crudini --set /etc/zun/zun.conf keystone_auth memcached_servers $HOST_NAME:11211

# crudini --set /etc/zun/zun.conf keystone_auth auth_uri http://$HOST_NAME:5000

# crudini --set /etc/zun/zun.conf keystone_auth project_domain_name $DOMAIN_NAME

# crudini --set /etc/zun/zun.conf keystone_auth project_name service

# crudini --set /etc/zun/zun.conf keystone_auth user_domain_name $DOMAIN_NAME

# crudini --set /etc/zun/zun.conf keystone_auth password $ZUN_PASS

# crudini --set /etc/zun/zun.conf keystone_auth username zun

# crudini --set /etc/zun/zun.conf keystone_auth auth_url http://$HOST_NAME:5000

# crudini --set /etc/zun/zun.conf keystone_auth auth_type password

# crudini --set /etc/zun/zun.conf keystone_auth auth_version v3

# crudini --set /etc/zun/zun.conf keystone_auth auth_protocol http

# crudini --set /etc/zun/zun.conf keystone_auth service_token_roles_required True

# crudini --set /etc/zun/zun.conf keystone_auth endpoint_type internalURL

# crudini --set /etc/zun/zun.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/zun/zun.conf keystone_authtoken auth_uri http://$HOST_NAME:5000

# crudini --set /etc/zun/zun.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/zun/zun.conf keystone_authtoken project_name service

# crudini --set /etc/zun/zun.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/zun/zun.conf keystone_authtoken password $ZUN_PASS

# crudini --set /etc/zun/zun.conf keystone_authtoken username zun

# crudini --set /etc/zun/zun.conf keystone_authtoken auth_url http://$HOST_NAME:5000

# crudini --set /etc/zun/zun.conf keystone_authtoken auth_type password

# crudini --set /etc/zun/zun.conf websocket_proxy base_url ws://$HOST_NAME:6784/

# crudini --set /etc/zun/zun.conf oslo_concurrency lock_path /var/lib/zun/tmp

# crudini --set /etc/kuryr/kuryr.conf DEFAULT capability_scope global

10.11 Adjust kernel parameters

Edit /etc/sysctl.conf and add the following content:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

Apply the configuration:

# sysctl -p

10.12 Start the services

# mkdir -p /etc/systemd/system/docker.service.d

Create a systemd drop-in file under /etc/systemd/system/docker.service.d/ (for example, docker.conf) and add the following content; the empty ExecStart= line clears the packaged start command before redefining it:

[Service]

ExecStart=

ExecStart=/usr/bin/dockerd --group zun -H tcp://$HOST_NAME_NODE:2375 -H unix:///var/run/docker.sock --cluster-store etcd://$HOST_NAME:2379

# systemctl daemon-reload

# systemctl restart docker

# systemctl enable docker

# systemctl enable kuryr-libnetwork

# systemctl restart kuryr-libnetwork

# systemctl enable zun-compute

# systemctl restart zun-compute

10.13 Upload an image

Run on the controller node:

Using the CentOS7_1804.tar image as an example (the archive is shipped inside the XianDian-IaaS-v2.4.iso image package), upload the Docker image to Glance so that containers can be started from it through OpenStack.

[root@controller ~]# source /etc/keystone/admin-openrc.sh

[root@controller ~]# openstack image create centos7.5 --public --container-format docker --disk-format raw < /opt/images/CentOS7_1804.tar

10.14 Start a container

This step can be skipped for now.

Start a container from the image stored in Glance; run on the controller node:

# zun run --image-driver glance centos7.5

# zun list

11 Install the Ceilometer Telemetry Service

11.1 Install the Ceilometer service via script

The commands in sections 11.2-11.14 for the telemetry service have been packaged into shell scripts for one-step installation, as follows:

# Controller node

Run the script iaas-install-ceilometer-controller.sh to perform the installation.

# Compute node

Run the script iaas-install-ceilometer-compute.sh to perform the installation.

11.2 Install the Ceilometer telemetry service packages

# Controller node

# yum install openstack-gnocchi-api openstack-gnocchi-metricd python2-gnocchiclient openstack-ceilometer-notification openstack-ceilometer-central python2-ceilometerclient python-ceilometermiddleware -y

11.3 Create the database

# mysql -u root -p

mysql> CREATE DATABASE gnocchi;

mysql> GRANT ALL PRIVILEGES ON gnocchi.* TO gnocchi@'localhost' IDENTIFIED BY '$CEILOMETER_DBPASS';

mysql> GRANT ALL PRIVILEGES ON gnocchi.* TO gnocchi@'%' IDENTIFIED BY '$CEILOMETER_DBPASS';

11.4 Create the users

# openstack user create --domain $DOMAIN_NAME --password $CEILOMETER_PASS ceilometer

# openstack role add --project service --user ceilometer admin

# openstack user create --domain $DOMAIN_NAME --password $CEILOMETER_PASS gnocchi

# openstack role add --project service --user gnocchi admin

# openstack role create ResellerAdmin

# openstack role add --project service --user ceilometer ResellerAdmin

11.5 Create the services and API endpoints

# openstack service create --name ceilometer --description "OpenStack Telemetry Service" metering

# openstack service create --name gnocchi --description "Metric Service" metric

# openstack endpoint create --region RegionOne metric public http://$HOST_NAME:8041

# openstack endpoint create --region RegionOne metric internal http://$HOST_NAME:8041

# openstack endpoint create --region RegionOne metric admin http://$HOST_NAME:8041

11.6 Configure Ceilometer

# crudini --set /etc/gnocchi/gnocchi.conf DEFAULT log_dir /var/log/gnocchi

# crudini --set /etc/gnocchi/gnocchi.conf api auth_mode keystone

# crudini --set /etc/gnocchi/gnocchi.conf database backend sqlalchemy

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken auth_type password

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken auth_url http://$HOST_NAME:5000

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken project_name service

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken username gnocchi

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken password $CEILOMETER_PASS

# crudini --set /etc/gnocchi/gnocchi.conf keystone_authtoken service_token_roles_required true

# crudini --set /etc/gnocchi/gnocchi.conf indexer url mysql+pymysql://gnocchi:$CEILOMETER_DBPASS@$HOST_NAME/gnocchi

# crudini --set /etc/gnocchi/gnocchi.conf storage file_basepath /var/lib/gnocchi

# crudini --set /etc/gnocchi/gnocchi.conf storage driver file

# crudini --set /etc/ceilometer/ceilometer.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/ceilometer/ceilometer.conf api auth_mode keystone

# crudini --set /etc/ceilometer/ceilometer.conf dispatcher_gnocchi filter_service_activity False

# crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000

# crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_url http://$HOST_NAME:5000

# crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_type password

# crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken project_name service

# crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken username gnocchi

# crudini --set /etc/ceilometer/ceilometer.conf keystone_authtoken password $CEILOMETER_PASS

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials auth_type password

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials auth_url http://$HOST_NAME:5000

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials memcached_servers $HOST_NAME:11211

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials project_domain_name $DOMAIN_NAME

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials user_domain_name $DOMAIN_NAME

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials project_name service

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials username ceilometer

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials password $CEILOMETER_PASS

11.7 Create the listener endpoint

Create the file /etc/httpd/conf.d/10-gnocchi_wsgi.conf and add the following content:

Listen 8041

<VirtualHost *:8041>

DocumentRoot /var/www/cgi-bin/gnocchi

<Directory /var/www/cgi-bin/gnocchi>

AllowOverride None

Require all granted

</Directory>

CustomLog /var/log/httpd/gnocchi_wsgi_access.log combined

ErrorLog /var/log/httpd/gnocchi_wsgi_error.log

SetEnvIf X-Forwarded-Proto https HTTPS=1

WSGIApplicationGroup %{GLOBAL}

WSGIDaemonProcess gnocchi display-name=gnocchi_wsgi user=gnocchi group=gnocchi processes=6 threads=6

WSGIProcessGroup gnocchi

WSGIScriptAlias / /var/www/cgi-bin/gnocchi/app

</VirtualHost>

11.8 Initialize the databases

# mkdir /var/www/cgi-bin/gnocchi

# cp /usr/lib/python2.7/site-packages/gnocchi/rest/gnocchi-api /var/www/cgi-bin/gnocchi/app

# chown -R gnocchi. /var/www/cgi-bin/gnocchi

# su -s /bin/bash gnocchi -c "gnocchi-upgrade"

# su -s /bin/bash ceilometer -c "ceilometer-upgrade --skip-metering-database"

11.9 Start the services

# systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service

# systemctl restart openstack-gnocchi-api.service openstack-gnocchi-metricd.service

# systemctl restart httpd memcached

# systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service

# systemctl restart openstack-ceilometer-notification.service openstack-ceilometer-central.service

11.10 Enable metering notifications for other components

# crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2

# crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2

# systemctl restart openstack-glance-api openstack-glance-registry

# crudini --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/cinder/cinder.conf oslo_messaging_notifications driver messagingv2

# systemctl restart openstack-cinder-api openstack-cinder-scheduler

# crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2

# crudini --set /etc/heat/heat.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# systemctl restart openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service

# crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2

# systemctl restart neutron-server.service

# crudini --set /etc/swift/proxy-server.conf filter:keystoneauth operator_roles "admin, user, ResellerAdmin"

# crudini --set /etc/swift/proxy-server.conf pipeline:main pipeline "catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging ceilometer proxy-server"

# crudini --set /etc/swift/proxy-server.conf filter:ceilometer paste.filter_factory ceilometermiddleware.swift:filter_factory

# crudini --set /etc/swift/proxy-server.conf filter:ceilometer url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME:5672/

# crudini --set /etc/swift/proxy-server.conf filter:ceilometer driver messagingv2

# crudini --set /etc/swift/proxy-server.conf filter:ceilometer topic notifications

# crudini --set /etc/swift/proxy-server.conf filter:ceilometer log_level WARN
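The filter:ceilometer settings above add a middleware section of this shape to /etc/swift/proxy-server.conf; the matching "ceilometer" entry must also appear in the pipeline:main pipeline (set a few commands earlier) for the filter to take effect:

```ini
[filter:ceilometer]
paste.filter_factory = ceilometermiddleware.swift:filter_factory
url = rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME:5672/
driver = messagingv2
topic = notifications
log_level = WARN
```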

# systemctl restart openstack-swift-proxy.service

11.11 Add an environment variable

# echo "export OS_AUTH_TYPE=password" >> /etc/keystone/admin-openrc.sh

11.12 Install the packages

# Compute node

# yum install openstack-ceilometer-compute -y

11.13 Configure Ceilometer

# crudini --set /etc/ceilometer/ceilometer.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials auth_url http://$HOST_NAME:5000

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials memcached_servers $HOST_NAME:11211

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials project_domain_name $DOMAIN_NAME

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials user_domain_name $DOMAIN_NAME

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials project_name service

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials auth_type password

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials username ceilometer

# crudini --set /etc/ceilometer/ceilometer.conf service_credentials password $CEILOMETER_PASS

# crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True

# crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour

# crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state

# crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2

11.14 Start the services

# systemctl enable openstack-ceilometer-compute.service

# systemctl restart openstack-ceilometer-compute.service

# systemctl restart openstack-nova-compute

12 Install the Aodh Alarming Service

12.1 Install the Aodh service via script

The commands in sections 12.2-12.9 for the alarming service have been packaged into a shell script for one-step installation, as follows:

# Controller node

Run the script iaas-install-aodh.sh to perform the installation.

12.2 Create the database

# mysql -u root -p

mysql> CREATE DATABASE aodh;

mysql> GRANT ALL PRIVILEGES ON aodh.* TO aodh@'localhost' IDENTIFIED BY '$AODH_DBPASS';

mysql> GRANT ALL PRIVILEGES ON aodh.* TO aodh@'%' IDENTIFIED BY '$AODH_DBPASS';

12.3 Create the Keystone user

# openstack user create --domain $DOMAIN_NAME --password $AODH_PASS aodh

# openstack role add --project service --user aodh admin

12.4 Create the service and API endpoints

# openstack service create --name aodh --description "Telemetry Alarming" alarming

# openstack endpoint create --region RegionOne alarming public http://$HOST_NAME:8042

# openstack endpoint create --region RegionOne alarming internal http://$HOST_NAME:8042

# openstack endpoint create --region RegionOne alarming admin http://$HOST_NAME:8042

12.5 Install the packages

# yum -y install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python2-aodhclient

12.6 Configure Aodh

# crudini --set /etc/aodh/aodh.conf DEFAULT log_dir /var/log/aodh

# crudini --set /etc/aodh/aodh.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME

# crudini --set /etc/aodh/aodh.conf api auth_mode keystone

# crudini --set /etc/aodh/aodh.conf api gnocchi_external_project_owner service

# crudini --set /etc/aodh/aodh.conf database connection mysql+pymysql://aodh:$AODH_DBPASS@$HOST_NAME/aodh

# crudini --set /etc/aodh/aodh.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000

# crudini --set /etc/aodh/aodh.conf keystone_authtoken auth_url http://$HOST_NAME:5000

# crudini --set /etc/aodh/aodh.conf keystone_authtoken memcached_servers $HOST_NAME:11211

# crudini --set /etc/aodh/aodh.conf keystone_authtoken auth_type password

# crudini --set /etc/aodh/aodh.conf keystone_authtoken project_domain_name $DOMAIN_NAME

# crudini --set /etc/aodh/aodh.conf keystone_authtoken user_domain_name $DOMAIN_NAME

# crudini --set /etc/aodh/aodh.conf keystone_authtoken project_name service

# crudini --set /etc/aodh/aodh.conf keystone_authtoken username aodh

# crudini --set /etc/aodh/aodh.conf keystone_authtoken password $AODH_PASS

# crudini --set /etc/aodh/aodh.conf service_credentials auth_url http://$HOST_NAME:5000/v3

# crudini --set /etc/aodh/aodh.conf service_credentials auth_type password

# crudini --set /etc/aodh/aodh.conf service_credentials project_domain_name $DOMAIN_NAME

# crudini --set /etc/aodh/aodh.conf service_credentials user_domain_name $DOMAIN_NAME

# crudini --set /etc/aodh/aodh.conf service_credentials project_name service

# crudini --set /etc/aodh/aodh.conf service_credentials username aodh

# crudini --set /etc/aodh/aodh.conf service_credentials password $AODH_PASS

# crudini --set /etc/aodh/aodh.conf service_credentials interface internalURL

12.7 Create the listener endpoint

Modify the /etc/httpd/conf.d/20-aodh_wsgi.conf file and add the following content:

Listen 8042

<VirtualHost *:8042>

DocumentRoot "/var/www/cgi-bin/aodh"

<Directory "/var/www/cgi-bin/aodh">

AllowOverride None

Require all granted

</Directory>

CustomLog "/var/log/httpd/aodh_wsgi_access.log" combined

ErrorLog "/var/log/httpd/aodh_wsgi_error.log"

SetEnvIf X-Forwarded-Proto https HTTPS=1

WSGIApplicationGroup %{GLOBAL}

WSGIDaemonProcess aodh display-name=aodh_wsgi user=aodh group=aodh processes=6 threads=3

WSGIProcessGroup aodh

WSGIScriptAlias / "/var/www/cgi-bin/aodh/app"

</VirtualHost>

12.8 Synchronize the database

# mkdir /var/www/cgi-bin/aodh

# cp /usr/lib/python2.7/site-packages/aodh/api/app.wsgi /var/www/cgi-bin/aodh/app

# chown -R aodh. /var/www/cgi-bin/aodh

# su -s /bin/bash aodh -c "aodh-dbsync"

12.9 Start the services

# systemctl enable openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener

# systemctl restart openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener

# systemctl restart httpd memcached

13 Add Controller Node Resources to the Cloud Platform

13.1 Edit openrc.sh

Run on the controller node:

Change the compute node's IP and hostname entries to the controller node's IP and hostname.

[root@controller ~]# vi /etc/iaas-openstack/openrc.sh

Set:

HOST_IP_NODE=192.168.100.10

HOST_NAME_NODE=controller

13.2 Run iaas-install-nova-compute.sh

Run iaas-install-nova-compute.sh on the controller node.

During execution you must confirm the SSH login to the controller node and enter the controller node's root password.

A successful run is shown in the figure below:

14 Common Errors and Solutions

14.1 Error when creating an instance

Solution:

14.2 Instance creation fails

Solution:

The openstack and nova1 hosts had run out of free memory. After rebooting both hosts and increasing their memory, the instance was created successfully.

14.3 Instance status shows "Error"


Solution:

The libvirtd and openstack-nova-compute services on the Nova compute node are not running or failed to start; restarting them resolves the issue.

14.4 Console is stuck at "Booting..."

Solution 1: run on the compute node

# crudini --set /etc/nova/nova.conf libvirt virt_type qemu

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl restart libvirtd.service openstack-nova-compute.service

Solution 2: on the compute node, edit /etc/nova/nova.conf and set:

[root@compute ~]# vi /etc/nova/nova.conf

cpu_mode=none

14.5 No availability zone is shown

Solution: run on the controller node

[root@controller ~]# nova service-list

[root@controller ~]# nova service-enable 647705ee-5e10-499a-98b4-ccfb7c7fcfb3 # replace this ID with the one from your own service list

14.6 Error when importing an image

Solution:

[root@controller images]# source /etc/keystone/admin-openrc.sh

Then re-run the command and the image imports successfully.

Last updated on 2024-04-20