The OpenStack Compute service hosts and manages cloud computing systems. It is the core of an Infrastructure-as-a-Service (IaaS) deployment, and its modules are implemented mainly in Python.
Compute interacts with the OpenStack Identity service for authentication, with the OpenStack Image service for disk images, and with the OpenStack dashboard for the user and administrator interface. Access to images is restricted by project and by user, and quotas are set per project (for example, how many instances each project may create). The Compute service scales out horizontally on standard hardware and downloads disk images to launch virtual machine instances.
(1) Configure the Compute service on the controller node
1. Configure the database: create the databases and grant the nova user the required privileges
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' \
IDENTIFIED BY 'NOVA_DBPASS';
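The nine GRANT statements above follow a regular pattern: three databases (nova_api, nova, nova_cell0) times three host specifications (localhost, %, controller). As a convenience sketch, they can be generated with a small shell loop and piped into the mysql client; NOVA_DBPASS is a placeholder for your real database password:

```shell
# Generate the nine GRANT statements: one per (database, host) pair.
# NOVA_DBPASS is a placeholder; substitute your actual password.
for db in nova_api nova nova_cell0; do
  for host in localhost '%' controller; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY 'NOVA_DBPASS';"
  done
done
```

Piping the output into `mysql -u root -p` applies all nine grants in one pass.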
2. Create the service entities and API endpoints
$ . admin-openrc
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
$ openstack role add --project service --user nova admin
$ openstack service create --name nova \
--description "OpenStack Compute" compute
$ openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
$ openstack role add --project service --user placement admin
$ openstack service create --name placement --description "Placement API" placement
$ openstack endpoint create --region RegionOne placement public http://CONTROLLER_IP:8778
$ openstack endpoint create --region RegionOne placement internal http://CONTROLLER_IP:8778
$ openstack endpoint create --region RegionOne placement admin http://CONTROLLER_IP:8778
Placement is a new component in Ocata. The official documentation uses the hostname controller in these endpoint URLs, but following it verbatim causes problems in later steps, so it is best to substitute the controller's IP address directly (CONTROLLER_IP above).
Cells (cells v2) is also a component newly added in Ocata; see section (3) below for mapping compute nodes into cells.
3. Install and configure the components
# yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api
Edit the nova configuration file:
# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.141.128.11  # use the controller node's management IP address here
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
Edit the Placement API httpd configuration file:
# vim /etc/httpd/conf.d/00-nova-placement-api.conf
Append at the end of the file:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Restart the httpd service and initialize the databases:
# systemctl restart httpd
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650   (the command prints the UUID of the new cell)
# su -s /bin/sh -c "nova-manage db sync" nova
# nova-manage cell_v2 list_cells
4. Start the services and enable them at boot
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
(2) Compute node
1. Install the service
# yum install openstack-nova-compute
2. Configure the service
# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS  # use the compute node's management IP address here
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
Using the hostname controller here does not produce an error, but the VNC console in the dashboard will not work later, so it is recommended to use the controller's IP address instead.
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
3. Check whether the node supports hardware acceleration (a result of 0 means unsupported; virtual machines generally do not support it, while physical hosts generally do)
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If the result is 0, edit /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
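The check and the resulting setting can be combined into one small sketch; it mirrors the steps above, emitting the `virt_type` line that belongs in the `[libvirt]` section (kvm is nova's default when acceleration is available):

```shell
# Count hardware-virtualization CPU flags (Intel VT-x: vmx, AMD-V: svm).
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo 2>/dev/null)
if [ "${count:-0}" -eq 0 ]; then
  echo "virt_type = qemu"   # no hardware acceleration: fall back to plain QEMU
else
  echo "virt_type = kvm"    # hardware acceleration available
fi
```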
4. Start the services
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
(3) Add the compute node to the cell database
Cells v2 is one of the components newly added in Ocata. According to the official documentation it is currently a transitional feature that supports only a single cell; a distributed cells implementation is expected in the Pike release or later.
$ . admin-openrc
$ openstack hypervisor list
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Alternatively, to have new compute nodes discovered automatically instead of running discover_hosts by hand each time, set a discovery interval in /etc/nova/nova.conf (restart the nova-scheduler service afterwards for the change to take effect):
# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
(4) Verify operation
$ . admin-openrc
$ openstack compute service list
The output should look like the following:
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
| 2  | nova-scheduler   | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
| 3  | nova-conductor   | controller | internal | enabled | up    | 2016-02-09T23:11:16.000000 |
| 4  | nova-compute     | compute1   | nova     | enabled | up    | 2016-02-09T23:11:20.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
With additional compute nodes there will be more nova-compute rows. If the output differs, check time synchronization between the nodes first; clock skew will cause services to be reported incorrectly here.
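As a hypothetical illustration of the check above, the State column can be inspected mechanically. The `services` variable below is sample data standing in for what `openstack compute service list -f value -c Binary -c Host -c State` would print on a healthy deployment:

```shell
# Sample machine-readable output (stand-in for the real command's output).
services='nova-consoleauth controller up
nova-scheduler controller up
nova-conductor controller up
nova-compute compute1 up'

# List any service whose State column is not "up".
down=$(printf '%s\n' "$services" | awk '$3 != "up" {print $1 " on " $2}')
if [ -z "$down" ]; then
  echo "all compute services up"
else
  echo "down: $down"
fi
# → all compute services up
```

If any service shows as down, check that its daemon is running and that the node's clock is synchronized before digging further.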
$ openstack catalog list
$ openstack image list
# nova-status upgrade check
Note that running nova-status upgrade check exactly as in the official documentation produces an error. The reason is simple: the httpd service must be restarted first (systemctl restart httpd), a step the official guide omits.