OpenStack Mitaka Installation and Deployment Guide
I. Lab environment:
OS: CentOS 7.2 minimal
Network: management network on eth0, VM instance network on eth1
controller: 192.168.22.202 (eth0)
            192.168.30.202 (eth1)
Compute01:  192.168.22.203 (eth0)
            192.168.30.203 (eth1)
II. Environment preparation:
1. On all nodes, disable firewalld, NetworkManager and SELinux, and set each host's hostname to its node name.
2. Install the chrony time synchronization service:
#yum install chrony -y
3. On the controller node, add the following to /etc/chrony.conf: allow 192.168.21.0/22
4. On the compute node, sync time from the controller by adding to /etc/chrony.conf: server controller iburst
5. Enable the service and start it:
#systemctl enable chronyd.service
#systemctl start chronyd.service
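To confirm that time synchronization is working, chrony's standard status command can be run on each node; on the compute node the controller should appear as a time source:
#chronyc sources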
6. Prepare the Aliyun and EPEL repositories, then install the OpenStack Mitaka repository and base packages:
#yum install -y centos-release-openstack-mitaka
#yum install https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-6.noarch.rpm -y
#yum install python-openstackclient -y  ### install the required OpenStack client ###
#yum install openstack-selinux -y
#yum upgrade
#reboot
7. Database installation (MariaDB)  ###controller###
#yum install mariadb mariadb-server python2-PyMySQL -y
###### Database configuration ######
### Create and edit /etc/my.cnf.d/openstack.cnf (the settings below belong in the [mysqld] section):
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
###### Start the service ######
# systemctl enable mariadb.service
# systemctl start mariadb.service
###### Initialize the database ######
### Check that the port is listening: netstat -lnp | grep 3306 ###
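The initialization step is normally done with mysql_secure_installation, which sets the MariaDB root password (this guide assumes zx123456) and removes the anonymous users and test database:
#mysql_secure_installation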
8. RabbitMQ installation (RabbitMQ uses port 5672)  ##controller##
# yum install rabbitmq-server -y  ### install ###
# systemctl enable rabbitmq-server.service  ### enable at boot ###
# systemctl start rabbitmq-server.service  ### start the service ###
#rabbitmqctl add_user openstack zx123456  ### add the openstack user with password zx123456 ###
#rabbitmqctl set_permissions openstack ".*" ".*" ".*"  ### grant the new user configure/write/read permissions ###
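To double-check the new account, the standard rabbitmqctl listing commands can be used; openstack should appear with ".*" configure/write/read permissions:
#rabbitmqctl list_users
#rabbitmqctl list_permissions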
9. Memcached installation (uses port 11211)  ##controller##
# yum install memcached python-memcached -y  ### install ###
# systemctl enable memcached.service  ### enable at boot ###
# systemctl start memcached.service  ### start the service ###
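As with MariaDB, you can confirm the service is listening on its port:
#netstat -lnp | grep 11211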
10. Keystone installation  ##controller##
###### Log in to the database and create the keystone database:
#mysql -uroot -pzx123456
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'zx123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'zx123456';
### grant the keystone user access with this password ###
Generate a random value for admin_token: openssl rand -hex 10
# yum install openstack-keystone httpd mod_wsgi -y  ##controller##
Configure /etc/keystone/keystone.conf:
[DEFAULT]
admin_token = <the random value generated above>  # mainly for security; you can also leave the default
[database]
connection = mysql+pymysql://keystone:zx123456@192.168.22.202/keystone
[token]
provider = fernet
#Populate the Identity service database:
#su -s /bin/sh -c "keystone-manage db_sync" keystone
#Initialize the Fernet keys:
#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
#Configure the Apache HTTP server
Edit /etc/httpd/conf/httpd.conf and set:
ServerName controller
Create the file /etc/httpd/conf.d/wsgi-keystone.conf with the following content:
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
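Before starting Apache it is worth validating the new virtual host configuration; the standard syntax check is:
#httpd -t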
Start the Apache HTTP service:
# systemctl enable httpd.service
# systemctl start httpd.service
#Create the service entity and API endpoints
Configure the authentication token:
#export OS_TOKEN=2e8cd090b7b50499d5f9
Configure the endpoint URL:
#export OS_URL=http://controller:35357/v3
Configure the Identity API version:
#export OS_IDENTITY_API_VERSION=3
#Create the service entity for the Identity service:
#openstack service create --name keystone --description "OpenStack Identity" identity
#Create the Identity service API endpoints:
#openstack endpoint create --region RegionOne identity public http://controller:5000/v3
#openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
#openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
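A quick sanity check with the token credentials exported above should list the three Identity endpoints just created:
#openstack endpoint list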
#Create domains, projects, users and roles
Create the "default" domain:
#openstack domain create --description "Default Domain" default
Create the admin project:
#openstack project create --domain default --description "Admin Project" admin
Create the admin user:
#openstack user create --domain default --password-prompt admin
## you will be prompted for the admin user's password ##
Create the admin role:
openstack role create admin
Add the admin role to the admin project and user:
openstack role add --project admin --user admin admin
Create the service project:
openstack project create --domain default --description "Service Project" service
Create the demo project:
openstack project create --domain default --description "Demo Project" demo
Create the demo user:
openstack user create --domain default --password-prompt demo
## you will be prompted for the demo user's password ##
Create the user role:
openstack role create user
Add the user role to the demo project and user:
openstack role add --project demo --user demo user
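To review what has been created so far, the standard listing commands can be used:
#openstack project list
#openstack user list
#openstack role list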
Verification:
Disable the temporary token authentication mechanism:
Edit /etc/keystone/keystone-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api] and [pipeline:api_v3] sections.
Unset the OS_TOKEN and OS_URL environment variables:
unset OS_TOKEN OS_URL
Use the admin user to test whether a token can be obtained.
Create environment variable files for the admin and demo projects.
admin project: add the following content
vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=zx123456
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
demo project:
vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=zx123456
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Load the environment variables and request a token:
#source admin-openrc
#openstack token issue
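The demo credentials can be verified the same way:
#source demo-openrc
#openstack token issue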
III. Glance installation and configuration
Install Glance on the controller node.
1. Log in to MySQL, create the database and user:
mysql -uroot -pzx123456
CREATE DATABASE glance;  ## create the glance database ##
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'zx123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'zx123456';
2. Create the Keystone credentials (user, password and role assignment) used by Glance:
source admin-openrc
Create the glance user:
openstack user create --domain default --password-prompt glance
## you will be prompted for the glance password ##
Add the admin role to the glance user and the service project:
openstack role add --project service --user glance admin
3. Create the glance service entity:
openstack service create --name glance --description "OpenStack Image" image
4. Create the Image service API endpoints:
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
5. Install the Glance packages  #controller#
yum install openstack-glance -y
6. Configure glance-api:
vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:zx123456@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = zx123456
[paste_deploy]
flavor = keystone  # use keystone for authentication
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
7. Configure /etc/glance/glance-registry.conf:
vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:zx123456@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = zx123456
[paste_deploy]
flavor = keystone
8. Create the image storage directory and change its owner:
mkdir -p /var/lib/glance/images/
chown glance:glance /var/lib/glance/images/
9. Populate the database schema:
su -s /bin/sh -c "glance-manage db_sync" glance
10. Enable and start the services:
#systemctl enable openstack-glance-api.service openstack-glance-registry.service
#systemctl start openstack-glance-api.service openstack-glance-registry.service
Check the service endpoint information.
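For example, listing all registered endpoints and filtering for the image service should show the three Glance endpoints created above:
#openstack endpoint list | grep image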
Verification:
#source admin-openrc
#wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
## download the image ##
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
## upload the image ##
openstack image list  ## check the result ##
IV. Nova installation and configuration
Controller node
1. Create the databases and the user/password used to connect:
mysql -uroot -pzx123456
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'zx123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'zx123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'zx123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'zx123456';
2. Check the result:
select user,host from mysql.user where user="nova";
3. Create the service entity, Keystone user and role assignment.
Create the nova service entity:
openstack service create --name nova --description "OpenStack Compute" compute
Create the nova user:
openstack user create --domain default --password-prompt nova
## you will be prompted for the nova password ##
Associate the user, role and project:
openstack role add --project service --user nova admin
Create the Compute API endpoints:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
4. Check the result.
5. Install the Nova packages:
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler -y
6. Edit the Nova configuration file:
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.22.202
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:zx123456@controller/nova_api
[database]
# Nova database connection
connection = mysql+pymysql://nova:zx123456@controller/nova
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = zx123456
[keystone_authtoken]
# Keystone authentication settings
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = zx123456
[glance]
api_servers = http://controller:9292
[vnc]
vncserver_listen = 192.168.22.202
vncserver_proxyclient_address = 192.168.22.202
[oslo_concurrency]
# lock file path
lock_path = /var/lib/nova/tmp
7. Sync the databases:
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova
## warnings can be ignored ##
8. Verify:
mysql -uroot -pzx123456
use nova;
show tables;
9. Enable and start the services:
#systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
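At this point the controller-side services can be checked with the admin credentials; nova-consoleauth, nova-conductor and nova-scheduler should be listed as up:
#source /root/admin-openrc
#openstack compute service list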
Compute node
1. Install the nova-compute service:
yum install openstack-nova-compute -y
2. Edit the configuration file:
vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
# compute node IP
my_ip = 192.168.22.203
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = zx123456
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
# compute node management network IP
vncserver_proxyclient_address = 192.168.22.203
novncproxy_base_url = http://192.168.22.202:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
# lock file
lock_path = /var/lib/nova/tmp
egrep -c '(vmx|svm)' /proc/cpuinfo
## check whether the compute node supports hardware acceleration for virtual machines ##
If this returns 0, add the following to /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
3. Enable and start the services:
#systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Verification
Run the following commands on the controller:
#source /root/admin-openrc
#openstack compute service list
V. Neutron installation and configuration
Controller node
1. Create the neutron database and grant privileges:
mysql -uroot -pzx123456
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'zx123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'zx123456';
2. Load the admin credentials and create the neutron user:
#source admin-openrc
#openstack user create --domain default --password-prompt neutron
## you will be prompted for the neutron password ##
3. Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
4. Create the neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
5. Create the Networking service API endpoints:
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
6. Network option: self-service networks
Install the Neutron packages:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
7. Neutron service configuration file:
mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[database]
connection = mysql+pymysql://neutron:zx123456@controller/neutron  # change to your own database password
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = zx123456  # change to your RabbitMQ password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = zx123456  # change to your neutron service password
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = zx123456  # change to your nova service password
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
ML2 plug-in configuration:
mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = *
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = True
Linux bridge agent configuration file:
mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1  # set this to the name of the provider network interface; here it is eth1
[vxlan]
enable_vxlan = True
local_ip = 192.168.22.202  # this is the management network IP address of this node (192.168.22.202)
l2_population = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
L3 agent configuration file:
mv /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =    # leave empty
DHCP agent configuration:
mv /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configure the metadata agent:
mv /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = zx123456  # change to your own METADATA_SECRET (or keep this one); it must match the Nova configuration
Configure the Nova service to use Networking:
vim /etc/nova/nova.conf  # add the following
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = zx123456  # change to your neutron service password
service_metadata_proxy = True
metadata_proxy_shared_secret = zx123456  # must match the METADATA_SECRET above
8. Create a symbolic link for the ML2 plug-in configuration:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
9. Sync the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
10. Restart nova-api:
systemctl restart openstack-nova-api.service
11. Enable and start the Neutron services:
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
Compute node configuration
1. Install the Neutron packages:
yum install openstack-neutron-linuxbridge ebtables ipset
2. Configuration
Neutron service configuration:
mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = zx123456  # change to your RabbitMQ password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = zx123456  # change to your neutron service password
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Linux bridge agent configuration:
mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1  # set to the provider network interface; here it is eth1
[vxlan]
enable_vxlan = True
local_ip = 192.168.22.203  # change to this node's management network IP address (192.168.22.203)
l2_population = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the Nova service to use Networking:
vim /etc/nova/nova.conf  # add the following
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = zx123456  # change to your neutron service password
3. Restart the Nova service:
systemctl restart openstack-nova-compute.service
4. Enable and start the Neutron agent:
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verification
Run on the controller node:
source /root/admin-openrc
neutron ext-list
neutron agent-list
The Neutron service installation is complete.
VI. Dashboard installation and configuration
Controller node
1. Install the dashboard:
yum install openstack-dashboard -y
2. Adjust the settings:
vim /etc/openstack-dashboard/local_settings
Modify the following settings:
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.22.202:11211',
    },
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "UTC"
3. Restart the Apache and memcached services:
systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service
Verification
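In a browser, open http://controller/dashboard (or http://192.168.22.202/dashboard) and log in with the default domain and the admin or demo user created earlier.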