15. Neutron Control Node Cluster Deployment
https://docs.openstack.org/neutron/train/install/install-rdo.html
Blog post on Neutron networking
Neutron's main functions are as follows:
- Neutron provides networking for the entire OpenStack environment, including layer-2 switching, layer-3 routing, load balancing, firewalls, and VPNs.
- Neutron provides a flexible framework: through configuration, either open-source or commercial software can be plugged in to implement these functions.
1. Create the Neutron database (control nodes)
Create the database on any control node; it is replicated automatically across the cluster. controller01 is used as the example;
mysql -u root -pZx******
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'Zx******';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'Zx******';
flush privileges;
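As an optional sanity check (a sketch only, assuming the grants and masked password above), log in as the neutron user and confirm the new database is visible:
#Optional check: the neutron user should be able to see the neutron database
mysql -h controller01 -u neutron -pZx****** -e "SHOW DATABASES;"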
2. Create the Neutron service credentials (control nodes)
Run on any control node; controller01 is used as the example;
2.1 Create the neutron user
source admin-openrc
openstack user create --domain default --password Zx****** neutron
2.2 Grant the admin role to the neutron user
openstack role add --project service --user neutron admin
2.3 Create the neutron service entity
openstack service create --name neutron --description "OpenStack Networking" network
2.4 Create the neutron API service endpoints
The API endpoints all use the VIP; if public/internal/admin are designed to use separate VIPs, be careful to distinguish them. --region must match the region created when the admin user was initialized; the neutron-api service type is network.
openstack endpoint create --region RegionOne network public http://10.15.253.88:9696
openstack endpoint create --region RegionOne network internal http://10.15.253.88:9696
openstack endpoint create --region RegionOne network admin http://10.15.253.88:9696
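To confirm that all three endpoints were registered against the VIP, the endpoint list can be filtered by service (a verification step, not part of the original procedure):
openstack endpoint list --service network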
3. Install the Neutron server (control nodes)
- openstack-neutron: the neutron-server package
- openstack-neutron-ml2: the ML2 plugin package
- openstack-neutron-linuxbridge: the Linux bridge network provider packages
- ebtables: firewall-related package
- conntrack-tools: enables stateful packet inspection for iptables
Here neutron-server is separated from the Neutron agents, hence this deployment layout: in a conventional deployment the control nodes run all Neutron components, including the agents, whereas here the control nodes run only neutron-server, and the compute nodes run the Neutron agents plus the linuxbridge and Nova-side configuration described below. The three compute nodes therefore also act as Neutron (network) nodes.
Install the Neutron services on all control nodes; controller01 is used as the example;
#yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
yum install openstack-neutron openstack-neutron-ml2 ebtables -y
yum install conntrack-tools -y
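Optionally verify that the packages landed on each control node (an rpm query only, no assumptions beyond the package names above):
rpm -q openstack-neutron openstack-neutron-ml2 ebtables conntrack-tools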
4. Deployment and configuration (control nodes)
https://docs.openstack.org/neutron/train/install/controller-install-rdo.html
Configure the Neutron services on all control nodes; controller01 is used as the example;
4.1 Configure neutron.conf
Note: the bind_host parameter must be adjusted per node, and the neutron.conf file must keep root:neutron ownership;
#Back up the configuration file /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 10.15.253.163
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
#Connect directly to the RabbitMQ cluster
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:Zx******@controller01:5672,openstack:Zx******@controller02:5672,openstack:Zx******@controller03:5672
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
#Enable the L3 HA feature
openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
#Maximum number of L3 agents on which an HA router is scheduled
openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
#Minimum number of healthy L3 agents required to create an HA router
openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2
#DHCP high availability: run one DHCP server per network on each of the 3 network nodes
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:Zx******@10.15.253.88/neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://10.15.253.88:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://10.15.253.88:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password Zx******
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://10.15.253.88:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password Zx******
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
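Since the regenerated file must stay readable by the neutron group (see the note in 4.1), a quick check of ownership and of the cluster-wide values is worthwhile; this is only a sketch of a sanity check:
#Confirm ownership is root:neutron and that the HA-related options took effect
ls -l /etc/neutron/neutron.conf
grep -E '^(transport_url|l3_ha|dhcp_agents_per_network)' /etc/neutron/neutron.conf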
Copy the neutron.conf configuration file to the other control nodes:
scp -rp /etc/neutron/neutron.conf controller02:/etc/neutron/
scp -rp /etc/neutron/neutron.conf controller03:/etc/neutron/
##On controller02
sed -i "s#10.15.253.163#10.15.253.195#g" /etc/neutron/neutron.conf
##On controller03
sed -i "s#10.15.253.163#10.15.253.227#g" /etc/neutron/neutron.conf
4.2 Configure ml2_conf.ini
Run on all control nodes; controller01 is used as the example;
#Back up the configuration file
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true
Copy the ml2_conf.ini configuration file to the other control nodes:
scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini controller02:/etc/neutron/plugins/ml2/ml2_conf.ini
scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini controller03:/etc/neutron/plugins/ml2/ml2_conf.ini
4.3 Configure the Nova service to interact with Neutron
Run on all control nodes;
#Modify the configuration file /etc/nova/nova.conf
#On all control nodes, configure the Nova service to interact with the networking service
openstack-config --set /etc/nova/nova.conf neutron url http://10.15.253.88:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://10.15.253.88:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password Zx******
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret Zx******
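A simple way to double-check the [neutron] section that nova-api will read (grep output only, nothing is changed; adjust -A to the number of options set above):
grep -A 12 '^\[neutron\]' /etc/nova/nova.conf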
4.4 Populate the Neutron database and verify
Run on any control node; populate the Neutron database
[root@controller01 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
...
OK
Verify that the Neutron tables were written correctly
mysql -h controller03 -u neutron -pZx****** -e "use neutron;show tables;"
4.5 Create a symbolic link pointing to the ML2 plugin configuration
Run on all control nodes;
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
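Verify the symlink points at the ML2 configuration:
ls -l /etc/neutron/plugin.ini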
4.6 Restart the nova-api and neutron-server services
Run on all control nodes;
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
systemctl enable neutron-server.service
systemctl restart neutron-server.service
systemctl status neutron-server.service
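If neutron-server came up cleanly it listens on the bind_host address at port 9696; a minimal check might look like the following (the VIP 10.15.253.88 assumes the haproxy frontend from the earlier sections is already in place):
ss -tnlp | grep 9696
curl -s http://10.15.253.88:9696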
16. Neutron Compute Node Cluster Deployment
1. Install the Neutron agents (compute nodes = network nodes)
Because neutron-server and the Neutron agents are deployed separately here, this layout is used: a conventional deployment runs all Neutron components, both server and agents, on the control nodes, whereas here the compute nodes run the Neutron agents plus the linuxbridge and Nova-side configuration. Dedicated network nodes could also be prepared to host the Neutron agents instead.
Install on all compute nodes; compute01 is used as the example;
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
#Back up the configuration file /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 10.15.253.162
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:Zx******@controller01:5672,openstack:Zx******@controller02:5672,openstack:Zx******@controller03:5672
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
#Set the RPC response timeout; the default of 60s can cause timeout errors, so raise it to 180s
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://10.15.253.88:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://10.15.253.88:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password Zx******
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
Copy the neutron.conf configuration file to the other compute nodes:
scp -rp /etc/neutron/neutron.conf compute02:/etc/neutron/
scp -rp /etc/neutron/neutron.conf compute03:/etc/neutron/
##On compute02
sed -i "s#10.15.253.162#10.15.253.194#g" /etc/neutron/neutron.conf
##On compute03
sed -i "s#10.15.253.162#10.15.253.226#g" /etc/neutron/neutron.conf
2. Deployment and configuration (compute nodes)
2.1 Configure nova.conf
Run on all compute nodes; only the [neutron] section of nova.conf is involved
openstack-config --set /etc/nova/nova.conf neutron url http://10.15.253.88:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://10.15.253.88:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password Zx******
2.2 Configure ml2_conf.ini
Run on all compute nodes; compute01 is used as the example;
#Back up the configuration file
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true
Copy the ml2_conf.ini configuration file to the other compute nodes:
scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini compute02:/etc/neutron/plugins/ml2/ml2_conf.ini
scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini compute03:/etc/neutron/plugins/ml2/ml2_conf.ini
2.3 Configure linuxbridge_agent.ini
- The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.
- The network type names map to physical NICs: here the provider network maps to the planned ens192 NIC and the VLAN tenant network maps to the planned ens224 NIC; when creating networks, the network name is used rather than the NIC name.
- Note that the physical NIC names are local to each host; use the NIC names actually present on that host.
Run on all compute nodes; compute01 is used as the example;
#Back up the configuration file
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
#This environment cannot provide four NICs; in production it is recommended to configure each network type on its own NIC
#The provider network maps to the planned ens192 NIC
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens192
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
#VTEP endpoint for the tunnel (vxlan) tenant network; here it is the planned ens192 address, adjust per node
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.15.253.162
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
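Before copying the file and restarting anything, a quick check that the mapped NIC actually exists on this host and that the agent settings took effect can save debugging later (ens192 is this guide's planned provider/VTEP NIC):
ip addr show ens192
grep -E '(physical_interface_mappings|local_ip)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini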
Copy the linuxbridge_agent.ini configuration file to the other compute nodes:
scp -rp /etc/neutron/plugins/ml2/linuxbridge_agent.ini compute02:/etc/neutron/plugins/ml2/
scp -rp /etc/neutron/plugins/ml2/linuxbridge_agent.ini compute03:/etc/neutron/plugins/ml2/
##On compute02
sed -i "s#10.15.253.162#10.15.253.194#g" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
##On compute03
sed -i "s#10.15.253.162#10.15.253.226#g" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
2.4 Configure l3_agent.ini
- The L3 agent provides routing and NAT services for tenant virtual networks.
Run on all compute nodes; compute01 is used as the example;
#Back up the configuration file
cp -a /etc/neutron/l3_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge
2.5 Configure dhcp_agent.ini
- The DHCP agent provides DHCP services for virtual networks;
- dnsmasq is used as the DHCP driver;
Run on all compute nodes; compute01 is used as the example;
#Back up the configuration file
cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
2.6 Configure metadata_agent.ini
- The metadata agent provides configuration information, such as credentials, to instances;
- metadata_proxy_shared_secret must match the secret set in /etc/nova/nova.conf on the control nodes;
Run on all compute nodes; compute01 is used as the example;
#Back up the configuration file
cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host 10.15.253.88
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret Zx******
openstack-config --set /etc/neutron/metadata_agent.ini cache memcache_servers controller01:11211,controller02:11211,controller03:11211
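Because the shared secret here must match metadata_proxy_shared_secret in /etc/nova/nova.conf on the control nodes (section 4.3), a simple cross-check is to compare the two values:
#On a compute node
grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini
#On a control node
grep metadata_proxy_shared_secret /etc/nova/nova.conf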
2.7 Add Linux kernel parameter settings
- Ensure the Linux kernel supports bridge filtering by verifying that all of the following sysctl values are set to 1;
Run on all control nodes and compute nodes;
echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables=1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >>/etc/sysctl.conf
#Bridge filtering requires the br_netfilter kernel module to be loaded; otherwise sysctl reports the keys as missing
modprobe br_netfilter
sysctl -p
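To make the module load survive reboots and to confirm the values are active, something like the following can be added (the modules-load.d path is the standard systemd mechanism, not part of the original steps):
#Load br_netfilter automatically at boot (assumes a systemd-based CentOS/RHEL host)
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
#Confirm all three values are now 1
sysctl net.ipv4.ip_nonlocal_bind net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables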
2.8 Restart nova-compute and the neutron-agent services
On all compute nodes, restart the nova-compute service
systemctl restart openstack-nova-compute.service
On all compute nodes, enable and start the Neutron agent services, including the L3 agent
systemctl enable neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
systemctl restart neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
systemctl status neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
3. Verify the Neutron services (control nodes)
#List the loaded extensions to verify that the neutron-server process started successfully
[root@controller01 ~]# openstack extension list --network
#List the agents to verify they registered successfully
[root@controller01 ~]# openstack network agent list
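The expected result is one linuxbridge, L3, DHCP, and metadata agent per compute node, all alive. To narrow the view to a single node, the list can be filtered by host (compute01 is this guide's hostname):
openstack network agent list --host compute01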
4. Add pcs resources
- Only neutron-server needs to be added; the other Neutron agent services (neutron-linuxbridge-agent, neutron-l3-agent, neutron-dhcp-agent, and neutron-metadata-agent) do not need pcs resources, because they run on the compute nodes.
Run on any control node; add the neutron-server resource
#pcs resource create neutron-linuxbridge-agent systemd:neutron-linuxbridge-agent clone interleave=true
#pcs resource create neutron-l3-agent systemd:neutron-l3-agent clone interleave=true
#pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent clone interleave=true
#pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent clone interleave=true
pcs resource create neutron-server systemd:neutron-server clone interleave=true
View the resources
[root@controller01 ~]# pcs resource