2020-08-28 Architect Training, Week 8 Assignment

▲ For those on the architecture-only track, see here: (assignment for days 15-16 of 杰哥's video course)

1. Install and deploy the JumpServer bastion host with Docker

I. Lab environment

1.1 Server IPs

(1) MariaDB + Redis: 192.168.253.20

(2) JumpServer: 192.168.253.30

1.2 Software environment

(1)jumpserver/jms_all:v2.2.0

(2)docker-ce

II. Configure MariaDB + Redis

2.1 Install and configure MariaDB

[253.20]# yum install mariadb-server -y

[253.20]# mysql

mysql>create database jumpserver default charset 'utf8' collate 'utf8_bin';

mysql>grant all on jumpserver.* to 'jumpserver'@'%' identified by 'abc.123';

2.2 Install and configure Redis (requires the EPEL repo)

[253.20]# yum install redis -y

[253.20]# vim /etc/redis.conf

bind 0.0.0.0

requirepass abc.123
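
The steps above never actually start MariaDB or reload Redis with the new settings. A minimal sketch of what I run on 192.168.253.20 before moving on, including quick connectivity checks against the credentials created above:

[253.20]# systemctl enable --now mariadb

[253.20]# systemctl enable redis && systemctl restart redis

[253.20]# mysql -h 192.168.253.20 -u jumpserver -pabc.123 -e 'show databases;'   # should list jumpserver

[253.20]# redis-cli -h 192.168.253.20 -a abc.123 ping                            # should return PONG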

III. Deploy Docker (installed via yum)

3.1 Install the required system tools

[253.30]# yum install -y yum-utils device-mapper-persistent-data lvm2

3.2 Add the Docker repository

[253.30]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3.3 Refresh the cache and install Docker CE

[253.30]# yum makecache fast

[253.30]# yum -y install docker-ce

3.4 Start the Docker service

[253.30]#  service docker start

3.5 Verify Docker

[253.30]# docker version

3.6 Configure the registry mirror

[253.30]# vim /etc/docker/daemon.json

{

    "registry-mirrors" : ["https://9916w1ow.mirror.aliyuncs.com" ]

}
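
daemon.json is only read when the daemon starts, so restart Docker for the mirror to take effect; `docker info` should then list it (a small sanity check that is not in the original steps):

[253.30]# systemctl restart docker

[253.30]# docker info | grep -A1 'Registry Mirrors'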

IV. Deploy JumpServer

4.1 Generate a random SECRET_KEY and BOOTSTRAP_TOKEN

[253.30]#  if [ "$SECRET_KEY" = "" ]; then SECRET_KEY=`cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 50`; echo "SECRET_KEY=$SECRET_KEY" >> ~/.bashrc; echo $SECRET_KEY; else echo $SECRET_KEY; fi

XHK7dhQ4nmEPqE0rqWupOm2FtYIEeD1DmU9EiNyfl3pypEtgjK

[253.30]# if [ "$BOOTSTRAP_TOKEN" = "" ]; then BOOTSTRAP_TOKEN=`cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 16`; echo "BOOTSTRAP_TOKEN=$BOOTSTRAP_TOKEN" >>~/.bashrc; echo $BOOTSTRAP_TOKEN; else echo $BOOTSTRAP_TOKEN; fi

z7QorguuusCTGx0H

4.2 Pull the image

[253.30]# docker pull jumpserver/jms_all:v2.2.0

4.3 Create the container

[253.30]# docker run --name linux-n44 \

-v /opt/jumpserver:/opt/jumpserver/data/media \

-p 80:80 \

-p 2222:2222 \

-e SECRET_KEY=XHK7dhQ4nmEPqE0rqWupOm2FtYIEeD1DmU9EiNyfl3pypEtgjK \

-e BOOTSTRAP_TOKEN=z7QorguuusCTGx0H \

-e DB_HOST=192.168.253.20 \

-e DB_PORT=3306 \

-e DB_USER='jumpserver' \

-e DB_PASSWORD="abc.123" \

-e DB_NAME=jumpserver \

-e REDIS_HOST=192.168.253.20 \

-e REDIS_PORT=6379 \

-e REDIS_PASSWORD="abc.123" \

jumpserver/jms_all:v2.2.0

4.4 Check the container startup logs

[253.30]# docker logs -f  linux-n44

V. Log in to the web UI

http://192.168.253.30 (default account admin, password admin)
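
Besides the web UI, the container also maps port 2222 for SSH access through the bastion. A quick sketch of how that is typically used, assuming an asset and a system user have already been authorized to the admin account in the console:

# ssh -p 2222 admin@192.168.253.30        # log in with the web password, then pick an asset from the menu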





2. Install and configure OpenStack Rocky: Keystone, Glance, Nova, Neutron, and a CirrOS test image.

I. Environment

OS: CentOS 7.5

Memory: 4 GB per node

Disk: 40 GB

CPU: 2 cores (adjust to what your machine allows)

Prepare two machines or VMs (each with two NICs):

192.168.253.188

192.168.253.190

II. Preparation

### Disable the firewall and SELinux

systemctl restart network

systemctl stop firewalld

systemctl disable firewalld

setenforce 0

sed -i 's/=enforcing/=disabled/' /etc/selinux/config

### Update packages

yum upgrade -y

### Reboot

reboot

### Set the hostnames

hostnamectl set-hostname controller    # on 192.168.253.188

hostnamectl set-hostname compute       # on 192.168.253.190

### Add host mappings

cat << EOF >> /etc/hosts

192.168.253.188 controller

192.168.253.190 compute

EOF

Configure the Aliyun mirror repositories

1. Back up the existing repo file

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

2. Download Aliyun's CentOS-Base.repo to /etc/yum.repos.d/

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

3. Clean the cache

sudo yum clean all

4. Rebuild the cache

sudo yum makecache

5. Add the OpenStack yum repos:

cat << EOF >> /etc/yum.repos.d/openstack.repo

[openstack-rocky]

name=openstack-rocky

baseurl=https://mirrors.aliyun.com/centos/7/cloud/x86_64/openstack-rocky/

enabled=1

gpgcheck=0

[qemu-kvm]

name=qemu-kvm

baseurl=https://mirrors.aliyun.com/centos/7/virt/x86_64/kvm-common/

enabled=1

gpgcheck=0

EOF
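
Before continuing, it is worth confirming that yum can actually see the two new repos (a quick check that is not in the original notes):

yum repolist enabled | grep -E 'openstack-rocky|qemu-kvm'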

### Configure time synchronization

1. controller node

yum install -y chrony

Edit /etc/chrony.conf and change the configuration to:

server controller iburst

allow 192.168.0.0/16

Start the service

systemctl start chronyd

systemctl enable chronyd

2. compute node

yum install -y chrony

Edit /etc/chrony.conf and change the configuration to:

server controller iburst

Start the service

systemctl start chronyd

systemctl enable chronyd
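
On either node, chronyc can confirm that time sync is working; on the compute node the controller entry should eventually be marked with ^* once it is selected as the source (verification sketch):

chronyc sources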

III. Installation

### Install the OpenStack client and openstack-selinux

yum install -y python-openstackclient openstack-selinux

### Install the database service

1. Install the database on the controller node

[root@controller ~]# yum install -y mariadb mariadb-server python2-PyMySQL

2. Create the database config file /etc/my.cnf.d/openstack.cnf and add the following:

vim /etc/my.cnf.d/openstack.cnf

[mysqld]

bind-address = 192.168.253.188

default-storage-engine = innodb

innodb_file_per_table = on

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

### Start the database service

[root@controller ~]# systemctl enable mariadb.service

[root@controller ~]# systemctl start mariadb.service

### Secure the installation and set the database root password

[root@controller ~]# mysql_secure_installation

### Install the message queue service

Install rabbitmq-server on the controller node

[root@controller ~]# yum install -y rabbitmq-server

### Start the message queue service

[root@controller ~]# systemctl start rabbitmq-server.service

[root@controller ~]# systemctl enable rabbitmq-server.service

### Add the openstack user

[root@controller ~]# rabbitmqctl add_user openstack 123456

Creating user "openstack" ...

### Grant the openstack user full permissions

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...
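
Optionally, confirm the user and its permissions before moving on (sketch):

[root@controller ~]# rabbitmqctl list_users

[root@controller ~]# rabbitmqctl list_permissions -p /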

### Install the memcached service

Install memcached on the controller node

[root@controller ~]# yum install -y memcached

Modify the memcached config file: edit /etc/sysconfig/memcached and change

OPTIONS="-l 127.0.0.1,::1"

to

OPTIONS="-l 127.0.0.1,::1,controller"

### Start the memcached service

[root@controller ~]# systemctl start memcached.service

[root@controller ~]# systemctl enable memcached.service

### Install the etcd service

Install etcd on the controller node

[root@controller ~]# yum install etcd -y

Modify the etcd config so that other nodes can reach it

[root@controller ~]# vim /etc/etcd/etcd.conf

Edit /etc/etcd/etcd.conf, changing the following entries in their respective sections

#[Member]

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.253.188:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.253.188:2379"

ETCD_NAME="controller"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.253.188:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.253.188:2379"

ETCD_INITIAL_CLUSTER="controller=http://192.168.253.188:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"

ETCD_INITIAL_CLUSTER_STATE="new"

### Start the etcd service

[root@controller ~]# systemctl start etcd

[root@controller ~]# systemctl enable etcd
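
A quick health check against the advertised client URL (sketch; this assumes the etcd v2 API, which is the default for the etcdctl shipped with CentOS 7):

[root@controller ~]# etcdctl --endpoints=http://192.168.253.188:2379 cluster-health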

IV. Install the Keystone service

### Create the database

[root@controller ~]# mysql -uroot -p123456

MariaDB [(none)]> CREATE DATABASE keystone;

Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';

Query OK, 0 rows affected (0.00 sec)

### Install packages

[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y

Edit the config file /etc/keystone/keystone.conf

vim /etc/keystone/keystone.conf

[database]

connection = mysql+pymysql://keystone:123456@controller/keystone

[token]

provider = fernet

### Sync the database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

### Initialize the Fernet key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

### Bootstrap the Identity service

[root@controller ~]# keystone-manage bootstrap --bootstrap-password 123456 \

--bootstrap-admin-url http://controller:5000/v3/ \

--bootstrap-internal-url http://controller:5000/v3/ \

--bootstrap-public-url http://controller:5000/v3/ \

--bootstrap-region-id RegionOne

### Edit the httpd config file

vim /etc/httpd/conf/httpd.conf

ServerName controller

### Create the symlink

[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

### Start the httpd service

[root@controller ~]# systemctl start httpd

[root@controller ~]# systemctl enable httpd

### Write the environment variable script admin-openrc

export OS_USERNAME=admin

export OS_PASSWORD=123456

export OS_PROJECT_NAME=admin

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_DOMAIN_NAME=Default

export OS_AUTH_URL=http://controller:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

### Create the service project

[root@controller ~]# openstack project create --domain default --description "Service Project" service

### Verify

[root@controller ~]# openstack user list 

+----------------------------------+-------+

| ID                              | Name  |

+----------------------------------+-------+

| 5238d646322346be9e3f9750422bcf4d | admin |

+----------------------------------+-------+

### Request a token

[root@controller ~]# openstack token issue

V. Install Glance

### Create the database

[root@controller ~]# mysql -uroot -p123456

MariaDB [(none)]> CREATE DATABASE glance;

Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';

Query OK, 0 rows affected (0.00 sec)

### Create the user, service, and endpoints

[root@controller ~]# source admin-openrc

[root@controller ~]# openstack user create --domain default --password-prompt glance

User Password:123456

Repeat User Password:123456

[root@controller ~]# openstack role add --project service --user glance admin

[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292

[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292

[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292

### Install packages

[root@controller ~]# yum install -y openstack-glance

Edit the config file /etc/glance/glance-api.conf

vim /etc/glance/glance-api.conf

[database]

connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]

www_authenticate_uri  = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = glance

password = 123456

[paste_deploy]

flavor = keystone

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

Edit the config file /etc/glance/glance-registry.conf

vim /etc/glance/glance-registry.conf

[database]

connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = glance

password = 123456

[paste_deploy]

flavor = keystone

### Sync the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

### Start the services

[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service

[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service

### Verify

[root@controller ~]# . admin-openrc

[root@controller ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

[root@controller ~]# openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
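
The upload can be confirmed with the image list; the cirros entry should show an active status:

[root@controller ~]# openstack image list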

VI. Install the Nova service

### controller node

1. Create the databases

[root@controller ~]# mysql -u root -p123456

MariaDB [(none)]> CREATE DATABASE nova_api;

MariaDB [(none)]> CREATE DATABASE nova;

MariaDB [(none)]> CREATE DATABASE nova_cell0;

MariaDB [(none)]> CREATE DATABASE placement;

Grant proper access to the databases:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '123456';

2. Create the users, services, and endpoints

[root@controller ~]# openstack user create --domain default --password-prompt nova

[root@controller ~]# openstack role add --project service --user nova admin

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

[root@controller ~]# openstack user create --domain default --password-prompt placement

[root@controller ~]# openstack role add --project service --user placement admin

[root@controller ~]#  openstack service create --name placement --description "Placement API" placement

[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778

[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778

[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

3. Install packages

[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor \

openstack-nova-console openstack-nova-novncproxy \

openstack-nova-scheduler openstack-nova-placement-api -y

Edit the config file /etc/nova/nova.conf

vim /etc/nova/nova.conf

[DEFAULT]

enabled_apis = osapi_compute,metadata

[api_database]

connection = mysql+pymysql://nova:123456@controller/nova_api

[database]

connection = mysql+pymysql://nova:123456@controller/nova

[placement_database]

connection = mysql+pymysql://placement:123456@controller/placement

[DEFAULT]

transport_url = rabbit://openstack:123456@controller

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_url = http://controller:5000/v3

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = 123456

[DEFAULT]

my_ip = 192.168.253.188

[DEFAULT]

use_neutron = true

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[vnc]

enabled = true

server_listen = $my_ip

server_proxyclient_address = $my_ip

[glance]

api_servers = http://controller:9292

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[placement]

region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:5000/v3

username = placement

password = 123456

Edit /etc/httpd/conf.d/00-nova-placement-api.conf

vim /etc/httpd/conf.d/00-nova-placement-api.conf

Add the following

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

4. Restart the httpd service

[root@controller ~]# systemctl restart httpd

5. Sync the nova_api database

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

6. Register the cell0 database

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

7. Create the cell1 cell

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

54e6c270-7390-4390-8702-02b72874c5a7

8. Sync the nova database

[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

9. Verify that cell0 and cell1 were registered correctly

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

10. Start the services

[root@controller ~]# systemctl start openstack-nova-api.service \

openstack-nova-scheduler.service openstack-nova-conductor.service \

openstack-nova-novncproxy.service

[root@controller ~]# systemctl enable openstack-nova-api.service \

openstack-nova-scheduler.service openstack-nova-conductor.service \

openstack-nova-novncproxy.service

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.

PS: The official guide did not start the nova-conductor service here. That service is the one that talks to the database; if it is not running, instance creation will fail.


### compute node

### Install packages

[root@compute ~]# yum install openstack-nova-compute -y

Edit the config file /etc/nova/nova.conf

vim /etc/nova/nova.conf

[DEFAULT]

enabled_apis = osapi_compute,metadata

[DEFAULT]

transport_url = rabbit://openstack:123456@controller

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_url = http://controller:5000/v3

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = 123456

[DEFAULT]

my_ip = 192.168.253.190

[DEFAULT]

use_neutron = true

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[vnc]

enabled = true

server_listen = 0.0.0.0

server_proxyclient_address = $my_ip

novncproxy_base_url = http://192.168.253.188:6080/vnc_auto.html

[glance]

api_servers = http://controller:9292

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[placement]

region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:5000/v3

username = placement

password = 123456

### Check for hardware virtualization support

# egrep -c '(vmx|svm)' /proc/cpuinfo

If the result is 0, add the following under [libvirt] in /etc/nova/nova.conf

[libvirt]

virt_type = qemu

### Start the services

[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service

[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.

### controller node

Confirm the compute node appears in the database

# . admin-openrc

[root@controller ~]# openstack compute service list --service nova-compute

### Discover the compute host

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

*** To discover new compute nodes automatically, add the following under [scheduler] in /etc/nova/nova.conf

[scheduler]

discover_hosts_in_cells_interval = 300
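
The Rocky install guide also ends the Nova setup with a consistency check; a sketch of that final verification on the controller:

[root@controller ~]# nova-status upgrade check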

VII. Install the Neutron service

controller node

### Create the database

[root@controller ~]# mysql -uroot -p123456

MariaDB [(none)]> CREATE DATABASE neutron;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';

### Create the user, service, and endpoints

[root@controller ~]# openstack user create --domain default --password-prompt neutron

[root@controller ~]# openstack role add --project service --user neutron admin

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696

[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696

[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696

### Configure provider networks (networking option 1)

Install packages

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Edit the /etc/neutron/neutron.conf config file

[database]

connection = mysql+pymysql://neutron:123456@controller/neutron

[DEFAULT]

core_plugin = ml2

service_plugins =

[DEFAULT]

transport_url = rabbit://openstack:123456@controller

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = 123456

[DEFAULT]

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

[nova]

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = 123456

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Edit the config file /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan

[ml2]

tenant_network_types =

[ml2]

mechanism_drivers = linuxbridge

[ml2]

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[securitygroup]

enable_ipset = true

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini config file

[linux_bridge]

physical_interface_mappings = provider:eth1

[vxlan]

enable_vxlan = false

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit the config file /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

### Configure self-service networks (networking option 2)

Install packages

# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Configure the /etc/neutron/neutron.conf file

[database]

connection = mysql+pymysql://neutron:123456@controller/neutron

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = true

[DEFAULT]

transport_url = rabbit://openstack:123456@controller

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = 123456

[DEFAULT]

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

[nova]

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = 123456

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file

[ml2]

type_drivers = flat,vlan,vxlan

[ml2]

tenant_network_types = vxlan

[ml2]

mechanism_drivers = linuxbridge,l2population

[ml2]

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = true

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file

[linux_bridge]

physical_interface_mappings = provider:eth1

[vxlan]

enable_vxlan = true

local_ip = 192.168.253.188

l2_population = true

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit the /etc/neutron/l3_agent.ini file

[DEFAULT]

interface_driver = linuxbridge

Edit the /etc/neutron/dhcp_agent.ini file

[DEFAULT]

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

Edit the /etc/neutron/metadata_agent.ini file

[DEFAULT]

nova_metadata_host = controller

metadata_proxy_shared_secret = METADATA_SECRET

Edit the /etc/nova/nova.conf file

[neutron]

url = http://controller:9696

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = 123456

service_metadata_proxy = true

metadata_proxy_shared_secret = METADATA_SECRET

### Create the symlink

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

### Sync the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file

### Start the services

[root@controller ~]# systemctl restart openstack-nova-api

[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

If you chose the self-service network option, also start this service

[root@controller ~]# systemctl start neutron-l3-agent.service

[root@controller ~]# systemctl enable neutron-l3-agent.service

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service.

compute node

Install packages

[root@compute ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

Edit the /etc/neutron/neutron.conf config file

[DEFAULT]

transport_url = rabbit://openstack:123456@controller

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = 123456

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Configure provider networks (networking option 1)

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini config file

[linux_bridge]

physical_interface_mappings = provider:eth1

[vxlan]

enable_vxlan = false

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure self-service networks (networking option 2)

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini config file

[linux_bridge]

physical_interface_mappings = provider:eth1

[vxlan]

enable_vxlan = true

local_ip = 192.168.253.190

l2_population = true

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the /etc/nova/nova.conf file

[neutron]

url = http://controller:9696

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = 123456

Start the services

[root@compute ~]# systemctl restart openstack-nova-compute

[root@compute ~]# systemctl start neutron-linuxbridge-agent.service

[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

Verify

[root@controller ~]# openstack network agent list


VIII. Install the Dashboard

controller node

Install packages

[root@controller ~]# yum install -y openstack-dashboard

Edit the config file /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*', 'localhost']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {

    'default': {

        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

        'LOCATION': 'controller:11211',

    }

}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {

    "identity": 3,

    "image": 2,

    "volume": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {

    ...

    'enable_router': False,

    'enable_quotas': False,

    'enable_distributed_router': False,

    'enable_ha_router': False,

    'enable_lb': False,

    'enable_firewall': False,

    'enable_vpn': False,

    'enable_fip_topology_check': False,

}

Edit /etc/httpd/conf.d/openstack-dashboard.conf and add

WSGIApplicationGroup %{GLOBAL}

Restart the services

[root@controller ~]# systemctl restart httpd.service memcached.service

IX. Verify

Open http://192.168.253.188/dashboard in a browser






3. Configure Nginx as a reverse proxy for Tomcat, using Redis for session persistence.


Environment:

Nginx + Redis: 192.168.2.198

Tomcat 1: 192.168.2.197

Tomcat 2: 192.168.2.199

I. Configure the Java environment and web app directory on both Tomcat hosts

1. Install the JDK

#rpm -ivh jdk-8u151-linux-x64.rpm

2. Download and extract Tomcat

Tomcat download: http://tomcat.apache.org/

#tar xf apache-tomcat-7.0.78.tar.gz -C /usr/local/

#cd /usr/local/

#ln -s apache-tomcat-7.0.78 tomcat

3. Configure the PATH environment variable

#vim /etc/profile.d/tomcat.sh

export CATALINA_BASE=/usr/local/tomcat

export PATH=$CATALINA_BASE/bin:$PATH

#chmod +x /etc/profile.d/tomcat.sh        # make it executable

#source /etc/profile.d/tomcat.sh

4. Create a directory for the test page

#mkdir /usr/local/tomcat/webapps/test/

5. Create the test page (TomcatA on .197, TomcatB on .199)

[root@localhost conf]#vim /usr/local/tomcat/webapps/test/index.jsp

<%@ page language="java" %>

<html>

<head><title>TomcatA</title></head>

<body>

<h1><font color="red">TomcatA.linuxinfo.top</font></h1>

<table align="center" border="1">

<tr>

<td>SessionID</td>

<% session.setAttribute("linuxinfo.top","linuxinfo.top");%>

<td><%=session.getId() %></td>

</tr>

<tr>

<td>Created on</td>

<td><%=session.getCreationTime() %></td>

</tr>

</table>

</body>

</html>

6. Start Tomcat on each node

#catalina.sh start                # or use the full path: /usr/local/tomcat/bin/startup.sh

7. Access the test page on 192.168.2.197 and note the session ID

II. Configure Nginx load balancing

#vim /usr/local/nginx/conf/nginx.conf

upstream tomcat{

server 192.168.2.197:8080;

server 192.168.2.199:8080;

}

server {

listen 80;

server_name localhost;

location / {

proxy_pass http://tomcat;

}

}
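
After editing the config, a minimal sketch to validate it, reload Nginx, and watch the two backends alternate (the binary path assumes a source build under /usr/local/nginx, matching the conf path above):

#/usr/local/nginx/sbin/nginx -t

#/usr/local/nginx/sbin/nginx -s reload

#curl -s http://192.168.2.198/test/index.jsp | grep -o 'Tomcat[AB]'      # repeat a few times: TomcatA and TomcatB should alternate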

III. Redis configuration

1. Download the following jars and place them in /usr/local/tomcat/lib.

asm-5.2.jar

kryo-3.0.3.jar

kryo-serializers-0.45.jar

memcached-session-manager-2.3.2.jar

memcached-session-manager-tc8-2.3.2.jar

minlog-1.3.1.jar

msm-kryo-serializer-2.3.2.jar

objenesis-2.6.jar

reflectasm-1.11.9.jar

spymemcached-2.12.3.jar

jedis.jar

2. Install Redis

# yum install redis

# vim /etc/redis.conf

bind 0.0.0.0

# systemctl start redis

3. Configure the session manager in Tomcat's context.xml

#vim /usr/local/tomcat/conf/context.xml

<Context>

...

<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"

memcachedNodes="redis://192.168.2.198:6379"

sticky="false"

sessionBackupAsync="false"

lockingMode="uriPattern:/path1|/path2"

requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"

transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"

/>

</Context>

4. Test by accessing the proxy at 192.168.2.198: the session ID stays the same across backends, confirming session persistence.
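
The same check from the command line, as a sketch: save the session cookie from the first request, replay it, and the backend name (TomcatA/TomcatB) may change while the SessionID shown on the page stays constant:

#curl -s -c /tmp/msm.cookie http://192.168.2.198/test/index.jsp

#curl -s -b /tmp/msm.cookie http://192.168.2.198/test/index.jsp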
