OpenStack Installation

1. Environment Preparation

CentOS 7

1.1 yum repository setup

  • yum list |grep openstack

    centos-release-openstack-newton.noarch 1-2.el7 extras
    centos-release-openstack-ocata.noarch 1-2.el7 extras
    centos-release-openstack-pike.x86_64 1-1.el7 extras
    centos-release-openstack-queens.x86_64 1-1.el7.centos extras
  • yum install centos-release-openstack-queens -y (this drops the OpenStack yum repo files into /etc/yum.repos.d/)

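  • To confirm the repo files were generated, a quick check (no output means the step above failed):

    yum repolist enabled | grep -i openstack
    ls /etc/yum.repos.d/ | grep -i openstack
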
1.2 OpenStack client

  yum install python-openstackclient -y
  yum install openstack-selinux -y

2. Installation

2.1 Installing the MariaDB database

OpenStack services store their state in a database; MariaDB, MySQL, and PostgreSQL are all supported. The database runs on the controller node.

  • Remove any pre-installed MySQL/MariaDB packages
    rpm -qa | grep -E 'mysql|mariadb'
    rpm -e --nodeps mysql-community-common-5.7.9-1.el7.x86_64
    rpm -e --nodeps mysql-community-libs-5.7.9-1.el7.x86_64
    rpm -e --nodeps mysql-community-client-5.7.9-1.el7.x86_64
    rpm -e --nodeps mysql-community-server-5.7.9-1.el7.x86_64
    
  • Install MariaDB
     yum install mariadb mariadb-server python2-PyMySQL  -y
    
  • Edit the configuration (/etc/my.cnf.d/mariadb-server.cnf)
      [mysqld]
      bind-address = 10.20.16.229
      default-storage-engine = innodb
      innodb_file_per_table = on
      max_connections = 4096
      collation-server = utf8_general_ci
      character-set-server = utf8
      # directories planned out in advance
      datadir=/data/openstack/mysql/data
      socket=/data/openstack/mysql/mysql.sock
      log-error=/data/openstack/mysql/log/mariadb.log
      pid-file=/data/openstack/mysql/mariadb.pid
    
    
  • Change ownership of the working directory
      chown mysql:mysql -R /data/openstack/mysql
    
  • Enable and start the service
      systemctl enable mariadb.service
      systemctl start mariadb.service
    
  • Run the initial setup (see the check below)
      # secure the default accounts
      mysql_secure_installation
      # allow remote access (other nodes will connect later); run inside the mysql shell
      GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'ips';
    
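  • A quick check that the bind address and the remote grant took effect (the 10.20.16.229 address comes from the config above):
      # locally on the controller
      mysql -u root -p -e "SELECT user, host FROM mysql.user;"
      # from another node
      mysql -h 10.20.16.229 -u root -p -e "SHOW DATABASES;"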

2.2 Installing the RabbitMQ message queue

  • Remove any old version (omitted)
  • Install
    yum install rabbitmq-server -y
    
  • Create the account and set its permissions
    # RABBIT_PASS is set to ips here
    rabbitmqctl add_user openstack RABBIT_PASS
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    
  • Create the working directory
    mkdir -p /data/openstack/rabbitmq
    chown rabbitmq:rabbitmq -R /data/openstack/rabbitmq
    
  • Edit the unit file (/usr/lib/systemd/system/rabbitmq-server.service)
    Environment=RABBITMQ_LOG_BASE=/data/openstack/rabbitmq/log
    WorkingDirectory=/data/openstack/rabbitmq/data
    
  • Enable and start
    systemctl enable rabbitmq-server.service
    systemctl start rabbitmq-server.service
    
  • Optionally enable the management plugin for easier administration (RabbitMQ specifics, not covered in detail)
     rabbitmq-plugins enable rabbitmq_management
     systemctl restart rabbitmq-server
     Log in at http://ip:15672/
     Note: the account must have the administrator tag
    
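  • A quick sanity check of the openstack account before moving on; both commands run on the RabbitMQ host:
     rabbitmqctl list_users
     rabbitmqctl list_permissions
     # the openstack user should show ".*" for configure, write and read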

2.3 Installing Memcached

  • Remove any old version (omitted)
  • Install
    yum install memcached python-memcached -y
    
  • Edit the configuration file (/etc/sysconfig/memcached)
    PORT="11211"
    USER="memcached"
    MAXCONN="1024"
    CACHESIZE="64"
    # the key change: add controller to the listen list
    OPTIONS="-l 127.0.0.1,::1,controller"
    
  • Enable and start
    systemctl enable memcached.service
    systemctl start memcached.service
    
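  • A minimal reachability check, assuming nc (nmap-ncat) is installed:
     printf 'stats\r\nquit\r\n' | nc controller 11211 | head -n 5
     # a block of STAT lines means memcached is answering on the controller address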

2.4 Identity service keystone (controller node)

  • Create the database
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'  IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'  IDENTIFIED BY 'ips';
    
  • Install the packages

    yum install openstack-keystone httpd mod_wsgi -y
    
  • Configure keystone (edit /etc/keystone/keystone.conf)

     [database]
     ···
     connection = mysql+pymysql://keystone:ips@controller/keystone
    
     [token]
     ...
     # the official Queens guide uses the fernet provider; its keys are initialized below
     provider = fernet
    
  • Initialize the Identity service database and the Fernet key repositories

      su -s /bin/sh -c "keystone-manage db_sync" keystone
      keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    
  • Bootstrap the Identity service:

      # In Queens a single port (5000) serves all interfaces; in earlier releases 5000 served the public API and 35357 the admin API only. Replace ADMIN_PASS here with ips.
      keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:5000/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne
    
  • Configure the Apache HTTP server (/etc/httpd/conf/httpd.conf)
    vim /etc/httpd/conf/httpd.conf

     ServerName controller  
    

    cp -f /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

       # the main change is where the log files are written
        Listen 5000
        Listen 35357
    
       <VirtualHost *:5000>
          WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-public
          WSGIScriptAlias / /usr/bin/keystone-wsgi-public
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          <IfVersion >= 2.4>
            ErrorLogFormat "%{cu}t %M"
          </IfVersion>
          ErrorLog /data/openstack/httpd/keystone-error.log
          CustomLog /data/openstack/httpd/keystone-access.log combined
    
          <Directory /usr/bin>
              <IfVersion >= 2.4>
                  Require all granted
              </IfVersion>
              <IfVersion < 2.4>
                  Order allow,deny
                  Allow from all
              </IfVersion>
          </Directory>
      </VirtualHost>
    
      <VirtualHost *:35357>
          WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-admin
          WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          <IfVersion >= 2.4>
            ErrorLogFormat "%{cu}t %M"
          </IfVersion>
          ErrorLog /data/openstack/httpd/keystone-error.log
          CustomLog /data/openstack/httpd/keystone-access.log combined
    
          <Directory /usr/bin>
              <IfVersion >= 2.4>
                  Require all granted
              </IfVersion>
              <IfVersion < 2.4>
                 Order allow,deny
                  Allow from all
              </IfVersion>
          </Directory>
      </VirtualHost>
    
  • Create an admin-rc file with the following content

    export OS_USERNAME=admin
    export OS_PASSWORD=ips
    export OS_PROJECT_NAME=admin
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_AUTH_URL=http://controller:35357/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    
  • Create a domain, project, user, and role

     # create a domain (a domain named "default" already exists by default)
      openstack domain create --description "An Example Domain" example
      openstack project create --domain default  --description "Service Project" service
     # create a project
      openstack project create --domain default --description "Demo Project" demo
     # create a user; set the password to ips when prompted
      openstack user create --domain default   --password-prompt demo
     # create the user role
      openstack role create user
     # bind the user, role and project together
      openstack role add --project demo --user demo user
     # verify
      unset OS_AUTH_URL OS_PASSWORD
      openstack --os-auth-url http://controller:35357/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
    
  • Create the client environment files: admin-rc was created above; now create demo-rc

      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_USER_DOMAIN_NAME=Default
      export OS_PROJECT_NAME=demo
      export OS_USERNAME=demo
      export OS_PASSWORD=ips
      export OS_AUTH_URL=http://controller:5000/v3
      export OS_IDENTITY_API_VERSION=3
      export OS_IMAGE_API_VERSION=2
    
    
  • Section Q&A
    QA1: Error: Package: perl-DBD-MySQL-4.023-5.el7.x86_64 (@base)

       rpm -ivh mysql-community-libs-compat-5.7.18-1.el7.x86_64.rpm
    

    QA2: Missing value auth-url required for auth plugin password

    source admin-rc
    

    QA3: Invalid command 'WSGIDaemonProcess', perhaps misspelled or defined by a module not included in the server configuration

     # It is covered in the install instructions, but when troubleshooting httpd the module sometimes gets removed along with it; reinstall it together with httpd
     yum install mod_wsgi
    

    QA4: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9a49935d-49a6-4673-ae3b-193d53eb0444)

     # Mistakes during installation are inevitable. If this error appears when you come back to fix things, either a password was changed at some point, or an earlier step has not yet taken effect; re-running bootstrap covers both
         keystone-manage bootstrap --bootstrap-password ips \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
    
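  • With both RC files in place, a token request is the simplest end-to-end check of each identity:

      source admin-rc && openstack token issue
      source demo-rc && openstack token issue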

2.5 Image service glance (controller node)

  • Create the database
    CREATE DATABASE glance;
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'ips';
    
  • Create the glance user in OpenStack
      # create the user; set the password to ips when prompted
      openstack user create --domain default --password-prompt glance
      # add glance to the service project with the admin role
      openstack role add --project service --user glance admin
      # create the service and endpoints for the image service
      openstack service create --name glance  --description "OpenStack Image" image
      openstack endpoint create --region RegionOne image public http://controller:9292
      openstack endpoint create --region RegionOne image internal http://controller:9292
      openstack endpoint create --region RegionOne image admin http://controller:9292
    
  • Install the packages
    yum install openstack-glance -y
    
  • Edit the configuration files
    /etc/glance/glance-api.conf
     [database]
     connection = mysql+pymysql://glance:ips@controller/glance
    
     [keystone_authtoken]
     auth_uri = http://controller:5000
     auth_url = http://controller:5000
     memcached_servers = controller:11211
     auth_type = password
     project_domain_name = Default
     user_domain_name = Default
     project_name = service
     username = glance
     password = ips
    
     [paste_deploy]
     flavor = keystone
    
    # image store backend and location
     [glance_store]
     stores = file,http
     default_store = file
     filesystem_store_datadir = /data/openstack/glance/images/
    
    /etc/glance/glance-registry.conf
     [database]
     connection = mysql+pymysql://glance:ips@controller/glance
     [keystone_authtoken]
     auth_uri = http://controller:5000
     auth_url = http://controller:5000
     memcached_servers = controller:11211
     auth_type = password
     project_domain_name = Default
     user_domain_name = Default
     project_name = service
     username = glance
     password = ips
    
    [paste_deploy]
     flavor = keystone
    
  • Create the working directories
     mkdir -p /data/openstack/glance/images/
     mkdir -p /data/openstack/glance/log/
     chown glance:glance -R /data/openstack/glance
    
  • Initialize the glance database
     su -s /bin/sh -c "glance-manage db_sync" glance
    
  • Edit openstack-glance-api.service and openstack-glance-registry.service to consolidate the logs, then start
     # the point is to redirect the log directory
     ExecStart=/usr/bin/glance-api --log-dir /data/openstack/glance/log/
     ExecStart=/usr/bin/glance-registry --log-dir /data/openstack/glance/log/
     # start
     systemctl daemon-reload 
     systemctl enable openstack-glance-api.service openstack-glance-registry.service
     systemctl start openstack-glance-api.service openstack-glance-registry.service
    
  • Verify
     # download a test image
     wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
     # import the image
      openstack image create "cirros" \
      --file cirros-0.3.5-x86_64-disk.img \
      --disk-format qcow2 --container-format bare \
      --public
     # list images
      openstack image list
    # fetch a CentOS qcow2 image
      wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
    # import it
    openstack image create "CentOS7" \
      --file CentOS-7-x86_64-GenericCloud.qcow2 \
      --disk-format qcow2 --container-format bare \
      --public
    

2.6 Compute service (nova)

2.6.1 Controller node installation

  • Create the databases
    CREATE DATABASE nova_api;
    CREATE DATABASE nova;
    CREATE DATABASE nova_cell0;
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'ips';
    
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'ips';
    
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'ips';
    flush privileges;
    
  • Create the nova user in OpenStack
      # create the user; set the password to ips when prompted
      openstack user create --domain default --password-prompt nova
      # add nova to the service project with the admin role
      openstack role add --project service --user nova admin
      # create the service and endpoints for compute
      openstack service create --name nova  --description "OpenStack Compute" compute
      openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
      openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
      openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
    
  • Create the placement user in OpenStack
      # create the user; set the password to ips when prompted
      openstack user create --domain default --password-prompt placement
      # add placement to the service project with the admin role
      openstack role add --project service --user placement admin
      # create the service and endpoints for the Placement API
      openstack service create --name placement --description "Placement API" placement
      openstack endpoint create --region RegionOne placement public http://controller:8778
      openstack endpoint create --region RegionOne placement internal http://controller:8778
      openstack endpoint create --region RegionOne placement admin http://controller:8778
    
  • Install the packages on the controller node
    yum install openstack-nova-api openstack-nova-conductor \
    openstack-nova-console openstack-nova-novncproxy \
    openstack-nova-scheduler openstack-nova-placement-api  -y
    
  • Create the working directory
     mkdir -p /data/openstack/nova/
     chown nova:nova -R /data/openstack/nova
    
  • Edit the configuration file (/etc/nova/nova.conf)
     [DEFAULT]
     # ...
     enabled_apis = osapi_compute,metadata
     transport_url = rabbit://openstack:ips@controller
     my_ip = 10.20.16.229
     use_neutron = True
     firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
     [api_database]
     # ...
     connection = mysql+pymysql://nova:ips@controller/nova_api
    
     [database]
     # ...
     connection = mysql+pymysql://nova:ips@controller/nova
    
     [api]
     # ...
     auth_strategy = keystone
    
     [keystone_authtoken]
     # ...
     auth_url = http://controller:5000/v3
     memcached_servers = controller:11211
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     project_name = service
     username = nova
     password = ips
    
     [vnc]
     enabled = true
     # ...
     server_listen = $my_ip
     server_proxyclient_address = $my_ip
    
     [glance]
     # ...
     api_servers = http://controller:9292
    
     [oslo_concurrency]
     # ...
     lock_path = /data/openstack/nova/tmp
    
     [placement]
     # ...
     os_region_name = RegionOne
     project_domain_name = Default
     project_name = service
     auth_type = password
     user_domain_name = Default
     auth_url = http://controller:5000/v3
     username = placement
     password = ips
    
  • Edit /etc/httpd/conf.d/00-nova-placement-api.conf (works around a known upstream packaging bug), then restart httpd
     # upstream bug: add this block
     <Directory /usr/bin>
        <IfVersion >= 2.4>
           Require all granted
        </IfVersion>
        <IfVersion < 2.4>
           Order allow,deny
           Allow from all
        </IfVersion>
     </Directory>
     # restart
     systemctl restart httpd
    
  • Initialize the nova databases, then verify
     # initialize
     su -s /bin/sh -c "nova-manage api_db sync" nova
     su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
     su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
     su -s /bin/sh -c "nova-manage db sync" nova
     # verify
     nova-manage cell_v2 list_cells
    
  • Edit the openstack-nova-*.service unit files to consolidate the logs, then start
      # the point is to redirect the log directory
      # openstack-nova-api.service
      ExecStart=/usr/bin/nova-api --log-dir /data/openstack/nova/log/
      # openstack-nova-consoleauth.service
      ExecStart=/usr/bin/nova-consoleauth --log-dir /data/openstack/nova/log/
      # openstack-nova-scheduler.service
      ExecStart=/usr/bin/nova-scheduler --log-dir /data/openstack/nova/log/
      # openstack-nova-conductor.service
      ExecStart=/usr/bin/nova-conductor  --log-dir /data/openstack/nova/log/
      # openstack-nova-novncproxy.service
      ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS --log-dir /data/openstack/nova/log/
    
     # enable and start
     systemctl daemon-reload
     systemctl enable openstack-nova-api.service \
      openstack-nova-consoleauth.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
     systemctl start openstack-nova-api.service \
      openstack-nova-consoleauth.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
    
  • Section Q&A
    QA1: the upstream packaging bug in /etc/httpd/conf.d/00-nova-placement-api.conf, already handled in the placement step above

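  • Before moving on to the compute node, a quick sanity check that the controller-side services registered (a sketch; uses the admin credentials from above):

      source admin-rc
      openstack compute service list
      # nova-consoleauth, nova-scheduler and nova-conductor should all report State "up"
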
2.6.2 Compute node installation

  • Install the packages on the compute node

    yum install openstack-nova-compute -y
    
  • Edit the configuration file (/etc/nova/nova.conf)

    [DEFAULT]
      # ...
     verbose = True
     # replace with the IP address of the compute node's management network interface (the official guide uses 10.0.0.31 for the first node in its example architecture)
     my_ip = 10.20.16.228
     enabled_apis = osapi_compute,metadata
     transport_url= rabbit://openstack:ips@controller
     use_neutron = True
     firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
    [api]
    # ...
    auth_strategy = keystone
    
    [keystone_authtoken]
    # ...
    auth_url = http://controller:5000/v3
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = ips
    
    [vnc]
    # ...
    enabled = True
    # the server component listens on all IP addresses
    vncserver_listen = 0.0.0.0
    # the proxy component listens only on the management interface IP of the compute node
    vncserver_proxyclient_address = $my_ip
    # where a web browser reaches the remote consoles of instances on this compute node
    novncproxy_base_url = http://controller:6080/vnc_auto.html
    
    [glance]
    # ...
    api_servers = http://controller:9292
    
     # lock path
    [oslo_concurrency]
    # (optional) to aid troubleshooting, enable verbose = True in the [DEFAULT] section
    lock_path = /data/openstack/nova/tmp
    [placement]
    # ...
    os_region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:5000/v3
    username = placement
    password = ips
    
  • Check the CPU flags to confirm hardware virtualization support

      egrep -c '(vmx|svm)' /proc/cpuinfo
      # If this returns one or greater, the node supports hardware acceleration and the default kvm can stay. If it returns 0, set virt_type to qemu in /etc/nova/nova.conf:
      [libvirt]
      ...
      virt_type = qemu
     
    
  • Redirect the log directory and start the Compute service

      # openstack-nova-compute.service
       ExecStart=/usr/bin/nova-compute --log-dir /data/openstack/nova/compute
      # start
       systemctl  daemon-reload
       systemctl enable libvirtd.service openstack-nova-compute.service
       systemctl start libvirtd.service openstack-nova-compute.service
    
  • Register the new compute node in the cell database

      openstack compute service list --service nova-compute
      # run this on the controller every time a node is added
      su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
      # alternatively, configure periodic discovery in /etc/nova/nova.conf instead of running it by hand
      [scheduler]
      discover_hosts_in_cells_interval = 300
    
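  • Once discovery has run, the new node should also appear as a hypervisor (a quick check with the admin credentials):

      source admin-rc
      openstack hypervisor list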

2.7 Networking service (neutron)

2.7.1 Controller node

  • Create the database
    CREATE DATABASE neutron;
    GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'ips';
     flush privileges;
    
  • Create the neutron user in OpenStack
      # create the user; set the password to ips when prompted
      openstack user create --domain default --password-prompt neutron
      # add neutron to the service project with the admin role
      openstack role add --project service --user neutron admin
      # create the service and endpoints for networking
      openstack service create --name neutron  --description "OpenStack Networking" network
      openstack endpoint create --region RegionOne  network public http://controller:9696
      openstack endpoint create --region RegionOne  network internal http://controller:9696
      openstack endpoint create --region RegionOne  network admin http://controller:9696
    
  • Install the packages (provider networks option)
     yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
    
  • Edit the configuration
    /etc/neutron/neutron.conf
     [DEFAULT]
     # ...
     core_plugin = ml2
     service_plugins =
     transport_url = rabbit://openstack:ips@controller
     auth_strategy = keystone
     notify_nova_on_port_status_changes = true
     notify_nova_on_port_data_changes = true
    
     [keystone_authtoken]
     # ...
     auth_uri = http://controller:5000
     auth_url = http://controller:35357
     memcached_servers = controller:11211
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     project_name = service
     username = neutron
     password = ips
    
     [nova]
     # ...
     auth_url = http://controller:35357
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     region_name = RegionOne
     project_name = service
     username = nova
     password = ips
    
     [oslo_concurrency]
     # create the working directory in advance:
     # mkdir -p /data/openstack/neutron/lock
     # chown neutron:neutron -R /data/openstack/neutron
     lock_path = /data/openstack/neutron/lock
    
    Modular Layer 2 (ML2) plug-in: /etc/neutron/plugins/ml2/ml2_conf.ini
     [ml2]
     # ...
     type_drivers = flat,vlan
     tenant_network_types =
     mechanism_drivers = linuxbridge
     extension_drivers = port_security
    
     [ml2_type_flat]
     # ...
     flat_networks = provider
    
     [securitygroup]
     # ...
     enable_ipset = true
    

    Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini

     [linux_bridge]
     physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    
     [vxlan]
     enable_vxlan = false
    
     [securitygroup]
     # ...
     enable_security_group = true
     firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    
    iptables: add the following to /usr/lib/sysctl.d/00-system.conf, then apply it with sysctl --system
       net.bridge.bridge-nf-call-ip6tables = 1
       net.bridge.bridge-nf-call-iptables = 1
    
    DHCP agent: /etc/neutron/dhcp_agent.ini
     [DEFAULT]
     # ...
     interface_driver = linuxbridge
     dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
     enable_isolated_metadata = true
    
    Metadata agent: /etc/neutron/metadata_agent.ini
     [DEFAULT]
     # ...
     nova_metadata_host = controller
     metadata_proxy_shared_secret = ips
    
    /etc/nova/nova.conf (leave the options configured earlier unchanged)
     [neutron]
     url = http://controller:9696
     auth_url = http://controller:35357
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     region_name = RegionOne
     project_name = service
     username = neutron
     password = ips
     service_metadata_proxy = true
     metadata_proxy_shared_secret = ips
    
  • Link the plugin configuration to the expected path
      ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    
  • Initialize the database
      su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    
  • Restart the nova API service, since its configuration was just changed
       systemctl restart openstack-nova-api.service
    
  • Edit the unit files (mainly to redirect the logs), then start
    # /usr/lib/systemd/system/neutron-server.service
    ExecStart=/usr/bin/neutron-server \
    --config-file /usr/share/neutron/neutron-dist.conf \
    --config-dir /usr/share/neutron/server \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugin.ini \
    --config-dir /etc/neutron/conf.d/common \
    --config-dir /etc/neutron/conf.d/neutron-server \
    --log-file /data/openstack/neutron/log/server.log
    
    # /usr/lib/systemd/system/neutron-linuxbridge-agent.service
    ExecStart=/usr/bin/neutron-linuxbridge-agent \
    --config-file /usr/share/neutron/neutron-dist.conf \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini \
    --config-dir /etc/neutron/conf.d/common \
    --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent \
    --log-file /data/openstack/neutron/log/linuxbridge-agent.log
    
    # /usr/lib/systemd/system/neutron-dhcp-agent.service
    ExecStart=/usr/bin/neutron-dhcp-agent \
    --config-file /usr/share/neutron/neutron-dist.conf \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/dhcp_agent.ini \
    --config-dir /etc/neutron/conf.d/common \
    --config-dir /etc/neutron/conf.d/neutron-dhcp-agent \
    --log-file /data/openstack/neutron/log/dhcp-agent.log
    
    # /usr/lib/systemd/system/neutron-metadata-agent.service
    ExecStart=/usr/bin/neutron-metadata-agent \
    --config-file /usr/share/neutron/neutron-dist.conf \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/metadata_agent.ini \
    --config-dir /etc/neutron/conf.d/common \
    --config-dir /etc/neutron/conf.d/neutron-metadata-agent \
    --log-file /data/openstack/neutron/log/metadata-agent.log
    
    # enable and start
    systemctl daemon-reload
    systemctl enable neutron-server.service \
    neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    neutron-metadata-agent.service
    systemctl start neutron-server.service \
    neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    neutron-metadata-agent.service
    
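  • Verify that the agents registered (run on the controller):
     source admin-rc
     openstack network agent list
     # the Linux bridge, DHCP and metadata agents should all show Alive ":-)"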

2.7.2 Compute node

  • Install the packages on the compute node
      yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
    
  • Edit the configuration files
    /etc/neutron/neutron.conf
    [DEFAULT]
    ...
    # RabbitMQ message queue access
    transport_url = rabbit://openstack:ips@controller

    # identity service access
    auth_strategy = keystone
    verbose = True
    
    [keystone_authtoken]
    # ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = ips
    
     # lock path
    [oslo_concurrency]
       ...
    lock_path = /data/openstack/neutron/tmp

    # comment out all ``connection`` options, since compute nodes do not access the database directly
    [database]
    
    
    Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [linux_bridge]
    physical_interface_mappings = provider:eno1
    [vxlan]
    enable_vxlan = false
    [securitygroup]
    # ...
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    
    iptables: add the following to /usr/lib/sysctl.d/00-system.conf, then apply it with sysctl --system
       net.bridge.bridge-nf-call-ip6tables = 1
       net.bridge.bridge-nf-call-iptables = 1
    
    /etc/nova/nova.conf
     [neutron]
     url = http://controller:9696
     auth_url = http://controller:35357
     auth_type = password
     project_domain_name = default
     user_domain_name = default
     region_name = RegionOne
     project_name = service
     username = neutron
     password = ips
    
  • Restart the Compute service, since the nova configuration changed
      # systemctl restart openstack-nova-compute.service
    
  • Edit the unit file, then enable the Linux bridge agent at boot and start it
    # /usr/lib/systemd/system/neutron-linuxbridge-agent.service (create the log directory first)
    # mkdir -p /data/openstack/neutron/log
    # chown neutron:neutron -R /data/openstack/neutron
    ExecStart=/usr/bin/neutron-linuxbridge-agent \
    --config-file /usr/share/neutron/neutron-dist.conf \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini \
    --config-dir /etc/neutron/conf.d/common \
    --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent \
    --log-file /data/openstack/neutron/log/linuxbridge-agent.log
    # start
    systemctl  daemon-reload
    systemctl enable neutron-linuxbridge-agent.service
    systemctl start neutron-linuxbridge-agent.service
    
  • Verify
     openstack extension list --network
    

2.8 Creating an instance

  • Flavor, network, and instance creation (a minimal sketch follows the Q&A below)

  • Section Q&A
      QA1: When creating a server, nova-conductor.log reports the following error:

    2018-05-15 11:45:10.816 5547 ERROR   oslo_messaging.rpc.server MessageDeliveryFailure: Unable to connect to AMQP   server on controller:5672 after None tries: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
    

    Fix: https://blog.silversky.moe/works/openstack-lanuch-instance-infinite-scheduling

     su -s /bin/sh -c "nova-manage db sync" nova
     If the problem persists, check in the database that the configuration is correct:
     SELECT * FROM `nova_api`.`cell_mappings` WHERE `created_at` LIKE BINARY '%openstack%' OR `updated_at` LIKE BINARY '%openstack%' OR `id` LIKE BINARY '%openstack%' OR `uuid` LIKE BINARY '%openstack%' OR `name` LIKE BINARY '%openstack%' OR `transport_url` LIKE BINARY '%openstack%' OR `database_connection` LIKE BINARY '%openstack%' ;
    

    In addition, even with a correct configuration, the same error can be reported when fetching a token with openstack4j; re-run:

       su -s /bin/sh -c "nova-manage db sync" nova
    

    QA2: When creating a server: {u'message': u'No valid host was found. ', u'code': 500, u'created': u'2018-05-17T02:22:47Z'}

       The quota the administrator granted this project allows at most 10 instances,
       20 vCPUs and 5 GB of RAM; exceeding any one of these limits raises this error. This is quota management at work.
        # raise the default quota
       openstack quota set c5ba590cab874f55b1668bad5cd2a6a6 --instances 30 --cores 90 --ram 204800
     
    

    QA3: Build of instance 00b69820-ef36-447c-82ca-7bdec4c70ed2 was re-scheduled: invalid argument: could not find capabilities for domaintype=kvm

      # KVM was disabled in the BIOS
       dmesg | grep kvm
      Reboot into the BIOS setup and enable virtualization
    
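  • A minimal end-to-end sketch for the instance creation referenced above (the subnet range, flavor and names are illustrative, not from the original; adjust to the actual provider network):

     # provider network on the flat "provider" mapping configured in ML2
     openstack network create --share --external \
       --provider-physical-network provider \
       --provider-network-type flat provider
     openstack subnet create --network provider \
       --allocation-pool start=10.20.16.100,end=10.20.16.200 \
       --gateway 10.20.16.1 --subnet-range 10.20.16.0/24 provider
     # a small flavor and a test instance from the CirrOS image imported earlier
     openstack flavor create --id 0 --vcpus 1 --ram 512 --disk 1 m1.nano
     openstack server create --flavor m1.nano --image cirros \
       --network provider test-instance
     openstack server list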

2.9 Installing the dashboard

  • Install the package
      yum install openstack-dashboard -y
    
  • Edit the configuration file (/etc/openstack-dashboard/local_settings)
      # point the dashboard at the OpenStack services on the controller node
      OPENSTACK_HOST = "controller"
      # allow all hosts to access the dashboard
      ALLOWED_HOSTS = ['*', ]
      # configure the memcached session storage service
       CACHES = {
         'default': {
             'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
             'LOCATION': 'controller:11211',
          }
        }
      # default role assigned to users created through the dashboard
       OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
      # enable the multi-domain model
      OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
      # configure the service API versions so that logins go through the Keystone v3 API
      OPENSTACK_API_VERSIONS = {
          "identity": 3,
          "volume": 2,
          "image": 2
      }
     # with networking option 1 (provider networks), disable support for layer-3 services
      OPENSTACK_NEUTRON_NETWORK = {
          ...
        'enable_router': False,
        'enable_quotas': False,
        'enable_distributed_router': False,
        'enable_ha_router': False,
        'enable_lb': False,
        'enable_firewall': False,
        'enable_vpn': False,
        'enable_fip_topology_check': False,
    }
    # optionally set the time zone
    TIME_ZONE = "Asia/Shanghai"
    
  • Start the web server and the session storage service, and enable them at boot
     # systemctl enable httpd.service memcached.service
     # systemctl restart httpd.service memcached.service
    
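  • A quick smoke test before opening a browser (log in at http://controller/dashboard with domain Default, user admin, password ips):
     curl -s -o /dev/null -w "%{http_code}\n" http://controller/dashboard/
     # 200, or a 302 redirect to the login page, means httpd is serving the dashboard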

2.10 Block Storage service cinder (controller and compute nodes)

2.10.1 Controller node

* Create the database
  ```
    CREATE DATABASE cinder;
    GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost'  IDENTIFIED BY 'ips';
    GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'   IDENTIFIED BY 'ips';
    flush privileges;
  ```
  • Create the cinder user in OpenStack
       # create the user; set the password to ips when prompted
       openstack user create --domain default --password-prompt cinder;
       # add cinder to the service project with the admin role
       openstack role add --project service --user cinder admin;
       # create the cinderv2 and cinderv3 services
       openstack service create --name cinderv2  --description "OpenStack Block Storage" volumev2;
       openstack service create --name cinderv3  --description "OpenStack Block Storage" volumev3;
      # create the endpoints
       openstack endpoint create --region RegionOne  volumev2 public http://controller:8776/v2/%\(project_id\)s
       openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
       openstack endpoint create --region RegionOne  volumev2 admin http://controller:8776/v2/%\(project_id\)s
       openstack endpoint create --region RegionOne  volumev3 public http://controller:8776/v3/%\(project_id\)s
       openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
       openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    
  • Install cinder
     yum install openstack-cinder -y
    
  • Edit the configuration file /etc/cinder/cinder.conf
     [DEFAULT]
     # ...
     transport_url = rabbit://openstack:ips@controller
     auth_strategy = keystone
    
     [keystone_authtoken]
     # ...
     auth_uri = http://controller:5000
     auth_url = http://controller:35357
     memcached_servers = controller:11211
     auth_type = password
     project_domain_id = default
     user_domain_id = default
     project_name = service
     username = cinder
     password = ips
    
     [database]
     # ...
     connection = mysql+pymysql://cinder:ips@controller/cinder
    
     # create the directories in advance
     # mkdir -p /data/openstack/cinder/tmp
     # chown cinder:cinder -R /data/openstack/cinder
     [oslo_concurrency]
     # ...
     lock_path = /data/openstack/cinder/tmp
    
    
  • Edit the configuration file /etc/nova/nova.conf
     [cinder]
     os_region_name = RegionOne
    
  • Restart nova
      systemctl restart openstack-nova-api.service
    
  • Initialize the database schema
       su -s /bin/sh -c "cinder-manage db sync" cinder
    
  • Edit the unit files, mainly to consolidate the logs
     # openstack-cinder-api.service
     ExecStart=/usr/bin/cinder-api --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /data/openstack/cinder/log/api.log
     # openstack-cinder-scheduler.service
      ExecStart=/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /data/openstack/cinder/log/scheduler.log
    
  • Enable and start cinder
     systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
     systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
    
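  • Verify (run on the controller):
     source admin-rc
     openstack volume service list
     # cinder-scheduler should be "up"; cinder-volume appears once the node below is configured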

2.10.2 Compute node

  • Add LVM support and install the packages
       yum install lvm2 device-mapper-persistent-data  openstack-cinder targetcli python-keystone -y
      # start the LVM metadata service
      # systemctl enable lvm2-lvmetad.service
      # systemctl start lvm2-lvmetad.service
    
  • Create the physical volume and volume group for Block Storage (cinder creates its logical volumes in this group)
      # prepare the partition nvme0n1p4 beforehand
      pvcreate /dev/nvme0n1p4
      vgcreate cinder-volumes /dev/nvme0n1p4
    
  • Edit the configuration file /etc/lvm/lvm.conf
       devices {
       ...
       # this filter must be correct, otherwise cinder-volume's State will show as down
       filter = [ "a|^/dev/nvme0n1p4$|", "r|.*/|" ]
    
  • Edit the configuration file (/etc/cinder/cinder.conf)
      [DEFAULT]
      # ...
      # RabbitMQ message queue access
      transport_url = rabbit://openstack:ips@controller
      # identity service access
      auth_strategy = keystone
      my_ip = 10.20.16.227
      # enable the LVM backend
      enabled_backends = lvm
      # lock path
      lock_path = /data/openstack/cinder/tmp
      # verbose logging
      verbose = True
      # image service location
      glance_api_servers = http://controller:9292
    
      # database access
      [database]
      ...
      connection = mysql+pymysql://cinder:ips@controller/cinder
    
    # identity service access; comment out or remove any other options
    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    auth_type = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = cinder
    password = ips
    
    # configure the LVM backend: the LVM driver, the cinder-volumes volume group, the iSCSI protocol and the lioadm iSCSI helper (enabled via enabled_backends in [DEFAULT])
    [lvm]
    ...
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    iscsi_protocol = iscsi
    iscsi_helper = lioadm
    
  • Start the Block Storage volume service and its dependencies, and enable them at boot
      # systemctl enable openstack-cinder-volume.service target.service
      # systemctl start openstack-cinder-volume.service target.service
    
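  • Back on the controller, a 1 GB test volume exercises the whole chain (volume name is illustrative):
      source admin-rc
      openstack volume service list
      # cinder-volume@lvm should now be "up"
      openstack volume create --size 1 test-vol
      openstack volume list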

CentOS Image

  • Set a fixed root password
      virt-customize -a CentOS-7-x86_64-GenericCloud.qcow2 --root-password password:root123
    
  • Set passwords for other users
     [root@host229 openstack]# guestfish --rw -a CentOS-7-x86_64-GenericCloud.qcow2
     ><fs> run 
     ><fs> list-filesystems
     /dev/sda1: xfs
     ><fs> mount /dev/sda1 /
     ><fs> vi /etc/cloud/cloud.cfg
    
    Unlock the root account: /etc/cloud/cloud.cfg
    disable_root: 0
    ssh_pwauth:   1
    ...
    system_info:
      default_user:
        name: centos
        lock_passwd: false
        plain_text_passwd: 'root@ips'
    
    Enable SSH login: /etc/ssh/sshd_config
    Port 22
    #AddressFamily any
    ListenAddress 0.0.0.0
    #ListenAddress ::
    PermitRootLogin yes
    PasswordAuthentication yes
    
  • Import the image
     openstack image create "Centos-7" --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare  --public
    