K8S Cluster Binary Installation

K8S Installation and Deployment Guide

1. Basic Operating System Setup
1.1 Hostname and IP Address Plan
Hostname                IP Address
master.example.com      192.168.247.10
master1.example.com     192.168.247.20
container.example.com   192.168.247.30
container1.example.com  192.168.247.40
1.2 Setting the Hostname
[root@master ~]# hostnamectl  set-hostname master.example.com
[root@master ~]# hostnamectl  
   Static hostname: master.example.com
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 4e0a689632f84aa383440a1c5e69f57a
           Boot ID: e1d1b732f7f9457d941fe5014d7097f8
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.el7.x86_64
      Architecture: x86-64
1.3 Configuring the Host IP Address
[root@master ~]# nmcli  connection  modify  ens33  ipv4.addresses  "192.168.247.10/24"
[root@master ~]# nmcli  connection  modify  ens33  ipv4.gateway  192.168.247.2
[root@master ~]# nmcli  connection  modify  ens33  ipv4.dns 192.168.247.2
[root@master ~]# nmcli  connection  modify  ens33  ipv4.method  manual
[root@master ~]# systemctl  restart network
1.4 Configuring Passwordless SSH Login
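The original documents this step with a screenshot. A minimal sketch of the usual procedure, assuming the four hostnames already resolve (e.g. via /etc/hosts):

[root@master ~]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
[root@master ~]# for i in master master1 container container1; do ssh-copy-id root@$i; done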
1.5 Testing Key-Based Login
[root@master ~]# for i in master master1 container container1 ; do ssh root@$i "hostname";done 
master.example.com
master1.example.com
container.example.com
container1.example.com
1.6 Disabling the Firewall on All Machines
[root@master ~]# for i in master master1 container container1
> do 
> ssh root@$i "systemctl disable firewalld"
> ssh root@$i "systemctl stop firewalld"
> done

1.7 Disabling SELinux

[root@master ~]# for i in master master1 container container1;do ssh root@$i  "sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config"; done 
# Changing SELinux in the config file requires a reboot; set it to permissive temporarily:
[root@master ~]# for i in master master1 container container1; do ssh root@$i  "setenforce  0"; done 
# Check the SELinux status
[root@master ~]# for i in master master1 container container1; do ssh root@$i  "getenforce"; done

2. Installing the Certificate Generation Tools
2.1 Installing and Configuring CFSSL
[root@master ~]# for i in master master1 container container1; do ssh root@$i "yum install wget -y"; done

[root@master ~]# mkdir -p /opt/work/_src && cd /opt/work/_src/
[root@master ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@master ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@master ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@master ~]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@master ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master ~]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
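A quick sanity check that the binaries are installed and on the PATH:
[root@master ~]# cfssl version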
3. Pre-Installation Preparation for K8S
3.1 Creating the Directories Required by the K8S Installation
[root@master _src]# for host in master master1 container container1 
> do 
> ssh root@$host "mkdir /opt/work/_app/k8s/etcd/{bin,cfg,ssl} -p"
> done

[root@master _src]# for host in master master1 container container1 
> do
> ssh root@$host "mkdir /opt/work/_app/k8s/kubernetes/{bin,cfg,ssl,ssl_cert} -p"
> done

[root@master _src]# for host in master master1 container container1 ;do  ssh root@$host "mkdir /opt/work/_data/etcd  -p"; done
3.2 Upgrading the OS Kernel

The stock 3.10 kernel lacks the ip_vs_fo.ko module, which prevents kube-proxy from enabling IPVS mode. The ip_vs_fo.ko module first appeared in kernel 3.19, and that kernel version is not available in the common RPM repositories of RHEL-family distributions.

[root@master1 ~]# cat update_kernel.sh 
#!/bin/bash
# Install the ELRepo repository, then the latest mainline kernel
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
# Make the newly installed kernel (menu entry 0) the default and reboot into it
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
reboot
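The upgrade has to happen on every node. A sketch for driving it from master (the script location is an assumption; each node reboots at the end of the script, so its ssh session will drop):
[root@master ~]# for i in master master1 container container1; do scp update_kernel.sh root@$i:/root/ && ssh root@$i "sh /root/update_kernel.sh"; done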
4. Installing Docker
4.1 Setting Up the Yum Repository

Install the required packages: yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are storage drivers required by devicemapper.

[root@master1 ~]# yum -y remove docker-ce.x86_64 docker-ce-cli.x86_64 containerd.io.x86_64
[root@master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master1 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
4.2 Installing Docker
[root@master1 ~]# sudo yum install docker-ce-18.06.1.ce-3.el7 -y
4.3 Starting the Docker Service
[root@master1 ~]# systemctl start  docker 
[root@master1 ~]# systemctl enable docker 
5. Installing the ETCD Database
5.1 Creating the Certificate Files Required by ETCD
5.1.1 Creating the CA Signing Policy File (ca-config.json)
[root@master ~]# mkdir -p /opt/work/_src/ssl_etcd
[root@master ~]# cd /opt/work/_src/ssl_etcd
[root@master ssl_etcd]# cat << EOF | tee ca-config.json
> {
>   "signing": {
>     "default": {
>       "expiry": "87600h"
>     },
>     "profiles": {
>       "etcd": {
>          "expiry": "87600h",
>          "usages": [
>             "signing",
>             "key encipherment",
>             "server auth",
>             "client auth"
>         ]
>       }
>     }
>   }
> }
> EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

5.1.2 Explanation of the File

The default policy sets the certificate validity to 10 years (87600h).
The etcd profile specifies what the certificates may be used for:
signing: the certificate may be used to sign other certificates (the generated ca.pem carries CA=TRUE)
server auth: a client may use this CA to verify certificates presented by a server
client auth: a server may use this CA to verify certificates presented by a client

5.1.3 Creating the ETCD CA CSR File (ca-csr.json)
[root@master ssl_etcd]# cat << EOF | tee ca-csr.json
> {
>     "CN": "etcd CA",
>     "key": {
>         "algo": "rsa",
>         "size": 2048
>     },
>     "names": [
>         {
>             "C": "CN",
>             "L": "Beijing",
>             "ST": "Beijing"
>         }
>     ]
> }
> EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
5.1.4 Creating the ETCD Server CSR File (server-csr.json)
[root@master ssl_etcd]# cat << EOF | tee server-csr.json
> {
>     "CN": "etcd",
>     "hosts": [
>     "10.0.0.100",
>     "10.0.0.101",
>     "10.0.0.102",
>     "10.0.0.103"
>     ],
>     "key": {
>         "algo": "rsa",
>         "size": 2048
>     },
>     "names": [
>         {
>             "C": "CN",
>             "L": "Beijing",
>             "ST": "Beijing"
>         }
>     ]
> }
> EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.247.10",
    "192.168.247.20",
    "192.168.247.30",
    "192.168.247.40"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
5.1.5 Generating the ETCD CA Certificate and Private Key
[root@master ssl_etcd]# ls
ca-config.json  ca-csr.json  server-csr.json
[root@master ssl_etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/07/14 02:46:41 [INFO] generating a new CA key and certificate from CSR
2019/07/14 02:46:41 [INFO] generate received request
2019/07/14 02:46:41 [INFO] received CSR
2019/07/14 02:46:41 [INFO] generating key: rsa-2048
2019/07/14 02:46:41 [INFO] encoded CSR
2019/07/14 02:46:41 [INFO] signed certificate with serial number 577069807335370346271604321876953840117122975888
[root@master ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json
5.1.6 Generating the ETCD Server Certificate and Private Key
[root@master ssl_etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/07/14 02:46:53 [INFO] generate received request
2019/07/14 02:46:53 [INFO] received CSR
2019/07/14 02:46:53 [INFO] generating key: rsa-2048
2019/07/14 02:46:54 [INFO] encoded CSR
2019/07/14 02:46:54 [INFO] signed certificate with serial number 64732614189681341342412834480728803988033706392
2019/07/14 02:46:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master ssl_etcd]# ls
ca-config.json  ca-csr.json  ca.pem      server-csr.json  server.pem
ca.csr          ca-key.pem   server.csr  server-key.pem
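The contents of any generated certificate can be inspected with the cfssl-certinfo tool installed in section 2.1, for example:
[root@master ssl_etcd]# cfssl-certinfo -cert server.pem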
5.1.7 Copying the Generated Certificates to the etcd Certificate Directory
[root@master ssl_etcd]# cp *.pem /opt/work/_app/k8s/etcd/ssl/
5.2 Installing ETCD
5.2.1 Downloading etcd
[root@master ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
Resolving github.com (github.com)... 13.229.188.59
Connecting to github.com (github.com)|13.229.188.59|:443... connected.
100%[========================================>] 9,690,047    689KB/s  in 20s    

2019-07-14 02:51:37 (479 KB/s) - ‘etcd-v3.3.10-linux-amd64.tar.gz’ saved [9690047/9690047]

[root@master ~]# ls
etcd-v3.3.10-linux-amd64.tar.gz  work
[root@master ~]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master ~]# cp etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/work/_app/k8s/etcd/bin/
5.2.2 Creating the ETCD systemd Unit File
## Create the /usr/lib/systemd/system/etcd.service file and save it with the following content:
[root@master ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/work/_app/k8s/etcd/cfg/etcd.conf
ExecStart=/opt/work/_app/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=${ETCD_INITIAL_CLUSTER_STATE} \
--cert-file=/opt/work/_app/k8s/etcd/ssl/server.pem \
--key-file=/opt/work/_app/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/work/_app/k8s/etcd/ssl/server.pem \
--peer-key-file=/opt/work/_app/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/work/_app/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/work/_app/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install] 
WantedBy=multi-user.target
5.2.3 Copying the ETCD Binaries, Certificates, and systemd Unit File to the Other Nodes
[root@master ~]# for host in master master1 container container1; do scp -r /opt/work/_app/k8s/etcd $host:/opt/work/_app/k8s/; done;

[root@master ~]# for N in master master1 container container1; do scp -r /usr/lib/systemd/system/etcd.service root@$N:/usr/lib/systemd/system/etcd.service; done;
5.2.4 ETCD Main Configuration Files
Configuration file on master:
[root@master cfg]#  cat etcd.conf  
#[Member]
# Name of this etcd node
ETCD_NAME="etcd00"
# Data directory
ETCD_DATA_DIR="/opt/work/_data/etcd"
# URLs this node listens on for peer traffic; multiple URLs are comma-separated, each in the form scheme://IP:PORT, where scheme is http or https
ETCD_LISTEN_PEER_URLS="https://192.168.247.10:2380"
# URLs this node listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://192.168.247.10:2379"

#[Clustering]
# Peer URLs this member advertises to the rest of the cluster; cluster data is transported over these addresses, so they must be reachable from every other member
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.247.10:2380"
# Client URLs this member advertises to the cluster
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.247.10:2379"
# All members of the cluster, each in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma-separated
ETCD_INITIAL_CLUSTER="etcd00=https://192.168.247.10:2380,etcd01=https://192.168.247.20:2380,etcd02=https://192.168.247.30:2380,etcd03=https://192.168.247.40:2380"
# Token for the initial cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/opt/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/opt/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/opt/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/opt/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/opt/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
Configuration file on master1:
[root@master1 ~]# cat /opt/work/_app/k8s/etcd/cfg/etcd.conf 
#[Member]
# Name of this etcd node
ETCD_NAME="etcd01"
# Data directory
ETCD_DATA_DIR="/opt/work/_data/etcd"
# URLs this node listens on for peer traffic; multiple URLs are comma-separated, each in the form scheme://IP:PORT, where scheme is http or https
ETCD_LISTEN_PEER_URLS="https://192.168.247.20:2380"
# URLs this node listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://192.168.247.20:2379"

#[Clustering]
# Peer URLs this member advertises to the rest of the cluster; cluster data is transported over these addresses, so they must be reachable from every other member
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.247.20:2380"
# Client URLs this member advertises to the cluster
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.247.20:2379"
# All members of the cluster, each in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma-separated
ETCD_INITIAL_CLUSTER="etcd00=https://192.168.247.10:2380,etcd01=https://192.168.247.20:2380,etcd02=https://192.168.247.30:2380,etcd03=https://192.168.247.40:2380"
# Token for the initial cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/opt/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/opt/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/opt/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/opt/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/opt/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
Configuration file on container:
[root@container etcd]# cat cfg/etcd.conf 
#[Member]
# Name of this etcd node
ETCD_NAME="etcd02"
# Data directory
ETCD_DATA_DIR="/opt/work/_data/etcd"
# URLs this node listens on for peer traffic; multiple URLs are comma-separated, each in the form scheme://IP:PORT, where scheme is http or https
ETCD_LISTEN_PEER_URLS="https://192.168.247.30:2380"
# URLs this node listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://192.168.247.30:2379"

#[Clustering]
# Peer URLs this member advertises to the rest of the cluster; cluster data is transported over these addresses, so they must be reachable from every other member
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.247.30:2380"
# Client URLs this member advertises to the cluster
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.247.30:2379"
# All members of the cluster, each in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma-separated
ETCD_INITIAL_CLUSTER="etcd00=https://192.168.247.10:2380,etcd01=https://192.168.247.20:2380,etcd02=https://192.168.247.30:2380,etcd03=https://192.168.247.40:2380"
# Token for the initial cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/opt/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/opt/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/opt/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/opt/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/opt/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
Configuration file on container1:
[root@container1 cfg]# cat etcd.conf 
#[Member]
# Name of this etcd node
ETCD_NAME="etcd03"
# Data directory
ETCD_DATA_DIR="/opt/work/_data/etcd"
# URLs this node listens on for peer traffic; multiple URLs are comma-separated, each in the form scheme://IP:PORT, where scheme is http or https
ETCD_LISTEN_PEER_URLS="https://192.168.247.40:2380"
# URLs this node listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://192.168.247.40:2379"

#[Clustering]
# Peer URLs this member advertises to the rest of the cluster; cluster data is transported over these addresses, so they must be reachable from every other member
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.247.40:2380"
# Client URLs this member advertises to the cluster
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.247.40:2379"
# All members of the cluster, each in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma-separated
ETCD_INITIAL_CLUSTER="etcd00=https://192.168.247.10:2380,etcd01=https://192.168.247.20:2380,etcd02=https://192.168.247.30:2380,etcd03=https://192.168.247.40:2380"
# Token for the initial cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/opt/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/opt/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/opt/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/opt/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/opt/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
5.2.5 Starting the etcd Service (run on every node)
[root@container1]# systemctl daemon-reload 
[root@container1]# systemctl enable etcd
[root@container1]# systemctl start etcd
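systemctl start etcd blocks on the first node until enough peers are up to form a quorum. A sketch for bringing the whole cluster up from master in one pass (--no-block queues each start job without waiting for it):
[root@master ~]# for N in master master1 container container1; do ssh root@$N "systemctl daemon-reload && systemctl enable etcd && systemctl start --no-block etcd"; done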
5.2.6 Checking the Health of the ETCD Cluster
[root@master ~]# /opt/work/_app/k8s/etcd/bin/etcdctl --ca-file=/opt/work/_app/k8s/etcd/ssl/ca.pem \
--cert-file=/opt/work/_app/k8s/etcd/ssl/server.pem \
--key-file=/opt/work/_app/k8s/etcd/ssl/server-key.pem cluster-health
5.2.7 Checking the etcd Membership
[root@master cfg]# /opt/work/_app/k8s/etcd/bin/etcdctl \
        --ca-file=/opt/work/_app/k8s/etcd/ssl/ca.pem \
--cert-file=/opt/work/_app/k8s/etcd/ssl/server.pem \
--key-file=/opt/work/_app/k8s/etcd/ssl/server-key.pem  member list

184127ee6bdbd442: name=etcd00 peerURLs=https://192.168.247.10:2380 clientURLs=https://192.168.247.10:2379 isLeader=false
2240f3e2e5c47829: name=etcd03 peerURLs=https://192.168.247.40:2380 clientURLs=https://192.168.247.40:2379 isLeader=false
4853548e286ba12e: name=etcd01 peerURLs=https://192.168.247.20:2380 clientURLs=https://192.168.247.20:2379 isLeader=true
e278b50b03c3fd33: name=etcd02 peerURLs=https://192.168.247.30:2380 clientURLs=https://192.168.247.30:2379 isLeader=false

6. Installing the Flannel Network Component

6.1 Flannel Network Overview
  • Flannel is essentially an overlay network: it wraps TCP packets inside another network packet for routing, forwarding, and communication. It currently supports UDP, VxLAN, AWS VPC, GCE routing, and other forwarding backends. In Kubernetes, Flannel is used to set up a layer-3 network fabric.
  • Flannel provides a layer-3 IPv4 network between the nodes of the cluster. It does not control how containers attach to the host network, only how traffic is carried between hosts. It does, however, provide a CNI plugin for Kubernetes and guidance for integrating with Docker.
  • Without the flanneld network, pods on different nodes cannot communicate; only pods on the same node can.
  • When the flanneld service starts, it mainly does the following: it fetches the network configuration from etcd, carves out a subnet and registers it in etcd, and records the subnet information in /run/flannel/subnet.env.
6.2 Writing the Network Range into the ETCD Cluster
[root@master cfg]# /opt/work/_app/k8s/etcd/bin/etcdctl \
--ca-file=/opt/work/_app/k8s/etcd/ssl/ca.pem \
--cert-file=/opt/work/_app/k8s/etcd/ssl/server.pem \
--key-file=/opt/work/_app/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.247.10:2379,\
https://192.168.247.20:2379,\
https://192.168.247.30:2379,\
https://192.168.247.40:2379"\
set /coreos.com/network/config  '{ "Network": "172.24.0.0/16", "Backend": {"Type": "vxlan"}}'
# Output:
{ "Network": "172.24.0.0/16", "Backend": {"Type": "vxlan"}}
6.3 Downloading and Installing Flannel
[root@master cfg]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@master cfg]# tar -xvf flannel-v0.11.0-linux-amd64.tar.gz
[root@master cfg]# mv flanneld mk-docker-opts.sh /opt/work/_app/k8s/kubernetes/bin/
6.4 Configuring Flannel
# Create the /opt/work/_app/k8s/kubernetes/cfg/flanneld file and save it with the following content:
[root@master ~]# cd /opt/work/_app/k8s/kubernetes/cfg/

[root@master cfg]# cat flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.247.10:2379,\
https://192.168.247.20:2379,\
https://192.168.247.30:2379,\
https://192.168.247.40:2379\
 -etcd-cafile=/opt/work/_app/k8s/etcd/ssl/ca.pem \
 -etcd-certfile=/opt/work/_app/k8s/etcd/ssl/server.pem \
 -etcd-keyfile=/opt/work/_app/k8s/etcd/ssl/server-key.pem"
6.5 Creating the Flannel systemd Unit File
# Create the /usr/lib/systemd/system/flanneld.service file and save it with the following content:
[root@master cfg]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/work/_app/k8s/kubernetes/cfg/flanneld
ExecStart=/opt/work/_app/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/work/_app/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

The mk-docker-opts.sh script writes the pod subnet assigned to flanneld into /run/flannel/subnet.env; when Docker starts later, it uses the environment variables in this file to configure the docker0 bridge. Flanneld communicates with other nodes over the interface that carries the system default route; on nodes with several network interfaces (e.g. private and public), the -iface flag selects the interface to use, as sketched below.
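For example, a flanneld options file pinned to a specific interface could end with an -iface entry (the interface name ens33 here is an assumption taken from section 1.3):

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.247.10:2379,https://192.168.247.20:2379 \
 -etcd-cafile=/opt/work/_app/k8s/etcd/ssl/ca.pem \
 -etcd-certfile=/opt/work/_app/k8s/etcd/ssl/server.pem \
 -etcd-keyfile=/opt/work/_app/k8s/etcd/ssl/server-key.pem \
 -iface=ens33"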

6.6 Configuring Docker to Start on the Assigned Subnet
# Edit the /usr/lib/systemd/system/docker.service file so that it reads:
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

[Install]
WantedBy=multi-user.target
6.7 Copying the Flannel Files to the Other Machines
# Copy the Flannel binary, the Flannel configuration, the Flannel unit file, and the Docker unit file
[root@master cfg]# for N in master master1 container container1; do scp -r /opt/work/_app/k8s/kubernetes/* root@$N:/opt/work/_app/k8s/kubernetes/; done;

flanneld                                                     100%   34MB  70.1MB/s   00:00    
mk-docker-opts.sh                                            100% 2139     4.4MB/s   00:00    
flanneld                                                     100%  306   827.1KB/s   00:00    
flanneld                                                     100%   34MB  57.2MB/s   00:00    
mk-docker-opts.sh                                            100% 2139     3.2MB/s   00:00    
flanneld                                                     100%  306   524.5KB/s   00:00    
flanneld                                                     100%   34MB  43.0MB/s   00:00    
mk-docker-opts.sh                                            100% 2139     2.6MB/s   00:00    
flanneld                                                     100%  306   323.4KB/s   00:00    
flanneld                                                     100%   34MB  70.1MB/s   00:00    
mk-docker-opts.sh                                            100% 2139     2.8MB/s   00:00    
flanneld                                                     100%  306   511.9KB/s   00:00    

[root@master cfg]# for N in master master1 container container1; do scp -r /usr/lib/systemd/system/docker.service root@$N:/usr/lib/systemd/system/docker.service; done;

docker.service                                               100% 1779     3.1MB/s   00:00    
docker.service                                               100% 1779     1.6MB/s   00:00    
docker.service                                               100% 1779     1.7MB/s   00:00    
docker.service                                               100% 1779     1.7MB/s   00:00    

[root@master cfg]# for N in master master1 container container1; do scp -r /usr/lib/systemd/system/flanneld.service root@$N:/usr/lib/systemd/system/flanneld.service; done;

flanneld.service                                             100%  461   899.7KB/s   00:00    
flanneld.service                                             100%  461   497.4KB/s   00:00    
flanneld.service                                             100%  461   488.1KB/s   00:00    
flanneld.service                                             100%  461   455.4KB/s   00:00  

6.8 Starting the Services
[root@master ~]# systemctl daemon-reload 
[root@master ~]# systemctl stop docker
[root@master ~]# systemctl enable flanneld
[root@master ~]# systemctl start flanneld
[root@master ~]# systemctl start docker

6.9 Checking the docker0 Bridge Configured by Flannel

[root@master ~]# ip addr show 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:1e:bc:d9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.247.10/24 brd 192.168.247.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::197e:b962:5bfe:c1d8/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::646f:92f3:43e8:81/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::f496:f965:da96:6be/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:0b:2d:1f:c2 brd ff:ff:ff:ff:ff:ff
    inet 172.24.97.1/24 brd 172.24.97.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether ba:07:5d:cf:58:a5 brd ff:ff:ff:ff:ff:ff
    inet 172.24.97.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::b807:5dff:fecf:58a5/64 scope link 
       valid_lft forever preferred_lft forever

6.10 Verifying the Flannel Service

[root@master ~]# for N in master master1 container container1; do ssh root@$N cat /run/flannel/subnet.env ; done;
DOCKER_OPT_BIP="--bip=172.24.97.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.24.97.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=172.24.80.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.24.80.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=172.24.23.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.24.23.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=172.24.75.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.24.75.1/24 --ip-masq=false --mtu=1450"

7. Installing Kubernetes

7.1 Creating the Certificates Required by Kubernetes
7.1.1 Generating the Kubernetes CA Signing Policy (ca-config.json)
[root@master ~]# cd /opt/work/_app/k8s/kubernetes/ssl
[root@master ssl]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "server": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth"
        ],
        "expiry": "8760h"
      },
      "client": {
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
7.1.2 Creating the Kubernetes CA CSR File (ca-csr.json)
[root@master ssl]# cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
7.1.3 Initializing the Kubernetes CA Certificate
[root@master ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/07/14 07:09:24 [INFO] generating a new CA key and certificate from CSR
2019/07/14 07:09:24 [INFO] generate received request
2019/07/14 07:09:24 [INFO] received CSR
2019/07/14 07:09:24 [INFO] generating key: rsa-2048
2019/07/14 07:09:25 [INFO] encoded CSR
2019/07/14 07:09:25 [INFO] signed certificate with serial number 626006269329241912691725314524571328388312272773

[root@master ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
7.1.4 Creating the Kube API Server CSR File
[root@master ssl]# cat << EOF | tee kube-apiserver-server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.247.50",
      "192.168.247.10",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "API Server"
        }
    ]
}
EOF
7.2 Generating the kube-apiserver Certificate
[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kube-apiserver-server-csr.json | cfssljson -bare kube-apiserver-server
2019/07/14 07:26:31 [INFO] generate received request
2019/07/14 07:26:31 [INFO] received CSR
2019/07/14 07:26:31 [INFO] generating key: rsa-2048
2019/07/14 07:26:32 [INFO] encoded CSR
2019/07/14 07:26:32 [INFO] signed certificate with serial number 13792734868306131069003908051843223211580375965
2019/07/14 07:26:32 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@master ssl]# ls
ca-config.json  ca-key.pem                 kube-apiserver-server-csr.json
ca.csr          ca.pem                     kube-apiserver-server-key.pem
ca-csr.json     kube-apiserver-server.csr  kube-apiserver-server.pem
7.3 Generating the kubelet Client CSR File and Certificate
7.3.1 Creating the CSR File
[root@master ssl]# cat << EOF | tee kubelet-client-csr.json
{
  "CN": "kubelet",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Kubelet",
      "ST": "Beijing"
    }
  ]
}
EOF
7.3.2 Generating the kubelet Client Certificate
[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubelet-client-csr.json | cfssljson -bare kubelet-client
2019/07/14 07:33:40 [INFO] generate received request
2019/07/14 07:33:40 [INFO] received CSR
2019/07/14 07:33:40 [INFO] generating key: rsa-2048
2019/07/14 07:33:40 [INFO] encoded CSR
2019/07/14 07:33:40 [INFO] signed certificate with serial number 362600987090687340725325933964979674502154242369
2019/07/14 07:33:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master ssl]# ls
ca-config.json  ca.pem                          kube-apiserver-server.pem  kubelet-client.pem
ca.csr          kube-apiserver-server.csr       kubelet-client.csr
ca-csr.json     kube-apiserver-server-csr.json  kubelet-client-csr.json
ca-key.pem      kube-apiserver-server-key.pem   kubelet-client-key.pem
7.4 Generating the Kube-Proxy CSR File and Certificate
7.4.1 Creating the CSR File
[root@master ssl]# cat << EOF | tee kube-proxy-client-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "System",
      "ST": "Beijing"
    }
  ]
}
EOF
7.4.2 Generating the Kube-Proxy Certificate
[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-client-csr.json | cfssljson -bare kube-proxy-client
2019/07/14 07:38:58 [INFO] generate received request
2019/07/14 07:38:58 [INFO] received CSR
2019/07/14 07:38:58 [INFO] generating key: rsa-2048
2019/07/14 07:38:59 [INFO] encoded CSR
2019/07/14 07:38:59 [INFO] signed certificate with serial number 144984639933590736795720026051220154202606354735
2019/07/14 07:38:59 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
7.5 Generating the kubectl Admin CSR File and Certificate
7.5.1 Creating the kubectl Admin CSR File
[root@master ssl]# cat << EOF | tee kubernetes-admin-user.csr.json
{
  "CN": "admin",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Cluster Admins",
      "ST": "Beijing"
    }
  ]
}
EOF
7.5.2 Generating the kubectl Admin Certificate
[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubernetes-admin-user.csr.json | cfssljson -bare kubernetes-admin-user
2019/07/14 07:42:00 [INFO] generate received request
2019/07/14 07:42:00 [INFO] received CSR
2019/07/14 07:42:00 [INFO] generating key: rsa-2048
2019/07/14 07:42:00 [INFO] encoded CSR
2019/07/14 07:42:00 [INFO] signed certificate with serial number 327674999940182638865996167064222609995103768161
2019/07/14 07:42:00 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@master ssl]# ls
ca-config.json                  kube-apiserver-server-key.pem  kube-proxy-client-csr.json
ca.csr                          kube-apiserver-server.pem      kube-proxy-client-key.pem
ca-csr.json                     kubelet-client.csr             kube-proxy-client.pem
ca-key.pem                      kubelet-client-csr.json        kubernetes-admin-user.csr
ca.pem                          kubelet-client-key.pem         kubernetes-admin-user.csr.json
kube-apiserver-server.csr       kubelet-client.pem             kubernetes-admin-user-key.pem
kube-apiserver-server-csr.json  kube-proxy-client.csr          kubernetes-admin-user.pem

7.6 Copying the Certificates to the Kubernetes Nodes
[root@master ssl]# for N in master master1 container container1; do scp -r /opt/work/_app/k8s/kubernetes/ssl/*.pem root@$N:/opt/work/_app/k8s/kubernetes/ssl/; done;
7.7 Deploying the Kubernetes Master Node and Joining It to the Cluster

The Kubernetes master node runs the following components:

  • APIServer
      The APIServer exposes the RESTful Kubernetes API and is the unified entry point for management commands; every create, delete, update, or get of a resource is handled by the APIServer and then persisted to etcd. kubectl, the client tool shipped with Kubernetes, is internally a wrapper around this API and talks directly to the APIServer.
  • Scheduler
      The scheduler places pods onto suitable nodes. Treated as a black box, its input is a pod plus a list of nodes, and its output is a binding of that pod to one node. Kubernetes ships scheduling algorithms but keeps the interface open, so users can define algorithms of their own.
  • Controller manager
      If the APIServer does the front-office work, the controller manager does the back office. Every resource has a corresponding controller, and the controller manager runs them all. For example, once a pod created through the APIServer has been stored, the APIServer's job is done; the controllers take it from there.
  • ETCD
      etcd is a highly available key-value store that Kubernetes uses to persist the state of every resource behind its RESTful API.
  • Flannel
      By default there is no flanneld network, so pods on different nodes cannot communicate, only pods within one node. Flannel fetches the network configuration from etcd, carves out subnets, and registers the subnet information in etcd.
  • kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the others block.
7.8 Downloading and Installing the Kubernetes Server Binaries
[root@master ~]# wget https://dl.k8s.io/v1.15.0/kubernetes-server-linux-amd64.tar.gz
[root@master ~]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@master ~]# cd kubernetes/server/bin/
[root@master bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl kubelet kube-proxy /opt/work/_app/k8s/kubernetes/bin/
7.9 Copying kubelet, kubectl, and kube-proxy from the Master to the Other Nodes
[root@master ~]# cd kubernetes/server/bin/
[root@master bin]# for N in master master1 container container1; do scp -r kubelet kubectl kube-proxy root@$N:/opt/work/_app/k8s/kubernetes/bin/; done;
7.10 Create the /opt/work/_app/k8s/kubernetes/cfg/token-auth-file file and save it with the following content:
[root@master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
964dd23ca4ba3ead080a920c1c63a13b

[root@master ~]# cat /opt/work/_app/k8s/kubernetes/cfg/token-auth-file
964dd23ca4ba3ead080a920c1c63a13b,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
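Each line of the token file has the form token,user,uid,"group". The file can also be produced in one step (same path as above); note that whatever token lands here must match the BOOTSTRAP_TOKEN used in the env.sh script of section 7.18.1:
[root@master ~]# echo "$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/work/_app/k8s/kubernetes/cfg/token-auth-file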
7.11 Creating the Apiserver Configuration File
[root@master ~]# cat  /opt/work/_app/k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.247.10:2379,https://192.168.247.20:2379,https://192.168.247.30:2379,https://192.168.247.40:2379 \
--bind-address=192.168.247.10 \
--secure-port=6443 \
--advertise-address=192.168.247.10 \
--allow-privileged=true \
--service-cluster-ip-range=172.24.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/work/_app/k8s/kubernetes/cfg/token-auth-file \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/work/_app/k8s/kubernetes/ssl/kube-apiserver-server.pem  \
--tls-private-key-file=/opt/work/_app/k8s/kubernetes/ssl/kube-apiserver-server-key.pem \
--client-ca-file=/opt/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/work/_app/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/work/_app/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/opt/work/_app/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/opt/work/_app/k8s/etcd/ssl/server-key.pem"
7.12 Creating the Apiserver systemd Unit File
7.12.1 Create the /usr/lib/systemd/system/kube-apiserver.service file and save it with the following content:
[root@master ~]# cat  /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/work/_app/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/opt/work/_app/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
7.12.2 Starting the Kube Apiserver Service
[root@master ~]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
7.12.3 Checking That the Apiserver Is Running
[root@master ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-07-14 09:41:19 CST; 1min 58s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 48121 (kube-apiserver)
    Tasks: 7
   Memory: 214.0M
   CGroup: /system.slice/kube-apiserver.service
           └─48121 /opt/work/_app/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4...

Jul 14 09:42:55 master.example.com kube-apiserver[48121]: I0714 09:42:55.744769   48121 av...h
Jul 14 09:42:55 master.example.com kube-apiserver[48121]: I0714 09:42:55.744778   48121 av...o
Jul 14 09:42:55 master.example.com kube-apiserver[48121]: I0714 09:42:55.744823   48121 av...s
Jul 14 09:42:55 master.example.com kube-apiserver[48121]: I0714 09:42:55.744835   48121 av...o
7.13 Deploying the Scheduler
7.13.1 Create the /opt/work/_app/k8s/kubernetes/cfg/kube-scheduler file and save it with the following content:
[root@master ~]# cat  /opt/work/_app/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
7.14 Creating the Kube-scheduler systemd Unit File
7.14.1 Create the /usr/lib/systemd/system/kube-scheduler.service file and save it with the following content:
[root@master ~]# cat  /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/work/_app/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/opt/work/_app/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
7.15 Starting the Kube-scheduler Service
[root@master ~]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
7.16 Checking That Kube-scheduler Is Running
[root@master ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-07-14 09:47:36 CST; 1min 7s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 48630 (kube-scheduler)
    Tasks: 7
   Memory: 8.0M
   CGroup: /system.slice/kube-scheduler.service
           └─48630 /opt/work/_app/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4...

Jul 14 09:47:37 master.example.com kube-scheduler[48630]: I0714 09:47:37.531217   48630 sh...d
Jul 14 09:47:37 master.example.com kube-scheduler[48630]: I0714 09:47:37.631737   48630 sh...d
Jul 14 09:47:38 master.example.com kube-scheduler[48630]: I0714 09:47:38.153387   48630 le...r
Jul 14 09:47:38 master.example.com kube-scheduler[48630]: I0714 09:47:38.254698   48630 sh...d
Hint: Some lines were ellipsized, use -l to show in full.
7.17 Deploying the Kube-Controller-Manager Component
7.17.1 Creating the kube-controller-manager Configuration File
# Create the /opt/work/_app/k8s/kubernetes/cfg/kube-controller-manager file and save it with the following content:
[root@master ~]# cat  /opt/work/_app/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=172.24.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/work/_app/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/work/_app/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/work/_app/k8s/kubernetes/ssl/ca-key.pem"
7.17.2 Creating the kube-controller-manager systemd Unit File
# Create the /usr/lib/systemd/system/kube-controller-manager.service file and save it with the following content:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/work/_app/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/work/_app/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
7.17.3 Starting the kube-controller-manager Service
[root@master ~]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
7.17.4 Checking That kube-controller-manager Is Running
[root@master ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-07-14 09:57:23 CST; 1min 5s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 49393 (kube-controller)
    Tasks: 6
   Memory: 29.2M
   CGroup: /system.slice/kube-controller-manager.service
           └─49393 /opt/work/_app/k8s/kubernetes/bin/kube-controller-manager --logtostderr=t...
Jul 14 09:58:27 master.example.com kube-controller-manager[49393]: I0714 09:58:27.420222   ...
Hint: Some lines were ellipsized, use -l to show in full.
7.17.5 Verifying the API Server
Add kubectl to the $PATH variable:
[root@master ~]# echo "PATH=/opt/work/_app/k8s/kubernetes/bin:$PATH:$HOME/bin" >> /etc/profile
[root@master ~]# source /etc/profile
Check the component and node status:
[root@master ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-3               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"}   
7.18 Deploying the Kubelet
7.18.1 Creating the bootstrap.kubeconfig and kube-proxy.kubeconfig Files
# Create the /opt/work/_app/k8s/kubernetes/cfg/env.sh file and save it with the following content:
[root@master ~]# cd /opt/work/_app/k8s/kubernetes/cfg/
[root@master cfg]# cat env.sh  
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=964dd23ca4ba3ead080a920c1c63a13b
KUBE_APISERVER="https://192.168.247.10:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#---------------------------------------------------------------------
# Create the kube-proxy kubeconfig
 
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/work/_app/k8s/kubernetes/ssl/kube-proxy-client.pem \
  --client-key=/opt/work/_app/k8s/kubernetes/ssl/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
7.18.2 Running the Script
[root@master ~]# cd /opt/work/_app/k8s/kubernetes/cfg
[root@master cfg]# sh env.sh 
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master cfg]# ls
bootstrap.kubeconfig  flanneld        kube-controller-manager  kube-scheduler
env.sh                kube-apiserver    kube-proxy.kubeconfig    token-auth-file
7.18.3 Copying bootstrap.kubeconfig and kube-proxy.kubeconfig to the Other Nodes
[root@master cfg]# for N in master master1 container container1; do scp -r kube-proxy.kubeconfig bootstrap.kubeconfig root@$N:/opt/work/_app/k8s/kubernetes/cfg/; done;
kube-proxy.kubeconfig                                        100% 6298     9.1MB/s   00:00    
bootstrap.kubeconfig                                         100% 2180     4.8MB/s   00:00    
kube-proxy.kubeconfig                                        100% 6298     5.5MB/s   00:00    
bootstrap.kubeconfig                                         100% 2180     1.9MB/s   00:00    
kube-proxy.kubeconfig                                        100% 6298     4.9MB/s   00:00    
bootstrap.kubeconfig                                         100% 2180     2.3MB/s   00:00    
kube-proxy.kubeconfig                                        100% 6298     4.6MB/s   00:00    
bootstrap.kubeconfig                                         100% 2180     1.5MB/s   00:00
7.18.4 Creating the kubelet Configuration File
# Create the /opt/work/_app/k8s/kubernetes/cfg/kubelet.config parameter file and save it with the following content:
[root@master cfg]# cat  /opt/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.247.10
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["172.24.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
7.18.5 Create the /opt/work/_app/k8s/kubernetes/cfg/kubelet startup options file and save it with the following content:
[root@master cfg]# cat /opt/work/_app/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.247.10 \
--kubeconfig=/opt/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/work/_app/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/work/_app/k8s/kubernetes/ssl_cert \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
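Both kubelet.config and the kubelet options file are node-specific: address and --hostname-override must carry each node's own IP. A sketch for adapting the copies on master1 (a plain sed rewrite over both files; repeat with the matching IP on each node):
[root@master1 ~]# sed -i 's/192.168.247.10/192.168.247.20/g' /opt/work/_app/k8s/kubernetes/cfg/kubelet.config /opt/work/_app/k8s/kubernetes/cfg/kubelet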

When the kubelet starts, if the file given by --kubeconfig does not exist, the bootstrap kubeconfig given by --bootstrap-kubeconfig is used to request a client certificate from the API server. Once the kubelet's certificate request is approved, the key and certificate it receives are placed in the --cert-dir directory, and a kubeconfig referencing them is written to the --kubeconfig path.

7.18.6 Binding the kubelet-bootstrap User to the System Cluster Role
[root@master cfg]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
7.18.7 Creating the kubelet systemd Unit File
# Create /usr/lib/systemd/system/kubelet.service and save it with the following content:
[root@master cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/opt/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
7.18.8 Starting the kubelet Service
[root@master cfg]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
7.18.9 Checking the kubelet Service Status
[root@master cfg]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-07-14 10:26:10 CST; 50s ago
 Main PID: 51654 (kubelet)
    Tasks: 7
   Memory: 12.9M
   CGroup: /system.slice/kubelet.service
           └─51654 /opt/work/_app/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --host...

Jul 14 10:26:11 master.example.com kubelet[51654]: I0714 10:26:11.281052   51654 mount_li...md
Jul 14 10:26:11 master.example.com kubelet[51654]: I0714 10:26:11.358816   51654 reflecto...46
Jul 14 10:26:11 master.example.com kubelet[51654]: I0714 10:26:11.358844   51654 reflecto...46
Hint: Some lines were ellipsized, use -l to show in full.
7.19 Approving the Master to Join the Cluster
A CSR can be approved manually, outside the built-in approval flow. An administrator lists pending requests with kubectl get csr, shows the details of one with kubectl describe csr <name>, and approves or rejects it with kubectl certificate approve <name> or kubectl certificate deny <name>.
7.19.1 Listing the CSRs
[root@master cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-7l994WGnMpz-zk9CbRV56NiUhVQj1ybwy48SdLnSvMU   2m25s   kubelet-bootstrap   Pending
7.19.2 Approving the Join
[root@master cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-7l994WGnMpz-zk9CbRV56NiUhVQj1ybwy48SdLnSvMU   3m47s   kubelet-bootstrap   Pending
[root@master cfg]# kubectl certificate approve node-csr-7l994WGnMpz-zk9CbRV56NiUhVQj1ybwy48SdLnSvMU
certificatesigningrequest.certificates.k8s.io/node-csr-7l994WGnMpz-zk9CbRV56NiUhVQj1ybwy48SdLnSvMU approved
7.19.3 Verifying That the Master Joined the Cluster
[root@master cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-7l994WGnMpz-zk9CbRV56NiUhVQj1ybwy48SdLnSvMU   4m58s   kubelet-bootstrap   Approved,Issued
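Once the CSR is approved, the kubelet receives its certificate and the node registers with the cluster; it should now show up, named after its --hostname-override, in:
[root@master cfg]# kubectl get nodes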