[toc]
2 Introduction to Kubernetes
2.1 Kubernetes Overview
......
2.2 Main Features of Kubernetes
- Data volumes: containers within a Pod share data through volumes;
- Application health checks: a containerized service's process may hang and stop handling requests; health-check policies can be configured to keep applications robust;
- Replication of application instances: a controller maintains the Pod replica count, ensuring that a Pod, or a group of Pods of the same kind, is always available;
- Elastic scaling: automatically scales the number of Pod replicas according to configured metrics such as CPU utilization (see the sketch after this list);
- Service discovery: programs inside containers can discover a Pod's access endpoint via environment variables or the DNS add-on;
- Load balancing: a group of Pod replicas is assigned a private cluster IP, and requests to it are load-balanced to the backend containers; other Pods inside the cluster can reach the application through this ClusterIP;
- Rolling updates: a service is updated without interruption, one Pod at a time, instead of deleting the whole service at once;
- Service orchestration: services are deployed from declarative files, making application deployment more efficient;
- Resource monitoring: the Node components integrate the cAdvisor resource collector; Heapster can aggregate resource data for the whole cluster, store it in the InfluxDB time-series database, and Grafana can then visualize it;
- Authentication and authorization: supports role-based access control (RBAC) and other authentication/authorization policies;
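A hedged illustration of the elastic-scaling and rolling-update features above, using `kubectl` against a Deployment named `nginx` (the name is hypothetical here; a similar Deployment is created in section 3.10):

```bash
# Scale between 2 and 5 replicas, targeting 80% CPU utilization.
kubectl autoscale deployment nginx --min=2 --max=5 --cpu-percent=80
kubectl get hpa    # inspect the resulting HorizontalPodAutoscaler
# A rolling update: Pods are replaced one at a time, not all at once.
kubectl set image deployment/nginx nginx=nginx:1.17
kubectl rollout status deployment/nginx
```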
2.3 Basic Object Concepts
Basic objects
Pod
A Pod is the smallest deployable unit. A Pod consists of one or more containers; the containers in a Pod share storage and network and run on the same Docker host;
Service
A Service is an abstraction of an application service; it defines a logical set of Pods and a policy for accessing them;
A Service proxies its set of Pods behind a single access point: it is assigned a cluster IP, and requests to that IP are load-balanced to the containers in the backend Pods;
A Service selects the group of Pods it serves via a Label Selector;
Volume
A data volume, used to share data among the containers in a Pod;
Namespace
Namespaces logically partition objects; different projects, users, and so on can be separated for management, and control policies can be applied to them, enabling multi-tenancy;
A namespace is also called a virtual cluster: one Namespace behaves like a cluster of its own;
Label
Labels are used to distinguish objects (such as Pods and Services) and are stored as key/value pairs; each object can carry multiple labels, and objects are associated through their labels;
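A minimal sketch of how labels and selectors interact (the Pod name `mypod` and the label `app=nginx` are hypothetical):

```bash
kubectl label pod mypod app=nginx   # attach a label to an existing Pod
kubectl get pods -l app=nginx       # select objects by their labels
```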
Higher-level abstractions built on the basic objects
ReplicaSet
The next generation of the Replication Controller;
Ensures that the specified number of Pod replicas exists at any given time, and provides declarative updates and other features;
The only difference between RC and RS is label selector support: RS supports the newer set-based selectors, while RC only supports equality-based selectors (see the examples below);
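The two selector styles side by side (the label values are illustrative):

```bash
# Equality-based selector (supported by both RC and RS):
kubectl get pods -l environment=production
# Set-based selector (ReplicaSet and other newer API objects only):
kubectl get pods -l 'environment in (production,qa)'
```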
Deployment
A Deployment is a higher-level API object that manages ReplicaSets and Pods and provides declarative updates and other features;
The official recommendation is to manage ReplicaSets through Deployments rather than using ReplicaSets directly, which means you may never need to manipulate ReplicaSet objects yourself;
StatefulSet
A StatefulSet suits persistent (stateful) applications: unique network identifiers (IPs), persistent storage, and ordered deployment, scaling, deletion, and rolling updates;
DaemonSet
A DaemonSet ensures that all (or some) nodes run a copy of the same Pod;
When a node joins the Kubernetes cluster, the Pod is scheduled onto that node; when a node is removed from the cluster, that DaemonSet Pod is deleted;
Deleting a DaemonSet cleans up all Pods it created;
Job
A one-off task: once it completes, the Pod is destroyed and no new container is started;
Tasks can also be run on a schedule (CronJob); see the sketch below;
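A sketch of both variants (the names, images, and schedule are illustrative):

```bash
# One-off Job: runs to completion, after which the Pod is not restarted.
kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(100)'
# Scheduled variant (CronJob), here running every five minutes.
kubectl create cronjob hello --image=busybox --schedule="*/5 * * * *" -- echo hello
```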
2.4 System Architecture and Component Functions
2.4.1 System architecture
2.4.2 Component functions
Master components
kube-apiserver
The Kubernetes API: the unified entry point of the cluster and the coordinator among components, exposing its services as an HTTP API; all create/update/delete/query and watch operations on object resources are handled by the APIServer and then persisted to Etcd;
kube-controller-manager
Handles routine background tasks in the cluster; each resource has a corresponding controller, and the ControllerManager is responsible for managing these controllers;
kube-scheduler
Selects a Node for each newly created Pod according to the scheduling algorithm;
Node components
kubelet
The kubelet is the Master's agent on each Node; it manages the lifecycle of the containers running on the local machine, for example creating containers, mounting data volumes for Pods,
downloading secrets, and reporting container and node status;
The kubelet turns each Pod into a set of containers;
kube-proxy
Implements the Pod network proxy on each Node, maintaining network rules and performing layer-4 load balancing;
docker or rocket/rkt
Runs the containers;
Third-party services
etcd
A distributed key-value store;
Holds the cluster state, such as Pod and Service object data;
3 Cluster Deployment
3.1 Environment Planning
| Software | Version |
|---|---|
| Linux OS | CentOS 7.4.1708 |
| Docker | 19.03.9-ce |
| Etcd | 3.3.18 |
| Flanneld | 0.11.0 |
| Kubernetes | 1.17.3 |
| Role | IP | Components | Recommended spec |
|---|---|---|---|
| master | --- | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, flanneld | 2+ CPU cores, 2 GB+ RAM, swap disabled |
| node01 | --- | kubelet, kube-proxy, docker, etcd, flanneld | 2+ CPU cores, 2 GB+ RAM, swap disabled |
| node02 | --- | kubelet, kube-proxy, docker, etcd, flanneld | 2+ CPU cores, 2 GB+ RAM, swap disabled |
3.2 Installing Docker
wget https://mirrors.aliyun.com/repo/Centos-7.repo -P /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/
yum -y install docker-ce
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://rfj1yucr.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl enable docker
Start docker only after flannel has been set up (see section 3.5);
3.3 Self-Signed TLS Certificates
| Component | Certificates used |
|---|---|
| etcd | ca.pem, server.pem, server-key.pem |
| flannel | ca.pem, server.pem, server-key.pem |
| kube-apiserver | ca.pem, server.pem, server-key.pem |
| kubelet | ca.pem, ca-key.pem |
| kube-proxy | ca.pem, kube-proxy.pem, kube-proxy-key.pem |
| kubectl | ca.pem, ca-key.pem |
Install the certificate-generation tools
wget http://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget http://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget http://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv -v cfssl_linux-amd64 /usr/local/bin/cfssl
mv -v cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv -v cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
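A quick sanity check that the toolchain is on PATH:

```bash
cfssl version   # prints the cfssl release and runtime details
```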
Generate the certificates
Run the following script to generate all the required certificates:
#!/bin/bash
# certificate.sh
mkdir ssl && cd ssl
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
# --------------------------------------------------
# Generate ca.pem and ca-key.pem
# --------------------------------------------------
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# --------------------------------------------------
# Generate server.pem and server-key.pem
# Configuration notes:
# Cluster node IPs: "192.168.30.130", "192.168.30.131", "192.168.30.132"
# kubernetes Service IP: "10.10.10.1", normally the first IP of the
#   service-cluster-ip-range passed to kube-apiserver (10.10.10.0/24 here)
# Cluster DNS names: "kubernetes", "kubernetes.default", ...
# --------------------------------------------------
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.30.130",
"192.168.30.131",
"192.168.30.132",
"10.1.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# --------------------------------------------------
# Generate admin.pem and admin-key.pem
# Used by cluster administrators to access and manage the cluster
# --------------------------------------------------
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "system:master",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
# --------------------------------------------------
# Generate kube-proxy.pem and kube-proxy-key.pem
# Used by kube-proxy to connect to the apiserver
# --------------------------------------------------
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# Remove the .json and .csr files that are no longer needed
ls | grep -v pem | xargs -i rm -v {}
Copy the certificates to /opt/kubernetes/ssl on the corresponding servers according to the table above;
Generation complete:
[root@master ssl]# ll
total 32
-rw------- 1 root root 1679 Feb 16 10:27 admin-key.pem
-rw-r--r-- 1 root root 1399 Feb 16 10:27 admin.pem
-rw------- 1 root root 1675 Feb 16 09:45 ca-key.pem
-rw-r--r-- 1 root root 1359 Feb 16 09:45 ca.pem
-rw------- 1 root root 1679 Feb 16 10:29 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Feb 16 10:29 kube-proxy.pem
-rw------- 1 root root 1679 Feb 16 10:20 server-key.pem
-rw-r--r-- 1 root root 1627 Feb 16 10:20 server.pem
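To inspect a generated certificate and distribute the files as described above, something like the following works (it assumes passwordless SSH to node01/node02 and that /opt/kubernetes/ssl already exists on every machine):

```bash
cfssl-certinfo -cert server.pem     # show subject, SANs, and validity
cp -v *.pem /opt/kubernetes/ssl/    # the master keeps a full copy
for host in node01 node02; do
  scp ca*.pem server*.pem kube-proxy*.pem ${host}:/opt/kubernetes/ssl/
done
```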
3.4 Deploying the Etcd Cluster
etcd download: https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
Deploy etcd:
On the Master node:
mkdir -pv /opt/kubernetes/{bin,cfg,ssl}
tar -xf etcd-v3.3.18-linux-amd64.tar.gz
cd etcd-v3.3.18-linux-amd64
cp -v etcd etcdctl /opt/kubernetes/bin/
#!/bin/bash
# etcd.sh
ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"192.168.30.130"}
ETCD_CLUSTER=${3:-"etcd01=https://192.168.30.130:2380,etcd02=https://192.168.30.131:2380,etcd03=https://192.168.30.132:2380"}
WORK_DIR=/opt/kubernetes
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=\${ETCD_INITIAL_CLUSTER_STATE} \\
--cert-file=${WORK_DIR}/ssl/server.pem \\
--key-file=${WORK_DIR}/ssl/server-key.pem \\
--peer-cert-file=${WORK_DIR}/ssl/server.pem \\
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \\
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \\
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
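etcd.sh takes the member name, local IP, and cluster list as positional parameters (the defaults at the top of the script match the master). On the other two members it would be invoked roughly like this:

```bash
bash etcd.sh etcd02 192.168.30.131   # on node01
bash etcd.sh etcd03 192.168.30.132   # on node02
```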
Check the cluster health:
/opt/kubernetes/bin/etcdctl \
--ca-file=/opt/kubernetes/ssl/ca.pem \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--endpoints="https://192.168.30.130:2379,https://192.168.30.131:2379,https://192.168.30.132:2379" \
cluster-health
member 6ef4890a57d074a is healthy: got healthy result from https://192.168.30.132:2379
member f6c3c8a6e5120dc is healthy: got healthy result from https://192.168.30.131:2379
member 474da0838d5726c7 is healthy: got healthy result from https://192.168.30.130:2379
3.5 Deploying the Flannel Network
3.5.1 How the Flannel network works
Overlay Network:
An overlay network is a virtual network layered on top of the base network; hosts in it are connected by virtual links;
VXLAN:
Encapsulates the original packet in UDP, using the base network's IP/MAC as the outer header, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the data to the target address;
Flannel:
A kind of overlay network: it likewise encapsulates packets inside another network packet for routing, forwarding, and communication; it currently supports UDP, VXLAN, AWS VPC, GCE routing, and other forwarding backends;
Other mainstream solutions for multi-host container networking:
tunnel-based (Weave, Open vSwitch), route-based (Calico), etc.;
VXLAN network (figure)
Flannel network (figure)
3.5.2 Deploying Flannel
Write a subnet configuration into etcd for flanneld to use (only required the first time):
/opt/kubernetes/bin/etcdctl \
--ca-file=/opt/kubernetes/ssl/ca.pem \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--endpoints="https://192.168.30.130:2379,https://192.168.30.131:2379,https://192.168.30.132:2379" \
set /coreos.com/network/config '{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
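To confirm the key was written, the same etcdctl flags can read it back:

```bash
/opt/kubernetes/bin/etcdctl \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/server.pem \
  --key-file=/opt/kubernetes/ssl/server-key.pem \
  --endpoints="https://192.168.30.130:2379" \
  get /coreos.com/network/config
```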
Install flannel:
tar -xf flannel-v0.11.0-linux-amd64.tar.gz
mv -v flanneld mk-docker-opts.sh /opt/kubernetes/bin/
#!/bin/bash
# flanneld.sh
ETCD_ENDPOINTS=${1:-"https://192.168.30.130:2379,https://192.168.30.131:2379,https://192.168.30.132:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \\
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
-etcd-certfile=/opt/kubernetes/ssl/server.pem \\
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# Configure docker to use the flannel subnet:
cat <<EOF >/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker
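flanneld.sh takes the etcd endpoint list as its single optional argument; run it on every node that needs the overlay network (including the master, if it should reach ClusterIPs, as noted in section 3.10):

```bash
bash flanneld.sh "https://192.168.30.130:2379,https://192.168.30.131:2379,https://192.168.30.132:2379"
```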
Verify the configuration:
[root@node01 ~]# ip a show
......
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:72:53:18:e4 brd ff:ff:ff:ff:ff:ff
inet 172.17.59.1/24 brd 172.17.59.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether d2:52:a7:81:44:f0 brd ff:ff:ff:ff:ff:ff
inet 172.17.59.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
......
docker0 is now on the same network as flannel.1;
3.6 Obtaining the Kubernetes Release Package
Kubernetes server binaries: https://dl.k8s.io/v1.17.3/kubernetes-server-linux-amd64.tar.gz
3.7 Deploying the Master Components
tar -xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp -v kubectl kube-controller-manager kube-scheduler kube-apiserver /opt/kubernetes/bin/
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile
3.7.1 Creating the Node kubeconfig files
#!/bin/bash
# kubeconfig.sh
# --------------------------------------------------------
# 1. Create the TLS bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat <<EOF > /opt/kubernetes/cfg/token.csv
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# --------------------------------------------------------
# 2. Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://192.168.30.130:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
# --------------------------------------------------------
# 3. Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
scp /opt/kubernetes/cfg/bootstrap.kubeconfig /opt/kubernetes/cfg/kube-proxy.kubeconfig node01:/opt/kubernetes/cfg/
scp /opt/kubernetes/cfg/bootstrap.kubeconfig /opt/kubernetes/cfg/kube-proxy.kubeconfig node02:/opt/kubernetes/cfg/
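The generated kubeconfig files can be sanity-checked locally; sensitive fields are redacted by default:

```bash
kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
kubectl config view --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
```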
3.7.2 Installing kube-apiserver
#!/bin/bash
# apiserver.sh
MASTER_ADDRESS=${1:-"192.168.30.130"}
ETCD_SERVERS=${2:-"https://192.168.30.130:2379,https://192.168.30.131:2379,https://192.168.30.132:2379"}
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
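A quick liveness check; per the options above, the insecure port listens on 127.0.0.1:8080:

```bash
systemctl status kube-apiserver
curl http://127.0.0.1:8080/version   # should return the v1.17.3 build info as JSON
```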
3.7.3 Installing kube-controller-manager
#!/bin/bash
# controller-manager.sh
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
3.7.4 Installing kube-scheduler
#!/bin/bash
# scheduler.sh
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
3.7.5 Checking cluster component status
[root@master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
3.8 Deploying the Node Components
tar -xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp -v kubelet kubectl kube-proxy /opt/kubernetes/bin/
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile
3.8.1 Installing kubelet
[root@master ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
#!/bin/bash
# kubelet.sh
NODE_ADDRESS=${1:-"192.168.30.131"}
DNS_SERVER_IP=${2:-"10.10.10.2"}
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
EOF
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
3.8.2 Installing kube-proxy
#!/bin/bash
# proxy.sh
NODE_ADDRESS=${1:-"192.168.30.131"}
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=172.17.0.0/16 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
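Because --proxy-mode=ipvs is set, the virtual-server rules can be inspected once Services exist. This assumes ipvsadm/ipset are installed separately; if the kernel lacks the IPVS modules, kube-proxy falls back to iptables mode:

```bash
yum -y install ipvsadm ipset   # not present on a minimal CentOS 7 install
ipvsadm -Ln                    # list IPVS virtual servers and their backends
```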
3.8.3 Approving Node Certificates
View pending certificate signing requests:
[root@master ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-y6qQQ6sdNu3kEyOA7R0f2H3WTT-wC3Wnk1ZGhlbu2Kw 2m38s kubelet-bootstrap Pending
Approve the node certificate:
[root@master ~]# kubectl certificate approve node-csr-y6qQQ6sdNu3kEyOA7R0f2H3WTT-wC3Wnk1ZGhlbu2Kw
certificatesigningrequest.certificates.k8s.io/node-csr-y6qQQ6sdNu3kEyOA7R0f2H3WTT-wC3Wnk1ZGhlbu2Kw approved
[root@master ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-y6qQQ6sdNu3kEyOA7R0f2H3WTT-wC3Wnk1ZGhlbu2Kw 5m3s kubelet-bootstrap Approved,Issued
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.30.131 Ready <none> 31s v1.17.3
3.9 Installing CoreDNS
coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
serviceAccountName: coredns
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
- name: coredns
image: coredns/coredns:1.2.2
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.10.10.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
[root@master UI]# kubectl create -f coredns.yaml
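A hedged DNS smoke test (the Pod name is arbitrary; busybox:1.28 is pinned because nslookup misbehaves in some newer busybox images):

```bash
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
# Expected: the name resolves to the kubernetes Service ClusterIP (10.10.10.1).
```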
3.10 Running a Test Example
[root@master ~]# kubectl run nginx --image=nginx --replicas=3
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version.
Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
[root@master ~]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-6db489d4b7-55d5x 0/1 ContainerCreating 0 111s
pod/nginx-6db489d4b7-jt4s4 0/1 ContainerCreating 0 111s
pod/nginx-6db489d4b7-zcklh 0/1 ContainerCreating 0 111s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.10.10.1 <none> 443/TCP 30m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 0/3 3 0 111s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-6db489d4b7 3 3 0 111s
[root@master ~]# kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-6db489d4b7-55d5x 0/1 ContainerCreating 0 2m58s <none> 192.168.30.132 <none> <none>
pod/nginx-6db489d4b7-jt4s4 0/1 ContainerCreating 0 2m58s <none> 192.168.30.131 <none> <none>
pod/nginx-6db489d4b7-zcklh 0/1 ContainerCreating 0 2m58s <none> 192.168.30.131 <none> <none>
[root@master ~]# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
service/nginx exposed
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.10.10.1 <none> 443/TCP 35m
nginx NodePort 10.10.10.164 <none> 88:31114/TCP 2m2s
# Access from a Node; to access from the Master as well, flanneld must be deployed on the Master
[root@node01 ~]# curl http://10.10.10.164:88
# From a browser, http://<Node NIC IP>:31114 should show the Nginx welcome page; this check did not pass here
3.11 Deploying the Web UI (Dashboard)
dashboard-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
addonmanager.kubernetes.io/mode: Reconcile
name: kubernetes-dashboard
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
addonmanager.kubernetes.io/mode: Reconcile
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
dashboard-controller.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
serviceAccountName: kubernetes-dashboard
containers:
- name: kubernetes-dashboard
image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.7.1
resources:
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 9090
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
type: NodePort
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090
kubectl create -f dashboard-rbac.yaml
kubectl create -f dashboard-controller.yaml
kubectl create -f dashboard-service.yaml
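After creating the three objects, the Service's NodePort shows where to point a browser (http://<node IP>:<nodePort>):

```bash
kubectl get pods,svc -n kube-system -l k8s-app=kubernetes-dashboard
```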