This guide deploys a Kubernetes cluster from binary releases. Compared with installer-based methods, a binary deployment helps you understand how the components call each other and makes later maintenance easier.
Host environment
OS: CentOS 7.5, 3 machines
Memory: 4 GB
Disk: 40 GB
CPU: 2 cores
Software versions
Kubernetes 1.18
Docker 19.03 (CE)
Host plan
k8s-master  172.16.210.53  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1   172.16.210.54  kubelet, kube-proxy, docker, etcd
k8s-node2   172.16.210.55  kubelet, kube-proxy, docker, etcd
1. Initialize the host environment
Run the following on all 3 nodes:
# Stop and disable the firewall
systemctl stop firewalld ; systemctl disable firewalld
# Disable SELinux
setenforce 0 ; sed -i 's/enforcing/disabled/' /etc/selinux/config
# Disable swap
swapoff -a ; sed -ri 's/.*swap.*/#&/' /etc/fstab
# Add hosts entries
cat >> /etc/hosts << EOF
172.16.210.53 master k8s-master
172.16.210.54 node1 k8s-node1
172.16.210.55 node2 k8s-node2
EOF
# Let bridged traffic pass through iptables and enable IP forwarding
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
sysctl --system    # apply the settings
# Time synchronization
yum install -y ntpdate       # install the time sync tool
ntpdate time.windows.com     # sync against a public time server
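The swap-disabling sed above edits /etc/fstab in place: `s/.*swap.*/#&/` comments out any line that mentions swap. Before running it on a real system, the effect can be previewed on a throwaway copy (a sketch; the sample fstab content is illustrative):

```shell
#!/bin/bash
# Preview the swap-disabling edit on a copy of fstab instead of the real file.
fstab_copy=$(mktemp)
cat > "$fstab_copy" << 'EOF'
/dev/mapper/centos-root /         xfs   defaults 0 0
/dev/mapper/centos-swap swap      swap  defaults 0 0
UUID=abcd-1234          /boot     xfs   defaults 0 0
EOF

# Same expression as above: comment out every line containing "swap"
sed -ri 's/.*swap.*/#&/' "$fstab_copy"
cat "$fstab_copy"
```

Only the swap line gains a leading `#`; the root and boot entries are untouched.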
2. Deploy the etcd cluster
etcd is a distributed key-value store; Kubernetes uses etcd to store all of its cluster state.
2.1 Prepare the cfssl certificate tool
cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl.
On the master:
# Download the certificate management tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# Make them executable and move them into the PATH
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
2.2 Generate etcd certificates
Create the certificate directory:
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd    # enter the certificate directory
Self-signed CA:
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem    # the current directory now contains ca-key.pem and ca.pem
2. Sign the etcd HTTPS certificate with the self-signed CA
Create the certificate signing request (the hosts field must list the IP of every etcd node):
cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "172.16.210.53",
    "172.16.210.54",
    "172.16.210.55"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
ls server*pem    # two server certificates are generated: server-key.pem and server.pem
2.3 Download the etcd binaries
Download URL: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
Run the following on the master; all generated files will later be copied to node1 and node2:
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz    # fetch the binary release
2.4 Deploy the etcd cluster
1. Create the working directory and unpack the binaries
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
2. Create the etcd configuration file
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.210.53:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.210.53:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.210.53:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.210.53:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.210.53:2380,etcd-2=https://172.16.210.54:2380,etcd-3=https://172.16.210.55:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Parameter reference:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) communication
ETCD_LISTEN_CLIENT_URLS: listen address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: join state; "new" for a new cluster, "existing" to join an existing one
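Since only ETCD_NAME and the node IP differ between the three nodes, the whole file can be generated from a small helper instead of being edited by hand on each node. A sketch (the function name gen_etcd_conf is illustrative; the member list matches this three-node layout):

```shell
#!/bin/bash
# Print an etcd.conf for one node; arguments: node name, node IP.
gen_etcd_conf() {
  local name="$1" ip="$2"
  cat << EOF
#[Member]
ETCD_NAME="${name}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ip}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ip}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ip}:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.210.53:2380,etcd-2=https://172.16.210.54:2380,etcd-3=https://172.16.210.55:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}

# Example: print the config for node 2 (redirect into
# /opt/etcd/cfg/etcd.conf on the target node in real use)
gen_etcd_conf etcd-2 172.16.210.54
```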
3. Manage etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
4. Copy the generated certificates
Copy the certificates generated earlier into the path referenced by the unit file:
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
5. Copy all files generated on the master to the other nodes
scp -r /opt/etcd/ 172.16.210.54:/opt/
scp /usr/lib/systemd/system/etcd.service 172.16.210.54:/usr/lib/systemd/system/
scp -r /opt/etcd/ 172.16.210.55:/opt/
scp /usr/lib/systemd/system/etcd.service 172.16.210.55:/usr/lib/systemd/system/
6. On node1 and node2, change the node name and server IP in etcd.conf
sed -i '4,8s/172.16.210.53/172.16.210.54/' /opt/etcd/cfg/etcd.conf ; sed -i '2s/etcd-1/etcd-2/' /opt/etcd/cfg/etcd.conf    # run on node1
sed -i '4,8s/172.16.210.53/172.16.210.55/' /opt/etcd/cfg/etcd.conf ; sed -i '2s/etcd-1/etcd-3/' /opt/etcd/cfg/etcd.conf    # run on node2
7. Start etcd on all three nodes and enable it at boot
Run on all three nodes:
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
8. Check the etcd cluster status
[root@master ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.210.53:2379,https://172.16.210.54:2379,https://172.16.210.55:2379" endpoint health
https://172.16.210.54:2379 is healthy: successfully committed proposal: took = 14.194738ms
https://172.16.210.55:2379 is healthy: successfully committed proposal: took = 14.97292ms
https://172.16.210.53:2379 is healthy: successfully committed proposal: took = 14.847968ms
If every endpoint reports "successfully", etcd is deployed correctly. If anything fails, inspect the error with systemctl status etcd -l.
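For scripted checks, that output can be turned into a pass/fail result by counting the healthy endpoints. A sketch (check_etcd_health is an illustrative helper; in real use, pipe the etcdctl command above into it):

```shell
#!/bin/bash
# Count "is healthy" lines in `etcdctl endpoint health` output and
# fail unless all expected endpoints report healthy.
check_etcd_health() {
  local expected="$1" healthy
  healthy=$(grep -c "is healthy" || true)
  if [ "$healthy" -eq "$expected" ]; then
    echo "etcd cluster OK (${healthy}/${expected} endpoints healthy)"
  else
    echo "etcd cluster DEGRADED (${healthy}/${expected} endpoints healthy)" >&2
    return 1
  fi
}

# Example on captured output (replace the printf with the etcdctl pipeline):
printf '%s\n' \
  "https://172.16.210.53:2379 is healthy: successfully committed proposal: took = 14.8ms" \
  "https://172.16.210.54:2379 is healthy: successfully committed proposal: took = 14.1ms" \
  "https://172.16.210.55:2379 is healthy: successfully committed proposal: took = 14.9ms" \
  | check_etcd_health 3
# prints: etcd cluster OK (3/3 endpoints healthy)
```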
3. Install Docker
Docker could be installed with yum, but here we use the binary release.
Run all of the following on every node.
3.1 Download the Docker package
wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
3.2 Unpack the Docker binaries
tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin
3.3 Manage Docker with systemd
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
3.4 Configure a registry mirror
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://jo6348gu.mirror.aliyuncs.com"]
}
EOF
3.5 Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
4. Deploy the master
Run the following on the master.
4.1 Generate the kube-apiserver certificates
1. Self-signed certificate authority (CA)
cd ~/TLS/k8s
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem    # ca-key.pem and ca.pem are generated
2. Sign the kube-apiserver HTTPS certificate with the self-signed CA
Create the certificate signing request (the hosts field lists every trusted address: the first Service IP, localhost, all node IPs, and the built-in kubernetes service names):
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "172.16.210.53",
    "172.16.210.54",
    "172.16.210.55",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
ls server*pem    # two more server certificates are generated: server-key.pem and server.pem
4.2 Download the Kubernetes binaries from GitHub and unpack them
1. Download the binary package
wget https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz
2. Unpack the binary package
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
4.3 Deploy kube-apiserver
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=4 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://172.16.210.53:2379,https://172.16.210.54:2379,https://172.16.210.55:2379 \\
--bind-address=172.16.210.53 \\
--secure-port=6443 \\
--advertise-address=172.16.210.53 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
Parameter reference:
--logtostderr: when false, log to files instead of stderr
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver HTTPS certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
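The config references a dozen certificate, key, and token paths; a typo in any of them keeps the apiserver from starting. Before the first start it can help to verify the referenced files exist. A hedged sketch (check_conf_files is an illustrative helper, not part of Kubernetes; it only looks for .pem and .csv paths):

```shell
#!/bin/bash
# Extract /path/file.pem and /path/file.csv values from --flag=/path style
# options in a config file and report any that are missing on disk.
check_conf_files() {
  local conf="$1" missing=0 path
  for path in $(grep -oE '=(/[^ "\\]+\.(pem|csv))' "$conf" | tr -d '='); do
    if [ ! -f "$path" ]; then
      echo "MISSING: $path" >&2
      missing=1
    fi
  done
  return $missing
}

# Usage on a real deployment:
#   check_conf_files /opt/kubernetes/cfg/kube-apiserver.conf && echo "all files present"
```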
2. Copy the generated certificates
Copy the certificates generated earlier into the path referenced by the configuration file:
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
3. Enable the TLS Bootstrapping mechanism
TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on every node must present a valid CA-signed certificate to communicate with kube-apiserver. With many nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping, which issues client certificates automatically: the kubelet connects to the apiserver as a low-privilege user and requests a certificate, which the apiserver signs dynamically. This approach is strongly recommended for nodes; it is currently used for the kubelet, while kube-proxy still uses a certificate we issue ourselves.
TLS bootstrapping workflow: (diagram omitted)
Create the token file referenced in the configuration above:
cat > /opt/kubernetes/cfg/token.csv << EOF
b1dc586d69159ff4e3ef7efa9db60e48,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
Format: token,username,UID,"group"
You can also generate your own token and substitute it:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
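The token generation and the file write can be combined in one helper that always produces the expected four-field format (a sketch; gen_token_csv is an illustrative name):

```shell
#!/bin/bash
# Write a token.csv containing a freshly generated 32-hex-character token
# in the format: token,username,UID,"group".
gen_token_csv() {
  local file="$1" token
  token=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
  printf '%s,kubelet-bootstrap,10001,"system:node-bootstrapper"\n' "$token" > "$file"
  echo "$token"    # print the token so it can be reused in bootstrap.kubeconfig
}

# Example: write to a throwaway file (use /opt/kubernetes/cfg/token.csv for real)
gen_token_csv /tmp/token.csv
```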
4. Manage the apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
6. Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
4.4 Deploy kube-controller-manager
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=4 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
--master: connect to the apiserver over the local insecure port 8080
--leader-elect: enable leader election when multiple replicas of this component run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically sign kubelet certificates; must match the apiserver's CA
2. Manage controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
3. Start and enable at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
4.5 Deploy kube-scheduler
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
--master: connect to the apiserver over the local insecure port 8080
--leader-elect: enable leader election when multiple replicas of this component run (HA)
2. Manage the scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
3. Start and enable at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
4. Check the cluster status
All control-plane components are now running. Check their status with kubectl get cs:
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
5. Deploy the Worker Nodes
The following is still done on the master, which will double as a Worker Node.
5.1 Copy the binaries
cd ~/kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin
5.2 Deploy the kubelet
1. Create the configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=4 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
Parameter reference:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory for generated kubelet certificates
--pod-infra-container-image: image for the Pod infrastructure (pause) container
2. Create the configuration parameter YAML file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3. Generate the bootstrap.kubeconfig file
# Set environment variables
KUBE_APISERVER="https://172.16.210.53:6443"    # apiserver IP:PORT
TOKEN="b1dc586d69159ff4e3ef7efa9db60e48"       # must match token.csv
# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Copy it to the configuration path:
cp bootstrap.kubeconfig /opt/kubernetes/cfg
4. Manage the kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
5.3 Approve the kubelet certificate request and join the cluster
# Check pending kubelet certificate requests
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-d-UyqVObT-tnWdXd881Ppc3oNVr6xkCBXV7VRlWyhf8   30s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# Approve the request
kubectl certificate approve node-csr-d-UyqVObT-tnWdXd881Ppc3oNVr6xkCBXV7VRlWyhf8
# Check the node
kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   15s   v1.18.3    # NotReady because no network plugin is deployed yet
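When several nodes bootstrap at once, the pending requests can be approved in bulk instead of one by one. A sketch (pending_csrs is an illustrative helper; only the name-extraction pipeline can be exercised without a cluster):

```shell
#!/bin/bash
# Extract the names of Pending CSRs from `kubectl get csr` output.
pending_csrs() {
  awk 'NR > 1 && $NF == "Pending" { print $1 }'
}

# Real use (requires a running cluster):
#   kubectl get csr | pending_csrs | xargs -r kubectl certificate approve

# Example on captured output:
printf '%s\n' \
  "NAME        AGE  SIGNERNAME  REQUESTOR          CONDITION" \
  "node-csr-a  30s  signer      kubelet-bootstrap  Pending" \
  "node-csr-b  12s  signer      kubelet-bootstrap  Approved,Issued" \
  | pending_csrs
# prints: node-csr-a
```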
5.4 Deploy kube-proxy
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=4 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
2. Create the parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
EOF
3. Generate the kube-proxy.kubeconfig file
Generate the kube-proxy certificate:
# Switch to the working directory
cd ~/TLS/k8s
# Create the certificate signing request
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*pem    # two kube-proxy certificate files are generated: kube-proxy-key.pem and kube-proxy.pem
Generate the kubeconfig file:
# Set the apiserver address
KUBE_APISERVER="https://172.16.210.53:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Copy it to the configured path:
cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
4. Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
5.5 Deploy the CNI network
1. Download the CNI plugin binaries:
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
2. Unpack them into the default working directory
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
3. Deploy Flannel
Fetch the Flannel deployment YAML and change the image address:
echo "151.101.76.133 raw.githubusercontent.com" >> /etc/hosts    # raw.githubusercontent.com may fail to resolve otherwise
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml    # the default image registry is unreachable; switch to a Docker Hub mirror
Deploy the CNI network:
kubectl apply -f kube-flannel.yml
# Check that the pod is running
kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS
kube-flannel-ds-amd64-p9tdp   1/1     Running   0
# Once it is running, check that the node is Ready
kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   19m   v1.18.3
5.6 Authorize the apiserver to access the kubelet
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
5.7 Add Worker Nodes
1. Copy the deployed Node files to the new nodes
On the master, copy the Worker Node files to 172.16.210.54 and 172.16.210.55:
scp -r /opt/kubernetes root@172.16.210.54:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@172.16.210.54:/usr/lib/systemd/system
scp -r /opt/cni/ root@172.16.210.54:/opt/
scp /opt/kubernetes/ssl/ca.pem root@172.16.210.54:/opt/kubernetes/ssl
2. Delete the kubelet certificate and kubeconfig file
On the new node, delete the copied kubelet certificate and kubeconfig; they were generated during the master's TLS bootstrap and must be unique per node:
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
3. Change the hostname
sed -i 's/k8s-master/k8s-node1/g' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml    # when adding node2, change k8s-node1 in this command to k8s-node2
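The per-node substitution can be wrapped in a small helper so the same command works for any new node (a sketch; rename_node is an illustrative name, and the file paths are passed as arguments so the example can run on throwaway copies):

```shell
#!/bin/bash
# Replace the master's hostname-override with the new node's name in the
# kubelet and kube-proxy configs passed as arguments.
rename_node() {
  local new_name="$1"; shift
  local f
  for f in "$@"; do
    sed -i "s/k8s-master/${new_name}/g" "$f"
  done
}

# Example on throwaway copies of the two configs:
tmp=$(mktemp -d)
echo '--hostname-override=k8s-master' > "$tmp/kubelet.conf"
echo 'hostnameOverride: k8s-master'   > "$tmp/kube-proxy-config.yml"
rename_node k8s-node1 "$tmp/kubelet.conf" "$tmp/kube-proxy-config.yml"
grep hostname "$tmp/kubelet.conf" "$tmp/kube-proxy-config.yml"
```

On a real node you would pass /opt/kubernetes/cfg/kubelet.conf and /opt/kubernetes/cfg/kube-proxy-config.yml instead.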
4. Start and enable at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
5. On the master, approve the new node's kubelet certificate request
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr--t2cjSYX0z7ba4Tyh4GCnngZaGBUwmAHyY1xuxU40j0   28s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
kubectl certificate approve node-csr--t2cjSYX0z7ba4Tyh4GCnngZaGBUwmAHyY1xuxU40j0
6. Check Node status
kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    <none>   46m     v1.18.3
k8s-node1    Ready    <none>   8m57s   v1.18.3
k8s-node2    Ready    <none>   3m59s   v1.18.3
The steps for node2 (172.16.210.55) are identical; remember to change the hostname.
6. Deploy Dashboard and CoreDNS
Dashboard deployment is covered in another of my articles and is skipped here:
https://www.jianshu.com/p/6bafe568f103
Deploy CoreDNS
CoreDNS provides Service name resolution inside the cluster:
kubectl apply -f coredns.yaml
kubectl get pods -n kube-system    # check that the coredns pod is running
NAME                          READY   STATUS    RESTARTS   AGE
coredns-5ffbfd976d-rkcmt      1/1     Running   0          23s
kube-flannel-ds-amd64-2kmcm   1/1     Running   0          14m
kube-flannel-ds-amd64-p9tdp   1/1     Running   0          39m
kube-flannel-ds-amd64-zg7xz   1/1     Running   0          19m
Test
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
The name resolves correctly, so cluster DNS is working.