Installing Kubernetes v1.19.0 with kubeadm
Updated 2020-09-29
Notes:
Cluster environment:
VMware 15
OS: CentOS 7
Cluster nodes:
k8s-master 172.16.22.30 2 CPUs / 2 GB RAM
k8s-node1 172.16.22.31 2 CPUs / 2 GB RAM
k8s-node2 172.16.22.32 2 CPUs / 2 GB RAM
1. Configure the system
Run everything in this section on the master and on all nodes.
1.1 Disable the firewall (all nodes)
systemctl stop firewalld && systemctl disable firewalld
1.2 Disable SELinux (all nodes)
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled.
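The same change can be made non-interactively; the sed pattern below assumes the stock CentOS 7 config file:
setenforce 0 #switch to permissive mode immediately, for the current boot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config #persist across reboots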
1.3 Configure /etc/hosts and the hostname (all nodes)
Set the hostname (shown for the master; use k8s-node1 / k8s-node2 on the workers):
vi /etc/hostname #write this machine's hostname
k8s-master
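Alternatively, hostnamectl applies the name immediately and persists it in one step:
hostnamectl set-hostname k8s-master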
Edit the hosts file:
vi /etc/hosts #add each IP address and its hostname
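With the cluster information above, the file should contain:
172.16.22.30 k8s-master
172.16.22.31 k8s-node1
172.16.22.32 k8s-node2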
1.4 Turn off swap and disable it at boot (all nodes)
swapoff -a #turn off swap immediately
swapon -s #verify: no swap device should be listed
vi /etc/fstab
#comment out the swap entry in /etc/fstab so swap stays off after a reboot.
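A non-interactive equivalent, assuming a standard fstab with a single swap entry (GNU sed, as shipped with CentOS 7):
sed -i '/\sswap\s/ s/^/#/' /etc/fstab #prefix the swap line with #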
1.5 Configure the Aliyun yum repository (all nodes)
Run:
vi /etc/yum.repos.d/k8s.repo
Paste in the following:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
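As an optional sanity check that the repo is reachable and v1.19.0 packages are published:
yum list kubelet --showduplicates --disableexcludes=kubernetes | grep 1.19.0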
1.6 Install Docker (all nodes)
1.6.1 Remove any previously installed Docker packages
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
1.6.2 Install the required packages.
yum-utils provides yum-config-manager, and the devicemapper storage driver requires device-mapper-persistent-data and lvm2.
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
1.6.3 Install Docker Engine (Community)
yum install -y docker-ce docker-ce-cli containerd.io
1.6.4 Configure the Docker registry mirror
vi /etc/docker/daemon.json
Add the following:
{
"registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
"exec-opts": ["native.cgroupdriver=systemd"] 是配置Cgroup 为systemcd
1.6.5 Start Docker and enable it at boot
systemctl restart docker && systemctl enable docker
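After the restart, you can verify that the mirror and the systemd cgroup driver took effect:
docker info | grep -i -E 'cgroup|mirror' #should show "Cgroup Driver: systemd" and the mirror URL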
1.7 Configure kernel parameters (all nodes)
Official docs: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
1.7.1 Load the bridge netfilter module
modprobe br_netfilter
1.7.2 Edit the sysctl config:
vi /etc/sysctl.d/k8s.conf
1.7.3 Add the following, then save:
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
1.7.4 Apply the settings:
sysctl --system
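Note that modprobe does not persist across reboots. A minimal way to load the module at every boot and to verify the settings (the file name k8s.conf is just a convention):
echo br_netfilter > /etc/modules-load.d/k8s.conf #systemd loads listed modules at boot
sysctl net.bridge.bridge-nf-call-iptables #should print "net.bridge.bridge-nf-call-iptables = 1"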
2. Install the cluster
2.1 Install kubeadm, kubectl, and kubelet (all nodes)
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
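The command above installs the newest version in the repo. Since this guide targets v1.19.0 (and kubeadm init below pins --kubernetes-version=v1.19.0), pinning the packages avoids a version mismatch:
yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0 --disableexcludes=kubernetes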
2.2 Start kubelet (all nodes)
systemctl restart kubelet ; systemctl enable kubelet
#kubelet will restart in a crash loop until kubeadm init (or join) runs; that is expected.
2.3 Initialize the cluster (master only)
Official docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
kubeadm init --image-repository registry.aliyuncs.com/k8sxio --kubernetes-version=v1.19.0 --pod-network-cidr=10.244.0.0/16
The pod network CIDR is 10.244.0.0/16, and --image-repository registry.aliyuncs.com/k8sxio pulls the control-plane images from a mirror inside China.
Output:
......
......
......
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.22.30:6443 --token rclr54.vb88o0i8c3a0az1d \
--discovery-token-ca-cert-hash sha256:da6004936b11dce08b430dea3a566df7797583d1752b9c2e2e9843c733846ee5
2.4 Run the commands from the output above (master only)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
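kubectl can now talk to the cluster; the master will show NotReady until the pod network is installed in step 2.6:
kubectl get nodes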
2.5 Join the nodes to the cluster (run on node1 and node2)
kubeadm join 172.16.22.30:6443 --token rclr54.vb88o0i8c3a0az1d \
--discovery-token-ca-cert-hash sha256:da6004936b11dce08b430dea3a566df7797583d1752b9c2e2e9843c733846ee5
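The bootstrap token expires after 24 hours; if the join fails later, generate a fresh join command on the master:
kubeadm token create --print-join-command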
2.6 Install the pod network (pick one of the two options)
Option 1: Flannel (its default network, 10.244.0.0/16, matches the --pod-network-cidr used at init)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Option 2: Calico (updated 2020-02-22)
Calico installation; official quickstart:
https://docs.projectcalico.org/getting-started/kubernetes/quickstart
1. Download https://docs.projectcalico.org/manifests/tigera-operator.yaml
2. Download https://docs.projectcalico.org/manifests/custom-resources.yaml
3. In the second file, custom-resources.yaml, change the CIDR to the pod network CIDR set at cluster init (10.244.0.0/16); see the sketch below.
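A sketch of steps 1-3, assuming wget is available (otherwise use curl -O) and that the manifest still carries Calico's default cidr of 192.168.0.0/16:
wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
wget https://docs.projectcalico.org/manifests/custom-resources.yaml
sed -i 's#192.168.0.0/16#10.244.0.0/16#' custom-resources.yaml #match the init CIDR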
Then run, in order:
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
Watch the rollout:
watch kubectl get pods -n calico-system
Installation is complete when every pod is Running.
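Once the network addon is up, all three nodes should report Ready:
kubectl get nodes
kubectl get pods -A #every system pod should be Running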