Welcome to my GitHub
https://github.com/zq2599/blog_demos
Content: a categorized index of all my original articles with companion source code, covering Java, Docker, Kubernetes, DevOps, and more;
About kubespray
Kubespray is an open-source kubernetes deployment tool built on ansible that makes it easy to deploy highly available cluster environments. Official site: https://github.com/kubernetes-sigs/kubespray. This article walks through deploying kubernetes-1.18.10 with kubespray-2.14.2;
Important prerequisite
This walkthrough uses the officially recommended online installation, which downloads images from Google's registries, so <font color="red">your network must be able to reach Google services</font>;
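A quick way to confirm this prerequisite (a minimal sketch; it assumes the images come from registries such as k8s.gcr.io, which kubespray pulled from by default at this version) is to check that the registry endpoint answers over HTTPS:
curl -sI https://k8s.gcr.io/v2/ | head -n 1
# any HTTP response (for example 401 Unauthorized) means the registry is reachable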
Machine information
- Being on a tight budget, I could only gather two machines for this walkthrough; their hostnames, IP addresses, and roles are described below:
Hostname | IP address | Role | Operating system |
---|---|---|---|
ansible | 192.168.50.134 | ansible host | CentOS7 |
node1 | 192.168.50.27 | k8s server | ubuntu-20.04.1 |
- As shown, kubernetes will be deployed on the <font color="blue">ubuntu machine</font>;
Standard setup
Apply the following settings on the ubuntu machine:
- Edit /etc/hostname and set the hostname
- Edit /etc/hosts and add the machine's own hostname and IP address
- Disable the firewall
ufw disable
- Check again; it should now show as inactive
root@ideapad:~# ufw status
Status: inactive
- Disable selinux; if you are prompted to install <font color="blue">selinux-utils</font>, selinux is not installed and this step can be skipped
setenforce 0
- IPv4 network settings (a sketch for making these settings survive reboots follows at the end of this list)
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
sysctl -w net.ipv4.ip_forward=1
- Disable swap immediately
swapoff -a
- On my machine, the memory situation before disabling swap:
root@ideapad:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          31913         551       30288         137        1073       30839
Swap:          2047           0        2047
- After running <font color="blue">swapoff -a</font>, check again; swap is now all zeros
root@ideapad:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          31913         557       30281         137        1073       30833
Swap:             0           0           0
- The method above takes effect immediately, but swap comes back after a reboot; to disable it permanently, open <font color="blue">/etc/fstab</font> and add a <font color="red">#</font> at the beginning of the line that mounts the swap partition
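- The following is a minimal sketch (assuming a standard Ubuntu 20.04 layout; the k8s.conf file names are arbitrary, and /etc/fstab is backed up before editing) that makes both the swap change and the kernel settings above survive a reboot:
cp /etc/fstab /etc/fstab.bak
# comment out any swap entries in /etc/fstab
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# load br_netfilter on every boot
echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
# persist the bridge and forwarding settings
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system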
Passwordless SSH login from the ansible host
- SSH into the ansible host;
- Generate an SSH key pair: run <font color="blue">ssh-keygen</font> and press Enter four times in a row:
- Run <font color="blue">ssh-copy-id root@192.168.50.27</font> to copy the ansible host's public key to the ubuntu host; you will be asked to type yes and then the ubuntu host's root password. Once that is done, the ansible host can SSH into the ubuntu host without a password:
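- A quick sanity check (a minimal sketch; 192.168.50.27 is the ubuntu host from the table above): if the key was copied correctly, the command below prints the remote hostname without asking for a password:
ssh -o BatchMode=yes root@192.168.50.27 hostname
# expected output: node1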
Operations on the ansible host
- SSH into the ansible host;
- Install ansible:
yum install -y epel-release ansible
- Install pip:
easy_install pip
- Install jinja2 via pip:
pip2 install jinja2 --upgrade
- Install python36:
yum install python36 -y
- Create a working directory and enter it:
mkdir /usr/local/kubespray && cd /usr/local/kubespray/
- Download kubespray; I am using version <font color="blue">v2.14.2</font> here:
wget https://github.com/kubernetes-sigs/kubespray/archive/v2.14.2.tar.gz
- Unpack it:
tar -zxvf v2.14.2.tar.gz
- Enter the unpacked directory:
cd kubespray-2.14.2/
- Install the packages kubespray depends on (note that this time it is <font color="red">pip3</font>):
pip3 install -r requirements.txt
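- A quick check (a minimal sketch) that the dependencies landed in the python3 environment and that ansible is available; the versions shown should match those pinned in requirements.txt:
ansible --version
pip3 list | grep -iE 'ansible|jinja2'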
- Copy the sample configuration into the directory <font color="blue">inventory/mycluster</font>:
cp -rfp inventory/sample inventory/mycluster
- Take a look inside; many files have been copied into the mycluster directory:
[root@kubespray kubespray-2.14.2]# tree inventory/
inventory/
├── local
│ ├── group_vars -> ../sample/group_vars
│ └── hosts.ini
├── mycluster
│ ├── group_vars
│ │ ├── all
│ │ │ ├── all.yml
│ │ │ ├── aws.yml
│ │ │ ├── azure.yml
│ │ │ ├── containerd.yml
│ │ │ ├── coreos.yml
│ │ │ ├── docker.yml
│ │ │ ├── gcp.yml
│ │ │ ├── oci.yml
│ │ │ ├── openstack.yml
│ │ │ └── vsphere.yml
│ │ ├── etcd.yml
│ │ └── k8s-cluster
│ │ ├── addons.yml
│ │ ├── k8s-cluster.yml
│ │ ├── k8s-net-calico.yml
│ │ ├── k8s-net-canal.yml
│ │ ├── k8s-net-cilium.yml
│ │ ├── k8s-net-contiv.yml
│ │ ├── k8s-net-flannel.yml
│ │ ├── k8s-net-kube-router.yml
│ │ ├── k8s-net-macvlan.yml
│ │ └── k8s-net-weave.yml
│ └── inventory.ini
└── sample
├── group_vars
│ ├── all
│ │ ├── all.yml
│ │ ├── aws.yml
│ │ ├── azure.yml
│ │ ├── containerd.yml
│ │ ├── coreos.yml
│ │ ├── docker.yml
│ │ ├── gcp.yml
│ │ ├── oci.yml
│ │ ├── openstack.yml
│ │ └── vsphere.yml
│ ├── etcd.yml
│ └── k8s-cluster
│ ├── addons.yml
│ ├── k8s-cluster.yml
│ ├── k8s-net-calico.yml
│ ├── k8s-net-canal.yml
│ ├── k8s-net-cilium.yml
│ ├── k8s-net-contiv.yml
│ ├── k8s-net-flannel.yml
│ ├── k8s-net-kube-router.yml
│ ├── k8s-net-macvlan.yml
│ └── k8s-net-weave.yml
└── inventory.ini
10 directories, 45 files
- Declare the cluster node IPs (the current directory is still kubespray-2.14.2):
declare -a IPS=(192.168.50.27)
- Generate the ansible inventory:
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
- At this point kubespray's scripts have laid out the cluster based on the IPs you supplied; the details are in <font color="blue">inventory/mycluster/hosts.yml</font>, shown below, and you can also edit this file yourself:
[root@kubespray kubespray-2.14.2]# cat inventory/mycluster/hosts.yml
all:
  hosts:
    node1:
      ansible_host: 192.168.50.27
      ip: 192.168.50.27
      access_ip: 192.168.50.27
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node1:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
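- For reference only (a hypothetical sketch; the two extra IPs are made up): for a multi-node HA cluster you would simply list more IPs before generating the inventory, and the resulting hosts.yml would then spread the kube-master, kube-node, and etcd groups across the nodes:
declare -a IPS=(192.168.50.27 192.168.50.28 192.168.50.29)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}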
- Run the following command to start the installation; the online installation takes quite a while, so please be patient:
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
- Failing and exiting because of network problems is very common; when it happens, just re-run the command above, since ansible skips the steps that have already succeeded;
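- When retries are needed it helps to keep a log of each run; a minimal sketch (the log file name is arbitrary):
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml 2>&1 | tee -a kubespray-install.log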
- When the installation completes, the console prints something like the following (heavily trimmed):
Saturday 21 November 2020 17:47:18 +0800 (0:00:00.025) 0:30:03.154 *****
Saturday 21 November 2020 17:47:18 +0800 (0:00:00.024) 0:30:03.179 *****
PLAY RECAP **********************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node1 : ok=591 changed=95 unreachable=0 failed=0 skipped=1131 rescued=0 ignored=0
Saturday 21 November 2020 17:47:18 +0800 (0:00:00.021) 0:30:03.200 *****
===============================================================================
download : download_file | Download item ------------------------------------------------------------------------------------------------------------------ 1008.61s
kubernetes/preinstall : Update package management cache (APT) ---------------------------------------------------------------------------------------------- 119.25s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 42.36s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 38.26s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 37.31s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 36.60s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 35.01s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 34.00s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 30.55s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 27.47s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 26.78s
kubernetes/master : kubeadm | Initialize first master ------------------------------------------------------------------------------------------------------- 25.98s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 23.42s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 22.14s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 21.50s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 20.17s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------- 17.55s
container-engine/docker : ensure docker packages are installed ----------------------------------------------------------------------------------------------- 9.73s
kubernetes/master : Master | wait for kube-scheduler --------------------------------------------------------------------------------------------------------- 7.83s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template ---------------------------------------------------------------------------------------- 6.93s
- At this point the kubernetes cluster has been deployed; next, run a few simple checks to confirm the environment is usable;
Checking the environment
- SSH into the ubuntu machine;
- Check the nodes, services, and pods:
root@node1:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready master 104m v1.18.10 192.168.50.27 <none> Ubuntu 20.04.1 LTS 5.4.0-54-generic docker://19.3.12
root@node1:~# kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 105m
kube-system coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 104m
kube-system dashboard-metrics-scraper ClusterIP 10.233.12.230 <none> 8000/TCP 104m
kube-system kubernetes-dashboard ClusterIP 10.233.61.24 <none> 443/TCP 104m
root@node1:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6ccb68f9b5-kwqck 1/1 Running 0 104m
kube-system calico-node-4lmpf 1/1 Running 0 104m
kube-system coredns-dff8fc7d-2gnl8 1/1 Running 0 104m
kube-system coredns-dff8fc7d-4vthn 0/1 Pending 0 104m
kube-system dns-autoscaler-66498f5c5f-qh4vb 1/1 Running 0 104m
kube-system kube-apiserver-node1 1/1 Running 0 105m
kube-system kube-controller-manager-node1 1/1 Running 0 105m
kube-system kube-proxy-kk84b 1/1 Running 0 105m
kube-system kube-scheduler-node1 1/1 Running 0 105m
kube-system kubernetes-dashboard-667c4c65f8-8ckf5 1/1 Running 0 104m
kube-system kubernetes-metrics-scraper-54fbb4d595-dk42t 1/1 Running 0 104m
kube-system nodelocaldns-d69h9 1/1 Running 0 104m
- The essential pods and services are all running; next, check whether the dashboard is reachable;
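- As a further optional smoke test (a minimal sketch; the deployment name web-test and the nginx image are arbitrary choices), schedule a plain deployment, confirm its pod reaches Running, then clean it up:
kubectl create deployment web-test --image=nginx
kubectl get pods -o wide
kubectl delete deployment web-test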
Accessing the dashboard
The dashboard shows the overall state of the kubernetes system; to access the dashboard page, RBAC resources need to be added:
- SSH into the ubuntu machine;
- Run the following command to create the file <font color="blue">admin-user.yaml</font>:
tee admin-user.yaml <<-'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
- Run the following command to create the file <font color="blue">admin-user-role.yaml</font>:
tee admin-user-role.yaml <<-'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
- Create the ServiceAccount and ClusterRoleBinding:
kubectl create -f admin-user.yaml && kubectl create -f admin-user-role.yaml
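- A quick check (a minimal sketch) that both resources exist:
kubectl -n kube-system get serviceaccount admin-user
kubectl get clusterrolebinding admin-user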
- Change the type of the <font color="blue">kubernetes-dashboard</font> service from ClusterIP to NodePort so that the dashboard can be reached from a browser:
kubectl patch svc kubernetes-dashboard -n kube-system \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30443}]}}'
- Check the services again; the type has been changed to <font color="blue">NodePort</font>:
root@node1:~# kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 132m
kube-system coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 131m
kube-system dashboard-metrics-scraper ClusterIP 10.233.12.230 <none> 8000/TCP 131m
kube-system kubernetes-dashboard NodePort 10.233.61.24 <none> 443:30443/TCP 131m
- Retrieve the token that will be used to log in to the dashboard page:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
- The token value appears in the output of the command above:
- Now open the dashboard page in a browser at https://192.168.50.27:30443 , where <font color="blue">192.168.50.27</font> is the IP address of the ubuntu machine;
- Because the dashboard serves a self-signed certificate, the browser may show a security warning; choose <font color="blue">Proceed anyway</font>:
- The page then asks you to choose a sign-in method; select <font color="blue">Token</font>, paste the token obtained earlier, and log in:
- After a successful login you can see the system information;
This completes the installation of kubernetes-1.18.10 with kubespray-2.14.2; I hope this article gives you some useful pointers.
You are not alone: Xinchen's original articles accompany you all the way
Feel free to follow my WeChat official account: 程序员欣宸
Search WeChat for 「程序员欣宸」. I am Xinchen, looking forward to exploring the Java world with you...
https://github.com/zq2599/blog_demos