Deploying Kubernetes 1.14.1 on an Internal Network

Part 1: Host Environment Initialization

All hosts run CentOS 7, and we initialize their environments in bulk with a script. Run the script below on every host (for example, put it on an internal web server, then on each host run curl ..... | bash).

The script performs the following steps in order (a few values need to be adapted to your own setup):

  1. Disable SELinux
  2. Disable the firewalld firewall
  3. Configure internal DNS
  4. Switch to the internal Yum repository
  5. Use the internal NTP time service
  6. Update the hostname
  7. Disable the rsyslog logging service

#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail
echo ">>> Kernel: $(uname -r)"

echo ">>> 1. Disabling SELinux"
cp /etc/sysconfig/selinux{,.orig}
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
setenforce 0 || true  # apply immediately; || true keeps errexit happy if SELinux is already disabled

echo ">>> 2. Disabling Firewalld"
systemctl stop firewalld
systemctl disable firewalld

echo ">>> 3. Adjusting DNS Configuration"
cp /etc/resolv.conf{,.orig}
cat << 'EOF' > "/etc/resolv.conf"
domain cloud.company.com
search cloud.company.com company.com
options timeout:2
nameserver 10.1.1.2
nameserver 10.1.1.1
EOF

echo ">>> 4. Adjusting Yum Source to local"
mv /etc/yum.repos.d/CentOS-Base.repo{,.orig}
curl -fsSL http://yum.cloud.company.com/CentOS-Base.repo -o /etc/yum.repos.d/CentOS-Base.repo

echo ">>> 5. Enabling NTP"
yum -y install chrony
cp /etc/chrony.conf{,.orig}
cat << 'EOF' > "/etc/chrony.conf"
server ntp.cloud.company.com iburst
server 10.1.1.2 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF
systemctl enable chronyd
systemctl start chronyd
timedatectl set-ntp true
timedatectl set-local-rtc true

echo ">>> 6. Updating hostname"
HOSTNAME=$(hostname -s)
hostnamectl set-hostname ${HOSTNAME}.cloud.company.com

echo ">>> 7. Disable rsyslog"
systemctl disable rsyslog

sync
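A note on the backup idiom used throughout the script: cp file{,.orig} relies on bash brace expansion, which turns it into cp file file.orig. The sketch below demonstrates the backup-then-edit pattern on a scratch file instead of the real /etc/sysconfig/selinux (paths and contents are illustrative only):

```shell
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

# `cp file{,.orig}` expands to `cp file file.orig`: a one-line backup.
workdir=$(mktemp -d)
printf 'SELINUX=enforcing\n' > "$workdir/selinux"

cp "$workdir/selinux"{,.orig}                        # keep a pristine copy
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$workdir/selinux"

after=$(cat "$workdir/selinux")
backup=$(cat "$workdir/selinux.orig")
echo "edited: $after / backup: $backup"
```

One caveat: the anchored pattern ^SELINUX=.* used here also catches a host already set to permissive, which the script's narrower s/SELINUX=enforcing/.../ substitution would silently skip.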

Part 2: Installing Docker

Run the following commands on every host. They install docker-ce-18.06.2 and then configure Docker to use the overlay2 storage driver. See "CRI installation".

echo ">>> Install Docker"
yum install -y docker-ce-18.06.2.ce
systemctl enable docker

echo ">>> Customizing Docker"
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl daemon-reload
systemctl start docker
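A malformed /etc/docker/daemon.json prevents dockerd from starting at all, so it is worth validating the file as JSON before restarting the daemon. A minimal sketch, run here against a scratch copy of the config (assumes python3 is on the host; jq would do just as well):

```shell
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

# Validate daemon.json syntax before handing it to dockerd. A scratch
# file stands in for /etc/docker/daemon.json here.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF

if python3 -m json.tool "$conf" > /dev/null 2>&1; then
  verdict="valid"
else
  verdict="invalid"
fi
echo "daemon.json is $verdict JSON"
```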

Part 3: Installing the Core Kubernetes Components

We use the kubeadm tool to install Kubernetes; this part follows the official guide "Creating a single master cluster with kubeadm".

3.1 Installing the master node

First, install kubelet, kubeadm, and kubectl via yum:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet

Then configure the kernel parameters so that bridged traffic is visible to iptables:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
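One caveat from the official kubeadm setup notes: the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, and kubeadm's preflight checks will complain if they are missing. A hedged addition that loads the module and persists it across reboots (the modules-load.d path follows the standard systemd convention):

```shell
modprobe br_netfilter
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sysctl --system
```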

Next, prepare the images for the Kubernetes components; the list of required images can be obtained with the kubeadm config images list command.
Since our intranet already runs a Docker registry built on Harbor, I copied the images into that registry, and the Master node then pulls the Kubernetes images from this local registry. The script:

images=(kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10 coredns:1.3.1)
for imageName in "${images[@]}"; do
  docker pull harbor.cloud.company.com/mirror/$imageName
  docker tag harbor.cloud.company.com/mirror/$imageName k8s.gcr.io/$imageName
  docker rmi harbor.cloud.company.com/mirror/$imageName
done
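The retag loop above is pure name rewriting: strip the internal mirror prefix and put the k8s.gcr.io prefix kubeadm expects in its place. The mapping itself can be checked without docker at all (registry names match the script above):

```shell
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

# Map a mirrored image reference back to its canonical k8s.gcr.io name,
# mirroring what the docker pull/tag/rmi loop does.
mirror="harbor.cloud.company.com/mirror"
canonical_name() {
  local src="$1"
  echo "k8s.gcr.io/${src#"$mirror"/}"
}

canonical_name "$mirror/kube-apiserver:v1.14.1"   # k8s.gcr.io/kube-apiserver:v1.14.1
result=$(canonical_name "$mirror/etcd:3.3.10")
echo "$result"
```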

At this point the Master node has the kubelet, kubeadm, and kubectl tools ready and the images cached locally, so kubeadm can now deploy Kubernetes on the Master node. It takes just one line:

kubeadm init  --pod-network-cidr=10.244.0.0/16

It prints a log like the following:

I0424 16:01:53.960348   23668 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0424 16:01:53.960403   23668 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master.cloud.company.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.*.*.*]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master.cloud.company.com localhost] and IPs [10.*.*.* 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master.cloud.company.com localhost] and IPs [10.*.*.* 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.503121 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master.cloud.company.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master.cloud.company.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: yci*****************ttd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.*.*.*:6443 --token yci*****************ttd \
    --discovery-token-ca-cert-hash sha256:eba**********************************d5f 

3.2 Installing the network plugin

Master 节点执行完 kubeadm init 后,还需要安装网络插件,我采用的是官方推荐的 flannel 作为 Kubernetes 集群的网络插件,也是一句话:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

If you are in an isolated intranet like mine, first copy the flannel image into the local registry and then distribute it to all nodes:

docker pull harbor.cloud.company.com/mirror/flannel:v0.11.0-amd64
docker tag harbor.cloud.company.com/mirror/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi harbor.cloud.company.com/mirror/flannel:v0.11.0-amd64

3.3 Installing worker nodes

First, preparing a node is much the same as preparing the master: essentially everything up to the first half of section 3.1 applies. There are only two differences:

  1. Do not run kubeadm init 😓
  2. Fewer images are needed: only kube-proxy, pause, and flannel

images=(kube-proxy:v1.14.1 pause:3.1)
for imageName in "${images[@]}"; do
  docker pull harbor.cloud.company.com/mirror/$imageName
  docker tag harbor.cloud.company.com/mirror/$imageName k8s.gcr.io/$imageName
  docker rmi harbor.cloud.company.com/mirror/$imageName
done

Once a node has the kubelet, kubeadm, and kubectl tools plus the kube-proxy, pause, and flannel images, it can be deployed, again with a single line, namely the command printed at the end of section 3.1:

kubeadm join 10.*.*.*:6443 --token yci*****************ttd \
    --discovery-token-ca-cert-hash sha256:eba**********************************d5f 

Once this command finishes, kubeadm has set up the node and joined it to the cluster.
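After joining, the new node should appear in kubectl get nodes on the master, moving from NotReady to Ready once flannel is running on it. A small sketch for spotting stragglers; it parses captured sample output here so it stays self-contained (swap the sample for a live kubectl get nodes call in the cluster; hostnames are illustrative):

```shell
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

# List nodes that are not yet Ready from `kubectl get nodes` output.
# Captured sample output stands in for the live command here.
nodes_output='NAME                       STATUS     ROLES    AGE   VERSION
master.cloud.company.com   Ready      master   15m   v1.14.1
node1.cloud.company.com    Ready      <none>   3m    v1.14.1
node2.cloud.company.com    NotReady   <none>   20s   v1.14.1'

not_ready=$(echo "$nodes_output" | awk 'NR > 1 && $2 != "Ready" { print $1 }')
echo "Not Ready: ${not_ready:-none}"
```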
