Installing a Kubernetes/Docker Cluster on CentOS 7 with kubeadm

Installation environment:
Ideally a typical setup uses three machines, but because of host performance constraints this install uses only two virtual machines.

  • 3 CentOS 7 Servers
    192.168.59.192 k8s-master (2 cores)
    192.168.59.193 node01
    192.168.59.194 node02 (not set up here; add it if you have the capacity)
  • Root privileges

The environment of a single CentOS host:

[root@centosk8s ~]# uname -a
Linux k8s-master 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@centosk8s ~]# grep 'physical id' /proc/cpuinfo | sort -u | wc -l
2

Installation steps:

  • Install Kubernetes
  • Initialize the Kubernetes cluster
  • Add cluster nodes
  • Test - create a Pod

1. Install Kubernetes

These steps must be run on both the master and the node servers.

1.1. Configure Hosts

[root@centosk8s ~]# vim /etc/hosts
192.168.59.192 k8s-master
192.168.59.193 node01
hostnamectl set-hostname k8s-master

Run the hostnamectl command above on the master; on the node, set the hostname to node01 instead.

1.2. Disable SELinux

This installation does not cover SELinux configuration for Docker, so we disable SELinux.
Also disable the firewall:

systemctl stop firewalld && systemctl disable firewalld
[root@centosk8s ~]# setenforce 0
[root@centosk8s ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

1.3. Enable br_netfilter Kernel Module

Installing Kubernetes requires the br_netfilter kernel module. Enabling it lets iptables filter and port-forward traffic crossing the bridge, so that Kubernetes pods across the cluster can communicate with one another.

Run the following commands:

[root@centosk8s ~]# modprobe br_netfilter
[root@centosk8s ~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@centosk8s ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
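
Note that the echo above does not survive a reboot. A minimal sketch of making both the module load and the sysctl persistent (the drop-in file names here are our own choosing):

echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system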

1.4. Disable SWAP

The kubelet will not start with swap enabled (its default behavior), so turn swap off and comment out the swap entry in /etc/fstab so the change survives reboots:

[root@centosk8s ~]# swapoff -a
[root@centosk8s ~]# vim /etc/fstab
[root@centosk8s ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Feb 18 15:40:16 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=d09e99de-a101-4c19-bea2-dfac60ae2e7d /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
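
Rather than editing /etc/fstab by hand, the swap entry can be commented out with sed; a sketch that assumes the entry contains 'swap' as a whitespace-delimited field, as in the fstab above:

swapoff -a
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab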

1.5. Install Docker CE

yum install -y yum-utils device-mapper-persistent-data lvm2

[root@centosk8s ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
yum install -y docker-ce
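
The kubeadm preflight check later in this guide warns that Docker 18.09 is not on the validated list (18.06 is the latest validated version). If you prefer to pin the validated release, something along these lines should work; the exact version string is an assumption, so check it against the repo first:

yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.3.ce-3.el7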

1.6. 安装 Kubernetes

Add the Kubernetes repository. (Skip this step inside mainland China; packages.cloud.google.com is not reachable there.)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Inside China, use the following instead. First replace CentOS-Base with the Aliyun mirror:

[root@centosk8s ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
[root@centosk8s ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2019-03-20 09:28:46--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 180.163.155.8, 101.227.0.139, 101.227.0.133, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|180.163.155.8|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

100%[=======================================================================================================================================================>] 2,523       --.-K/s   in 0.001s

2019-03-20 09:28:46 (2.65 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2523/2523]

Then configure kubernetes.repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
      http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

The executed result:

[root@centosk8s ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
>       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@centosk8s ~]# yum makecache
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                                                                                                                                                      | 3.6 kB  00:00:00
docker-ce-stable                                                                                                                                                          | 3.5 kB  00:00:00
extras                                                                                                                                                                    | 3.4 kB  00:00:00
kubernetes/signature                                                                                                                                                      |  454 B  00:00:00
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
kubernetes/signature                                                                                                                                                      | 1.4 kB  00:00:41 !!!
updates                                                                                                                                                                   | 3.4 kB  00:00:00
(1/3): kubernetes/filelists                                                                                                                                               |  16 kB  00:00:02
(2/3): kubernetes/primary                                                                                                                                                 |  45 kB  00:00:01
(3/3): kubernetes/other                                                                                                                                                   |  30 kB  00:00:00
kubernetes                                                                                                                                                                               323/323
kubernetes                                                                                                                                                                               323/323
kubernetes                                                                                                                                                                               323/323
Metadata Cache Created

Install the Kubernetes packages kubeadm, kubelet, and kubectl:

yum install -y kubelet kubeadm kubectl
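
This installs the latest packages (v1.13.4 at the time of writing). To reproduce this guide exactly at a later date, the versions can be pinned; the package naming below is an assumption based on the usual el7 conventions:

yum install -y kubelet-1.13.4 kubeadm-1.13.4 kubectl-1.13.4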

After the installation completes, reboot the server:

reboot

Then start and enable the docker and kubelet services:

systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet

1.7. Change the cgroup Driver

Make sure docker-ce and Kubernetes use the same cgroup driver.

First check Docker's cgroup driver:

[root@k8sminion ~]# docker info | grep -i cgroup
Cgroup Driver: cgroupfs

You can see that 'cgroupfs' is the cgroup driver in use.

Change the kubelet's cgroup driver to 'cgroupfs' to match:

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload systemd and restart the kubelet service:

systemctl daemon-reload
systemctl restart kubelet

2. Initialize the Kubernetes Cluster

In this step we initialize the Kubernetes master configuration.

Log in to 'k8s-master' and create the Kubernetes master with the commands below.

kubeadm is the Kubernetes cluster installation tool; it can stand up a cluster quickly. By default 'kubeadm init' pulls its images from k8s.gcr.io, which cannot be reached directly from mainland China, so we relay the images through docker.io/mirrorgooglecontainers (https://hub.docker.com/u/mirrorgooglecontainers).

List the images kubeadm will use:

[root@centosk8s ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

Pull each image and retag it for k8s.gcr.io:

[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-apiserver
3d80316e96d7: Pull complete
27ea5b112863: Pull complete
Digest: sha256:b205bb95ca597510be7785f65c15123830c2b0978af9abf1be60d67ec49573ff
Status: Downloaded newer image for mirrorgooglecontainers/kube-apiserver:v1.13.4
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.4 k8s.gcr.io/kube-apiserver:v1.13.4
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-controller-manager
3d80316e96d7: Already exists
cdd81260a26d: Pull complete
Digest: sha256:2d977f0ea449497deb35478ea59b8637bb478cdda42f6c01b09431b77d61af49
Status: Downloaded newer image for mirrorgooglecontainers/kube-controller-manager:v1.13.4
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.4 k8s.gcr.io/kube-controller-manager:v1.13.4
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-scheduler
3d80316e96d7: Already exists
0f2f7ad628c3: Pull complete
Digest: sha256:09bd0a85d002b2f2570b870f672c80c5a05a30e108b976efe279f0fc67a004b3
Status: Downloaded newer image for mirrorgooglecontainers/kube-scheduler:v1.13.4
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.4 k8s.gcr.io/kube-scheduler:v1.13.4
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-proxy
3d80316e96d7: Already exists
09263547f210: Pull complete
59c4a3c9440d: Pull complete
Digest: sha256:244282d1be8d814b8ea70f6e4890d0031b00a148f2d3d4953e062fb46da229c4
Status: Downloaded newer image for mirrorgooglecontainers/kube-proxy:v1.13.4
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.13.4 k8s.gcr.io/kube-proxy:v1.13.4
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/pause:3.1
3.1: Pulling from mirrorgooglecontainers/pause
67ddbfb20a22: Pull complete
Digest: sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
Status: Downloaded newer image for mirrorgooglecontainers/pause:3.1
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/etcd:3.2.24
3.2.24: Pulling from mirrorgooglecontainers/etcd
019658bedd5c: Pull complete
c4267897bb00: Pull complete
c5b72c728005: Pull complete
Digest: sha256:08b3afd3485fc29e78b28d05b434d2524f9bbfd8dec7464c396e2679541c91fc
Status: Downloaded newer image for mirrorgooglecontainers/etcd:3.2.24
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
[root@centosk8s ~]# docker pull docker.io/coredns/coredns:1.2.6
1.2.6: Pulling from coredns/coredns
2796eccf0de2: Pull complete
6ad5128a7d32: Pull complete
Digest: sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51
Status: Downloaded newer image for coredns/coredns:1.2.6
[root@centosk8s ~]# docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
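
The pull-and-tag sequence above can be scripted. A minimal sketch, assuming the image list and versions printed by 'kubeadm config images list' earlier:

for image in kube-apiserver:v1.13.4 kube-controller-manager:v1.13.4 \
             kube-scheduler:v1.13.4 kube-proxy:v1.13.4 pause:3.1 etcd:3.2.24; do
    docker pull docker.io/mirrorgooglecontainers/${image}
    docker tag docker.io/mirrorgooglecontainers/${image} k8s.gcr.io/${image}
done
# coredns lives under its own Docker Hub organization, so it is handled separately
docker pull docker.io/coredns/coredns:1.2.6
docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6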

Initialize the cluster:

kubeadm init --apiserver-advertise-address=192.168.59.192 --pod-network-cidr=10.18.0.0/16

Notes:

--apiserver-advertise-address determines which IP address Kubernetes should advertise its API server on.

--pod-network-cidr specifies the IP address range for the pod network. We are using the 'flannel' virtual network; if you want a different pod network such as weave-net or calico, change the address range accordingly. (Note that the stock kube-flannel.yml used below configures 10.244.0.0/16 as the pod network; since we pass a different CIDR here, strictly the net-conf.json in that manifest should be edited to match.)

When initialization completes, the output looks like this:

[root@centosk8s ~]# kubeadm init --apiserver-advertise-address=192.168.59.192 --pod-network-cidr=10.18.0.0/16
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.59.192 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.59.192 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.59.192]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.003836 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nynqmr.zjacu8opmi8zb1xb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.59.192:6443 --token nynqmr.zjacu8opmi8zb1xb --discovery-token-ca-cert-hash sha256:c15398d40a83ae21e65c8ca8c35d8044967af08322f7a8380b1591b397481959

Note:

Copy the 'kubeadm join ... ... ...' command into a text editor; it will be used later to register nodes with the Kubernetes cluster.

To start using Kubernetes, run the commands below.

Create the '.kube' config directory and copy 'admin.conf' into it:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
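
Since we are logged in as root here, an alternative for the current shell is to point kubectl straight at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf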

Next, deploy the flannel network:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The result:

[root@centosk8s ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

The flannel network is now deployed to the Kubernetes cluster.

Wait a few minutes, then check the Kubernetes nodes and pods:

kubectl get nodes
kubectl get pods --all-namespaces
[root@centosk8s ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   7m10s   v1.13.4
[root@centosk8s ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-pdhjg             1/1     Running   0          7m28s
kube-system   coredns-86c58d9df4-qhngt             1/1     Running   0          7m28s
kube-system   etcd-k8s-master                      1/1     Running   0          6m37s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          6m51s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          6m32s
kube-system   kube-flannel-ds-amd64-lqh6h          1/1     Running   0          117s
kube-system   kube-proxy-vx62g                     1/1     Running   0          7m28s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          6m41s

You should see the 'k8s-master' node with status 'Ready', along with all of the cluster's pods, including the 'kube-flannel-ds' pods that provide the pod network.

Make sure all kube-system pods are in the 'Running' state.
The Kubernetes master is now initialized and configured.

3. Add Cluster Nodes

In this step we join a node to the 'k8s' cluster. As on the master, first pull the required images through the mirror and retag them (the loop script from section 2 works here as well):

[root@k8sminion ~]# docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-apiserver
3d80316e96d7: Pull complete
27ea5b112863: Pull complete
Digest: sha256:b205bb95ca597510be7785f65c15123830c2b0978af9abf1be60d67ec49573ff
Status: Downloaded newer image for mirrorgooglecontainers/kube-apiserver:v1.13.4
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.4 k8s.gcr.io/kube-apiserver:v1.13.4
[root@k8sminion ~]#  docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-controller-manager
3d80316e96d7: Already exists
cdd81260a26d: Pull complete
Digest: sha256:2d977f0ea449497deb35478ea59b8637bb478cdda42f6c01b09431b77d61af49
Status: Downloaded newer image for mirrorgooglecontainers/kube-controller-manager:v1.13.4
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.4 k8s.gcr.io/kube-controller-manager:v1.13.4
[root@k8sminion ~]# docker pull docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-scheduler
3d80316e96d7: Already exists
0f2f7ad628c3: Pull complete
Digest: sha256:09bd0a85d002b2f2570b870f672c80c5a05a30e108b976efe279f0fc67a004b3
Status: Downloaded newer image for mirrorgooglecontainers/kube-scheduler:v1.13.4
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.4 k8s.gcr.io/kube-scheduler:v1.13.4
[root@k8sminion ~]# docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-proxy
3d80316e96d7: Already exists
09263547f210: Pull complete
59c4a3c9440d: Pull complete
Digest: sha256:244282d1be8d814b8ea70f6e4890d0031b00a148f2d3d4953e062fb46da229c4
Status: Downloaded newer image for mirrorgooglecontainers/kube-proxy:v1.13.4
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.13.4 k8s.gcr.io/kube-proxy:v1.13.4
[root@k8sminion ~]#  docker pull docker.io/mirrorgooglecontainers/pause:3.1
3.1: Pulling from mirrorgooglecontainers/pause
67ddbfb20a22: Pull complete
Digest: sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
Status: Downloaded newer image for mirrorgooglecontainers/pause:3.1
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
[root@k8sminion ~]# docker pull docker.io/mirrorgooglecontainers/etcd:3.2.24
3.2.24: Pulling from mirrorgooglecontainers/etcd
019658bedd5c: Pull complete
c4267897bb00: Pull complete
c5b72c728005: Pull complete
Digest: sha256:08b3afd3485fc29e78b28d05b434d2524f9bbfd8dec7464c396e2679541c91fc
Status: Downloaded newer image for mirrorgooglecontainers/etcd:3.2.24
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
[root@k8sminion ~]# docker pull docker.io/coredns/coredns:1.2.6
1.2.6: Pulling from coredns/coredns
2796eccf0de2: Pull complete
6ad5128a7d32: Pull complete
Digest: sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51
Status: Downloaded newer image for coredns/coredns:1.2.6
[root@k8sminion ~]# docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

Then log in to node01 and run the join command:

kubeadm join 192.168.59.192:6443 --token nynqmr.zjacu8opmi8zb1xb --discovery-token-ca-cert-hash sha256:c15398d40a83ae21e65c8ca8c35d8044967af08322f7a8380b1591b397481959
[root@k8sminion ~]# kubeadm join 192.168.59.192:6443 --token nynqmr.zjacu8opmi8zb1xb --discovery-token-ca-cert-hash sha256:c15398d40a83ae21e65c8ca8c35d8044967af08322f7a8380b1591b397481959
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.59.192:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.59.192:6443"
[discovery] Requesting info from "https://192.168.59.192:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.59.192:6443"
[discovery] Successfully established connection with API Server "192.168.59.192:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
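
Bootstrap tokens expire after 24 hours by default. If you join a node later than that, generate a fresh join command on the master:

kubeadm token create --print-join-command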

Wait a few minutes, then check the node and pod status on 'k8s-master':

kubectl get nodes
kubectl get pods --all-namespaces

node01 has now joined the cluster. It shows 'NotReady' at first and switches to 'Ready' once its flannel pod is up:

[root@centosk8s ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   24m    v1.13.4
node01       NotReady   <none>   104s   v1.13.4
[root@centosk8s ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   31m     v1.13.4
node01       Ready    <none>   8m50s   v1.13.4
[root@centosk8s ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-pdhjg             1/1     Running   0          31m
kube-system   coredns-86c58d9df4-qhngt             1/1     Running   0          31m
kube-system   etcd-k8s-master                      1/1     Running   0          31m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          31m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          30m
kube-system   kube-flannel-ds-amd64-dl262          1/1     Running   0          9m31s
kube-system   kube-flannel-ds-amd64-lqh6h          1/1     Running   0          26m
kube-system   kube-proxy-vx62g                     1/1     Running   0          31m
kube-system   kube-proxy-x5gnm                     1/1     Running   0          9m31s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          31m

If a Kubernetes node stays 'NotReady', inspect the kubelet logs on that node to find the problem:

journalctl -f -u kubelet
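
Two other checks that are often useful, run from the master:

kubectl describe node node01
kubectl get pods -n kube-system -o wide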

4. Test - Create a Pod

In this step we deploy an Nginx pod on the Kubernetes cluster. A pod is a group of one or more containers that share storage and networking in Kubernetes.

Log in to the 'k8s-master' server and create a deployment named 'nginx':

kubectl create deployment nginx --image=nginx
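
For reference, the same deployment expressed declaratively; a sketch equivalent to the imperative command above:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF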

Get the nginx deployment information with the command below:

[root@centosk8s ~]# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Wed, 20 Mar 2019 14:28:15 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-5c7588df (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  42s   deployment-controller  Scaled up replica set nginx-5c7588df to 1

Next, expose the nginx pod by creating a new NodePort service.

Run the kubectl command:

kubectl create service nodeport nginx --tcp=80:80
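
An equivalent is to expose the deployment directly. The 'kubectl create service nodeport' form works here only because the generated selector (app=nginx) happens to match the deployment's pod labels:

kubectl expose deployment nginx --type=NodePort --port=80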

Make sure there are no error messages. Now check the nginx service IP and port:

kubectl get pods
kubectl get svc
[root@centosk8s ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
nginx-5c7588df-mdnbf   1/1     Running   0          2m10s
[root@centosk8s ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        57m
nginx        NodePort    10.98.119.101   <none>        80:30039/TCP   45s

The nginx pod is now reachable at the cluster IP '10.98.119.101' on port 80, and at the node's IP address '192.168.59.193' on NodePort '30039'.

Run curl from the 'k8s-master' server:

curl node01:30039
[root@centosk8s ~]# curl node01:30039
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@centosk8s ~]#

The nginx pod is now deployed on the Kubernetes cluster and reachable over the network.

Open the following address in a browser:
http://192.168.59.192:30039/


That's it; the installation completed successfully!
