Deploying Kubernetes with kubeadm

Installation steps:

The OS is CentOS 7.4.

1. Disable the swap file

Kubernetes requires swap to be disabled. Edit /etc/fstab, comment out the line that references swap, and save; then run sudo swapoff -a so the change takes effect immediately (the fstab edit keeps swap off across reboots).
If you are puzzled about why swap must be disabled, see the GitHub issue: Kubelet/Kubernetes should work with Swap Enabled.
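A minimal sketch of the whole step (the sed expression is an assumption about a typical fstab layout; inspect the file before relying on it):

# Comment out any uncommented fstab line that mounts swap, then turn swap off now
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab
swapoff -a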

2. Configure /etc/hosts and system prerequisites

cat >> /etc/hosts << EOF
192.168.56.101 master
192.168.56.102 node1
192.168.56.103 node2
EOF
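For these names to resolve consistently, each machine's hostname should match its /etc/hosts entry; a hedged example (run the matching command on each host):

hostnamectl set-hostname master    # on 192.168.56.101; use node1/node2 on the other machines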

(1) Turn off swap

swapoff -a

(2) Disable SELinux by changing the SELINUX setting

#setenforce 0
#vi /etc/sysconfig/selinux
SELINUX=disabled
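The same change can be made non-interactively (a sketch, assuming the standard CentOS 7 config path and that the current value is "enforcing"):

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux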

(3) Configure the bridge/iptables sysctls

#vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0

Apply the settings:
sysctl --system
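On CentOS 7 the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so if sysctl --system reports them as missing, load the module and re-apply; a quick check:

modprobe br_netfilter
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables   # should print "... = 1"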


3. Install the required yum packages

First configure the Kubernetes repo, using the Aliyun mirror:

cat  > /etc/yum.repos.d/kubernetes.repo   <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

Install the packages:

yum install kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1 kubernetes-cni
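A quick sanity check that the pinned versions landed (the kubelet RPM should pull in kubernetes-cni as a dependency even if you omit it):

rpm -q kubelet kubeadm kubectl kubernetes-cni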

4. Download the images Kubernetes depends on

Note: this step is critical. kubeadm's initialization depends on these images, and they live on Google's registry, which is unreachable from mainland China; even if you got through the previous step, you may well get stuck here. Pull the images into the local Docker cache in advance or kubeadm will not proceed. The image lists for the master and worker nodes are given below.
A community maintainer syncs the gcr.io images daily to https://github.com/anjia0532/gcr.io_mirror, so any gcr.io image can be fetched with a script like the following.

Service components required on the master node:

$ kubeadm config images list
k8s.gcr.io/kube-apiserver-amd64:v1.11.4
k8s.gcr.io/kube-controller-manager-amd64:v1.11.4
k8s.gcr.io/kube-scheduler-amd64:v1.11.4
k8s.gcr.io/kube-proxy-amd64:v1.11.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3


vim pullimages.sh

#!/bin/bash
images=(
kube-apiserver-amd64:v1.11.4
kube-controller-manager-amd64:v1.11.4
kube-scheduler-amd64:v1.11.4
kube-proxy-amd64:v1.11.4
pause:3.1
etcd-amd64:3.2.18
coredns:1.1.3
)

for imageName in "${images[@]}"; do
  # pull from the mirror, retag as k8s.gcr.io, then drop the mirror tag
  docker pull anjia0532/google-containers.$imageName
  docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
  docker rmi anjia0532/google-containers.$imageName
done

sh pullimages.sh
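After the script finishes, all seven images should be tagged locally under k8s.gcr.io; verify with:

docker images | grep k8s.gcr.io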

Alternatively, pull from one of these registries (replace * with the image name):
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.12.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/*

Kubernetes will not run with swap enabled; if you keep swap on, tell the kubelet to ignore the check:

vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"
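For the flag to take effect, restart the kubelet (it will keep restarting until kubeadm init writes its configuration, which is expected at this stage):

systemctl daemon-reload
systemctl restart kubelet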

5. Install Kubernetes v1.11.4 with kubeadm init

Write kubeadm.yaml:

[root@master ~]# cat kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "v1.11.4"
[root@master ~]# kubeadm init --config kubeadm.yaml
[init] using Kubernetes version: v1.11.4
[preflight] running pre-flight checks
I1113 23:10:20.954974    3516 kernel_validator.go:81] Validating kernel version
I1113 23:10:20.955095    3516 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 41.501829 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: jhla37.mllhf316c5q7b9lk
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.2.15:6443 --token jhla37.mllhf316c5q7b9lk --discovery-token-ca-cert-hash sha256:f760528cd6221deee37025376101c58d493b745ef3ef4fc9ee996106657e0095

Set up the kubeconfig credentials.
For root:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf"  >> /etc/profile
source /etc/profile

For a regular user:

mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.bash_profile
source $HOME/.bash_profile
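Either way, verify that kubectl can reach the cluster:

kubectl cluster-info   # should report the API server at https://<master-ip>:6443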

If the token has expired (tokens expire after 24 hours by default), create a new one with the following command:

[stadmin@master ~]$ kubeadm token create
j2oyxt.rb8ei1avfmkltnls

Existing tokens can be listed with kubeadm token list:

[root@master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
jhla37.mllhf316c5q7b9lk   22h   2018-11-14T23:11:09+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

Get the sha256 hash of the CA certificate's public key (it matches the value printed by kubeadm init):

[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
cc16dc7e829c136e45db13cbd18753a938594e3894f9f399ff64bc50243328be

Regenerate the command for joining the cluster:

[root@node1 ~]# kubeadm join 192.168.56.101:6443 --token j2oyxt.rb8ei1avfmkltnls --discovery-token-ca-cert-hash sha256:cc16dc7e829c136e45db13cbd18753a938594e3894f9f399ff64bc50243328be
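Instead of assembling the join command by hand, newer kubeadm releases can print the complete command, fresh token included:

kubeadm token create --print-join-command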

Kubernetes is now deployed; check its status:

kubectl get cs
kubectl get nodes
kubectl describe node master
[stadmin@master ~]$ kubectl get pods -n kube-system
NAME                             READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-m8nd8         0/1       Pending   0          21m
coredns-78fcdf6894-vq884         0/1       Pending   0          21m
etcd-master                      1/1       Running   0          3s
kube-apiserver-master            1/1       Running   0          3s
kube-controller-manager-master   1/1       Running   0          3s
kube-proxy-pp2lk                 1/1       Running   0          21m
kube-scheduler-master            1/1       Running   0          3s

Finally, install the network plugin on the master node:

$ kubectl apply -f https://git.io/weave-kube-1.6

Or download the manifest first, then apply it:

$ kubectl apply -f weave-kube-1.6.yaml
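Once the plugin pods come up, the Pending coredns pods above should move to Running; a quick check (assuming weave's standard name=weave-net pod label):

kubectl get pods -n kube-system -l name=weave-net
kubectl get pods -n kube-system | grep coredns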

That completes the master deployment, but I then discovered that the API server had advertised the IP of the NAT interface, which cannot be used for communication between cluster nodes, so I had to reinstall and make the API server use the IP of the Host-Only interface.

The official documentation describes this as follows:
The Kubernetes API Server process serves the Kubernetes API. Normally a single instance runs on the kubernetes-master node.
By default, the Kubernetes API Server serves HTTP on two ports:
1. Localhost port
Serves HTTP
Default port 8080; change with --insecure-port
Default IP is localhost; change with --insecure-bind-address
No authentication or authorization checks on HTTP
Access is protected by requiring access to the host
2. Secure port
Default port 6443; change with --secure-port
Default IP is the first non-localhost network interface; change with --bind-address
Serves HTTPS; set the certificate and key with --tls-cert-file and --tls-private-key-file
Authentication via token file or client certificates
Uses policy-based authorization
3. Removed: the read-only port
For security reasons the read-only port has been removed; use Service Accounts instead.

So the original Kubernetes setup has to be reconfigured. Modify kubeadm.yaml and add:

api:
  advertiseAddress: 192.168.56.101

If you configure with command-line flags instead, add --apiserver-advertise-address=192.168.56.101.

[root@master ~]# cat kubeadm.yaml 
api: 
  advertiseAddress: 192.168.56.101
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "v1.11.4"

Tear down the existing Kubernetes cluster:

[root@master ~]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

Finally, run init again and reinstall the network plugin:

[root@master ~]# kubeadm init --config kubeadm.yaml
[init] using Kubernetes version: v1.11.4
[preflight] running pre-flight checks
I1114 23:27:38.266833    6682 kernel_validator.go:81] Validating kernel version
I1114 23:27:38.266938    6682 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.502879 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: k3uv7z.nbu8jzxdfl3gs4ui
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.56.101:6443 --token k3uv7z.nbu8jzxdfl3gs4ui --discovery-token-ca-cert-hash sha256:cc16dc7e829c136e45db13cbd18753a938594e3894f9f399ff64bc50243328be

kubectl apply -f https://git.io/weave-kube-1.6

Or download the manifest first, then apply it:

$ kubectl apply -f weave-kube-1.6.yaml

Deploying the worker nodes:

Worker nodes need far fewer images; only the following are required:

[root@node1 ~]# cat pullimages.sh 
#!/bin/bash
images=(
kube-proxy-amd64:v1.11.4
pause:3.1
coredns:1.1.3
)

for imageName in "${images[@]}"; do
  # pull from the mirror, retag as k8s.gcr.io, then drop the mirror tag
  docker pull anjia0532/google-containers.$imageName
  docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
  docker rmi anjia0532/google-containers.$imageName
done
[root@node1 ~]# kubeadm join 192.168.56.101:6443 --token k3uv7z.nbu8jzxdfl3gs4ui --discovery-token-ca-cert-hash sha256:cc16dc7e829c136e45db13cbd18753a938594e3894f9f399ff64bc50243328be
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I1114 23:37:24.655522    1713 kernel_validator.go:81] Validating kernel version
I1114 23:37:24.655617    1713 kernel_validator.go:96] Validating kernel config
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "192.168.56.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.56.101:6443"
[discovery] Requesting info from "https://192.168.56.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.56.101:6443"
[discovery] Successfully established connection with API Server "192.168.56.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Running kubectl get nodes on the worker node still fails at this point; set up the following kubeconfig environment variable first:

# cp /etc/kubernetes/kubelet.conf $HOME/
# chown $(id -u):$(id -g) $HOME/kubelet.conf
# export KUBECONFIG=$HOME/kubelet.conf

Finally, install the network plugin for the node as well:

$ kubectl apply -f https://git.io/weave-kube-1.6

Or download the manifest first, then apply it:

$ kubectl apply -f weave-kube-1.6.yaml

Remove the taint on the master node so it can be scheduled (needed before installing the storage components):

kubectl taint nodes --all node-role.kubernetes.io/master-
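Verify that the taint is gone:

kubectl describe node master | grep Taints   # should now show <none>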

Deploy the dashboard (visualization component) on the master node:

# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
docker pull anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0
docker tag  anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0   k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker rmi  anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0 

Modify kubernetes-dashboard.yaml so the dashboard can be reached directly and entered with token authentication:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  # Set the Service type to NodePort
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # Map to a port on the VM; NodePorts must fall in the default 30000-32767 range
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f kubernetes-dashboard.yaml
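Confirm the Service picked up the NodePort:

kubectl get svc kubernetes-dashboard -n kube-system   # PORT(S) should show 443:30001/TCP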

Check the installation status:

kubectl get pods -n kube-system | grep dashboard

Start the proxy service:

nohup kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --disable-filter=true &

Get a login token:

kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token

Access the dashboard

It can be reached at any node's IP plus the nodePort we set above:

https://192.168.56.101:30001/

Notes:

List all pods across namespaces:

kubectl get pods --all-namespaces

Describe the pods in kube-system:

kubectl describe pod -n kube-system

Drill into a specific problem:

kubectl describe pod kubernetes-dashboard-767dc7d4d-mg5gw -n kube-system


Logs for pods that fail to start

Pods showing Error, Pending, ImagePullBackOff, or CrashLoopBackOff have all failed to start; the cause needs careful investigation (see also the sketch after this list):

a. Check the system log /var/log/messages
b. kubectl describe pod kube-flannel-ds-2wk55 --namespace=kube-system
c. kubectl logs -f kube-dns-2425271678-37lf7 -n kube-system kubedns
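For a pod stuck in CrashLoopBackOff, the log of the previous (crashed) container is usually the most informative; a sketch using the dashboard pod name from above:

kubectl logs kubernetes-dashboard-767dc7d4d-mg5gw -n kube-system --previous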

Deploy the storage plugin (Rook/Ceph):

kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml

Check the installation status:

kubectl get pods -n rook-ceph-system
kubectl get pods -n rook-ceph


Below is the record of the failed v1.12.2 install; afterwards I switched to the lower 1.11.x version used above.

[root@master containers]# kubeadm init \
    --kubernetes-version=v1.12.2 \
    --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=192.168.56.101
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks

[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

Enable kubelet to start on boot:

systemctl enable kubelet

Start the kubelet service. Because of the firewall, pulling the required images will fail; take the image names from the error messages and pull them from a domestic mirror into the local cache first.

systemctl start kubelet

Use the kubelet flag --fail-swap-on=false to lift the requirement that swap be disabled. Edit /etc/sysconfig/kubelet and add:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

