k8s components: CoreDNS and Dashboard

CoreDNS

The Services we create are not stored in CoreDNS itself but in etcd. When we look up a Service domain name, the query goes to CoreDNS; CoreDNS asks the apiserver, the apiserver fetches the data from etcd and returns it to CoreDNS, and CoreDNS answers the client.

For example, when nginx wants to reach a backend tomcat, nginx first sends the tomcat Service name to CoreDNS for resolution; CoreDNS queries the apiserver, which fetches the corresponding record from etcd and returns it to CoreDNS. Once nginx gets tomcat's IP from CoreDNS, it forwards the client request to tomcat.
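A quick way to see this resolution in action (a sketch, assuming a Service named tomcat in the default namespace and the net-test1 pod used later in this article):

# Resolve the backend the way nginx would; the answer is the Service's
# ClusterIP, served by CoreDNS:
kubectl exec -it net-test1 -- nslookup tomcat.default.svc.qijia.local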


[Figure: pod -> CoreDNS -> apiserver -> etcd resolution flow]

When a pod needs to resolve a domain outside the k8s cluster, it still sends the query to CoreDNS. In CoreDNS we configure a forward to our company's internal DNS server (bind). If the domain is internal to the company, bind resolves it and returns the IP to CoreDNS.
If it is a public internet domain, bind is in turn configured with a forward to public DNS servers.
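A minimal sketch of the bind side, assuming bind is the company resolver (the upstream addresses here are illustrative):

// /etc/bind/named.conf.options -- names bind is not authoritative for
// are forwarded on to public DNS:
options {
    forwarders { 223.5.5.5; 114.114.114.114; };
    forward only;
};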


[Figure: CoreDNS forwarding to the internal bind, which forwards internet domains to public DNS]

When we configure the Service address range, the first IP of the range goes to the apiserver (the kubernetes Service) by default, and the second IP goes to DNS.

root@master-1:/etc/kubeasz/clusters/qijia01# vi hosts 
root@master-1:/etc/kubeasz/clusters/qijia01# pwd
/etc/kubeasz/clusters/qijia01
root@master-1:/etc/kubeasz/clusters/qijia01# cat hosts
.......
.......
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"                                               # service address range; the first IP is assigned to the apiserver, the second to DNS

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"                                               # pod address range. Make sure the service and pod ranges do not conflict with any other service's addresses. If you run multiple data centers, avoid reusing the same range across them: if the ranges collide, two k8s clusters in different environments cannot communicate even after you connect their networks with a dedicated line.
......
......
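On a running cluster you can confirm this convention; the services created later in this article land on exactly these addresses:

kubectl get svc kubernetes -n default      # ClusterIP 10.100.0.1 (apiserver)
kubectl get svc kube-dns -n kube-system    # ClusterIP 10.100.0.2 (CoreDNS)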
Installing CoreDNS from the YAML template

The kubernetes release tarballs ship YAML templates for many add-on components; we can install CoreDNS from one of these templates.
On GitHub, open the kubernetes repository, go to the Releases section, and pick the release matching your cluster version:
https://github.com/kubernetes/kubernetes


Download the following four tarballs; they contain the YAML templates, the binaries, and so on:
kubernetes.tar.gz
kubernetes-client-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz
kubernetes-node-linux-amd64.tar.gz
After extraction, the contents of all four land in the kubernetes/ directory under the current directory.
The CoreDNS template files live in kubernetes/cluster/addons/dns/coredns/:

root@master-1:/data# cd /usr/local/src/
root@master-1:/usr/local/src# ll
total 462524
drwxr-xr-x  2 root root       161 Apr 20 12:08 ./
drwxr-xr-x 10 root root       114 Aug 24  2021 ../
-rw-r--r--  1 root root  29154332 Apr 15 07:49 kubernetes-client-linux-amd64.tar.gz
-rw-r--r--  1 root root 118151109 Apr 15 07:52 kubernetes-node-linux-amd64.tar.gz
-rw-r--r--  1 root root 325784034 Apr 15 08:38 kubernetes-server-linux-amd64.tar.gz
-rw-r--r--  1 root root    525314 Apr 15 06:25 kubernetes.tar.gz
root@master-1:/usr/local/src# tar xf kubernetes.tar.gz 
root@master-1:/usr/local/src# tar xf kubernetes-client-linux-amd64.tar.gz 
root@master-1:/usr/local/src# tar xf kubernetes-node-linux-amd64.tar.gz 
root@master-1:/usr/local/src# tar xf kubernetes-server-linux-amd64.tar.gz 
root@master-1:/usr/local/src# ll
total 462524
drwxr-xr-x  3 root root       179 Apr 20 14:29 ./
drwxr-xr-x 10 root root       114 Aug 24  2021 ../
drwxr-xr-x 10 root root       176 Aug 12  2021 kubernetes/
-rw-r--r--  1 root root  29154332 Apr 15 07:49 kubernetes-client-linux-amd64.tar.gz
-rw-r--r--  1 root root 118151109 Apr 15 07:52 kubernetes-node-linux-amd64.tar.gz
-rw-r--r--  1 root root 325784034 Apr 15 08:38 kubernetes-server-linux-amd64.tar.gz
-rw-r--r--  1 root root    525314 Apr 15 06:25 kubernetes.tar.gz
root@master-1:/usr/local/src# ll kubernetes/cluster/addons/dns/coredns/
total 36
drwxr-xr-x 2 root root  147 Aug 12  2021 ./
drwxr-xr-x 5 root root   71 Aug 12  2021 ../
-rw-r--r-- 1 root root 1075 Aug 12  2021 Makefile
-rw-r--r-- 1 root root 4966 Aug 12  2021 coredns.yaml.base
-rw-r--r-- 1 root root 5016 Aug 12  2021 coredns.yaml.in
-rw-r--r-- 1 root root 5018 Aug 12  2021 coredns.yaml.sed
-rw-r--r-- 1 root root  344 Aug 12  2021 transforms2salt.sed
-rw-r--r-- 1 root root  287 Aug 12  2021 transforms2sed.sed

Check the value to substitute for __DNS__DOMAIN__ in the template; it must be the CLUSTER_DNS_DOMAIN value from /etc/kubeasz/clusters/qijia01/hosts:

root@master-1:/etc/kubeasz/clusters/qijia01# grep -r "CLUSTER_DNS_DOMAIN" /etc/kubeasz/clusters/qijia01/hosts
CLUSTER_DNS_DOMAIN="qijia.local"                                           # domain suffix for created services

Check the CoreDNS address a pod actually uses:

root@master-1:/etc/kubeasz/clusters/qijia01# kubectl exec -it net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# 
[root@net-test1 /]# 
[root@net-test1 /]# cat /etc/resolv.conf 
nameserver 10.100.0.2
search default.svc.qijia.local svc.qijia.local qijia.local
options ndots:5

Replace the k8s.gcr.io/coredns/coredns:v1.8.0 image in the template with coredns/coredns:1.8.0; the k8s.gcr.io registry is Google-hosted and pulls fail from inside China.
The complete YAML file (with the DNS domain, the forward address, the CoreDNS image, the DNS clusterIP, and the memory limit all adjusted) is shown after the sed sketch below.
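One way to apply the substitutions is sed (a sketch; the placeholder names are as found in coredns.yaml.base, so verify them in your release before running):

cd /usr/local/src/kubernetes/cluster/addons/dns/coredns
sed -e 's/__DNS__DOMAIN__/qijia.local/g' \
    -e 's/__DNS__SERVER__/10.100.0.2/g' \
    -e 's/__DNS__MEMORY__LIMIT__/200Mi/g' \
    -e 's#k8s.gcr.io/coredns/coredns:v1.8.0#coredns/coredns:1.8.0#' \
    coredns.yaml.base > /data/yaml/coredns.yaml
# note: the forward upstream (114.114.114.114 below) still needs a manual edit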

root@master-1:/data/yaml# cat coredns.yaml
# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        # replace the DNS domain with the CLUSTER_DNS_DOMAIN value from /etc/kubeasz/clusters/qijia01/hosts, i.e. qijia.local
        kubernetes qijia.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        # forward: upstream resolver for everything else; it can be a local dns server, the company bind, or a public internet DNS
        forward . 114.114.114.114 {
            max_concurrent 1000
        }
        # cache: once CoreDNS resolves a name, it caches the record for 30s
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Created successfully:

root@master-1:/data/yaml# kubectl apply -f coredns.yaml 
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Verify that name resolution works:

root@master-1:/etc/kubeasz/clusters/qijia01# kubectl exec -it net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# 
[root@net-test1 /]# ping www.baidu.com   
PING www.a.shifen.com (110.242.68.3) 56(84) bytes of data.
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=1 ttl=50 time=9.24 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=2 ttl=50 time=9.70 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=3 ttl=50 time=10.2 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 9.248/9.721/10.208/0.392 ms

Change the CoreDNS replica count on the fly: edit the deployment and set replicas to 2.

root@master-1:/data/yaml# kubectl edit deployments.apps coredns -n kube-system 

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernetes.io/cluster-service":"true","kubernetes.io/name":"CoreDNS"},"name":"coredns","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"k8s-app":"kube-dns"}},"strategy":{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"},"template":{"metadata":{"labels":{"k8s-app":"kube-dns"}},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchExpressions":[{"key":"k8s-app","operator":"In","values":["kube-dns"]}]},"topologyKey":"kubernetes.io/hostname"},"weight":100}]}},"containers":[{"args":["-conf","/etc/coredns/Corefile"],"image":"coredns/coredns:1.8.0","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":5,"httpGet":{"path":"/health","port":8080,"scheme":"HTTP"},"initialDelaySeconds":60,"successThreshold":1,"timeoutSeconds":5},"name":"coredns","ports":[{"containerPort":53,"name":"dns","protocol":"UDP"},{"containerPort":53,"name":"dns-tcp","protocol":"TCP"},{"containerPort":9153,"name":"metrics","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/ready","port":8181,"scheme":"HTTP"}},"resources":{"limits":{"memory":"200Mi"},"requests":{"cpu":"100m","memory":"70Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"add":["NET_BIND_SERVICE"],"drop":["all"]},"readOnlyRootFilesystem":true},"volumeMounts":[{"mountPath":"/etc/coredns","name":"config-volume","readOnly":true}]}],"dnsPolicy":"Default","nodeSelector":{"kubernetes.io/os":"linux"},"priorityClassName":"system-cluster-critical","securityContext":{"seccompProfile":{"type":"RuntimeDefault"}},"serviceAccountName":"coredns","tolerations":[{"key":"CriticalAddonsOnly","operator":"Exists"}],"volumes":[{"configMap":{"items":[{"key":"Corefile","path":"Corefile"}],"name":"coredns"},"name":"config-volume"}]}}}}
  creationTimestamp: "2022-04-20T07:16:54Z"
  generation: 1
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
  name: coredns
  namespace: kube-system
  resourceVersion: "124175"
  uid: c1aeab8b-45c0-4453-a1db-6316580da9df
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
    spec:

You can see the second pod starting:

root@master-1:/data/yaml# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-74c56d8f8d-5pmz7                   1/1     Running   0          55s
kube-system   coredns-74c56d8f8d-pxh44                   1/1     Running   0          16m
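The same change can also be made without opening an editor:

kubectl scale deployment coredns -n kube-system --replicas=2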

DNS cache levels

node level: localdns-cache (every node keeps a cache of its own; not recommended with many nodes)
pod level: dnsmasq (every pod keeps a cache of its own; not recommended with many pods)
coredns level: extend the CoreDNS cache TTL (the best option, at the cost of memory; see the sketch after this list)
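To raise the TTL, change the cache line in the Corefile; the reload plugin (enabled in the Corefile above) picks the change up once the kubelet syncs the ConfigMap volume, without a restart:

kubectl -n kube-system edit configmap coredns   # e.g. change "cache 30" to "cache 300"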

Notes on some Corefile plugins

errors: errors go to standard output; this is what kubectl logs -f shows for the coredns pod
health: exposes a liveness endpoint over HTTP at :8080/health (queried by the livenessProbe in the Deployment above)
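Both this endpoint and the ready plugin's :8181/ready can be probed directly against a coredns pod IP from any node (the pod IP below is a placeholder):

curl http://<coredns-pod-ip>:8080/health   # health plugin (liveness)
curl http://<coredns-pod-ip>:8181/ready    # ready plugin (readiness)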



Add a forward: suppose myserver.online is a test domain used internally by the company. When a pod in k8s resolves names under this suffix, I want the query handed to the internal bind server (172.16.16.16:53).
Append a new server block at the bottom of the Corefile:

root@master-1:/data/yaml# vi coredns.yaml 
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes qijia.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . 114.114.114.114 {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
    myserver.online {
        forward . 172.16.16.16:53
    }
---

Delete and recreate (you could also just run kubectl apply directly):

root@master-1:/data/yaml# kubectl delete -f coredns.yaml 
serviceaccount "coredns" deleted
clusterrole.rbac.authorization.k8s.io "system:coredns" deleted
clusterrolebinding.rbac.authorization.k8s.io "system:coredns" deleted
configmap "coredns" deleted
deployment.apps "coredns" deleted
service "kube-dns" deleted
root@master-1:/data/yaml# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS        RESTARTS   AGE
default       net-test1                                  1/1     Running       0          22h
default       net-test2                                  1/1     Running       0          22h
kube-system   calico-kube-controllers-647f956d86-zrjq9   1/1     Running       0          22h
kube-system   calico-node-47phc                          1/1     Running       0          22h
kube-system   calico-node-9ghhw                          1/1     Running       0          22h
kube-system   calico-node-c7stp                          1/1     Running       0          22h
kube-system   calico-node-lcjsx                          1/1     Running       0          22h
kube-system   coredns-74c56d8f8d-s6zks                   1/1     Terminating   5          2m56s
root@master-1:/data/yaml# kubectl apply -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
root@master-1:/data/yaml# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       net-test1                                  1/1     Running   0          22h
default       net-test2                                  1/1     Running   0          22h
kube-system   calico-kube-controllers-647f956d86-zrjq9   1/1     Running   0          22h
kube-system   calico-node-47phc                          1/1     Running   0          22h
kube-system   calico-node-9ghhw                          1/1     Running   0          22h
kube-system   calico-node-c7stp                          1/1     Running   0          22h
kube-system   calico-node-lcjsx                          1/1     Running   0          22h
kube-system   coredns-74c56d8f8d-vds9h                   1/1     Running   0          2s

Test DNS resolution with nslookup. This assumes nslookup is available in net-test1; if it is not, install bind-utils.

root@master-1:/data/yaml# kubectl exec net-test1 nslookup kubernetes.default
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Server:     10.100.0.2
Address:    10.100.0.2#53

Name:   kubernetes.default.svc.qijia.local
Address: 10.100.0.1
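The same check confirms the new forward: a name under myserver.online should now be answered by the internal bind (the hostname below is hypothetical):

kubectl exec net-test1 -- nslookup host1.myserver.online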

Dashboard installation

First check the k8s version, then pick a compatible dashboard release:
https://github.com/kubernetes/dashboard/releases

root@master-1:/data/yaml# kubectl get node
NAME             STATUS                     ROLES    AGE   VERSION
192.168.10.101   Ready,SchedulingDisabled   master   23h   v1.21.0
192.168.10.102   Ready,SchedulingDisabled   master   23h   v1.21.0
192.168.10.104   Ready                      node     23h   v1.21.0
192.168.10.105   Ready                      node     23h   v1.21.0

dashboard v2.4.0 supports k8s 1.21.



The GitHub release provides both the dashboard images and the YAML file.



Download the dashboard YAML file and rename it, so it is obvious later what the file is for:
root@master-1:/data/yaml# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
--2022-04-20 18:08:51--  https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7543 (7.4K) [text/plain]
Saving to: ‘recommended.yaml’

recommended.yaml                                                 100%[========================================================================================================================================================>]   7.37K  --.-KB/s    in 0.001s  

2022-04-20 18:08:51 (10.0 MB/s) - ‘recommended.yaml’ saved [7543/7543]

root@master-1:/data/yaml# ll
total 16
drwxr-xr-x 2 root root   50 Apr 20 18:08 ./
drwxr-xr-x 6 root root  114 Apr 20 14:45 ../
-rw-r--r-- 1 root root 5012 Apr 20 17:15 coredns.yaml
-rw-r--r-- 1 root root 7543 Apr 20 18:08 recommended.yaml
root@master-1:/data/yaml# mv recommended.yaml dashboard-2.4.0.yaml
root@master-1:/data/yaml# ll
total 16
drwxr-xr-x 2 root root   54 Apr 20 18:09 ./
drwxr-xr-x 6 root root  114 Apr 20 14:45 ../
-rw-r--r-- 1 root root 5012 Apr 20 17:15 coredns.yaml
-rw-r--r-- 1 root root 7543 Apr 20 18:08 dashboard-2.4.0.yaml

Create the dashboard pods:

root@master-1:/data/yaml# kubectl apply -f dashboard-2.4.0.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

root@master-1:/data/yaml# kubectl get pod -A
NAMESPACE              NAME                                        READY   STATUS    RESTARTS   AGE
kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-5h8t7   1/1     Running   0          78s
kubernetes-dashboard   kubernetes-dashboard-576cb95f94-mzwpz       1/1     Running   0          78s

Check whether the dashboard services were created:

root@master-1:/data/yaml# kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.100.109.121   <none>        8000/TCP   4m4s
kubernetes-dashboard        ClusterIP   10.100.247.126   <none>        443/TCP    4m5s

Notice that the kubernetes-dashboard Service defaults to type ClusterIP: port 443 is not mapped to any host port, so it is only reachable inside the cluster and cannot be opened in a browser.
Solution: in dashboard-2.4.0.yaml, find the kubernetes-dashboard Service and change its type from the default ClusterIP to NodePort.

root@master-1:/data/yaml# vi dashboard-2.4.0.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # change the type to NodePort (default is ClusterIP)
  ports:
    - port: 443          # the Service port
      targetPort: 8443   # map the Service's 443 to the pod's 8443
      nodePort: 30004    # port exposed on the host; not arbitrary -- it must fall inside the NODE_PORT_RANGE defined in /etc/kubeasz/clusters/qijia01/hosts
  selector:
    k8s-app: kubernetes-dashboard

---

root@master-1:/data/yaml# grep   "NODE_PORT_RANGE" /etc/kubeasz/clusters/qijia01/hosts
NODE_PORT_RANGE="30000-50000"                                              # host port range for NodePort services; customizable

After re-applying, the service now exposes 443 on the node port:

root@master-1:/data/yaml# kubectl apply -f dashboard-2.4.0.yaml 
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard configured
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
Warning: resource secrets/kubernetes-dashboard-key-holder is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged
root@master-1:/data/yaml# kubectl get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.100.0.1       <none>        443/TCP                  24h
kube-system            kube-dns                    ClusterIP   10.100.0.2       <none>        53/UDP,53/TCP,9153/TCP   96m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.100.109.121   <none>        8000/TCP                 35m
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.100.247.126   <none>        443:30004/TCP            35m

Port 30004 is listening on every host (masters and nodes alike), because every host's kube-proxy programs the same Service data, which it gets via the apiserver from etcd.

root@node-1:~# netstat -lntp|grep 30004
tcp        0      0 0.0.0.0:30004           0.0.0.0:*               LISTEN      3157111/kube-proxy  
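Depending on the kube-proxy mode, the rules behind this listener can be inspected on any node (a sketch; run whichever matches your proxy mode):

ipvsadm -Ln | grep -A2 30004                       # ipvs mode
iptables -t nat -nL KUBE-NODEPORTS | grep 30004    # iptables mode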

Now the dashboard can be reached over HTTPS at any master or node IP on port 30004.
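A quick reachability check from a shell (the dashboard certificate is self-signed, hence -k; 192.168.10.104 is one of this cluster's nodes):

curl -k https://192.168.10.104:30004/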


[Figure: the dashboard login page in the browser]

Next we need to create a login account and grant it permissions:

root@master-1:/data/yaml# kubectl apply  -f admin-user.yaml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

root@master-1:/data/yaml# cat admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Look up the new user's token:

root@master-1:/data/yaml# kubectl get secrets -n kubernetes-dashboard 
NAME                               TYPE                                  DATA   AGE
admin-user-token-rchxk             kubernetes.io/service-account-token   3      92s
root@master-1:/data/yaml# kubectl describe secrets admin-user-token-rchxk -n kubernetes-dashboard 
Name:         admin-user-token-rchxk
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 5a7d14f3-32cc-4c2f-8753-d4051a9f73ba

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlBnNUJOQkNyaWt4X1B3TDFZVlpYWC1rMzV0U0ZxNm9mQ3o5RVdYWkJGR00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOqijiaiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXJjaHhrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmqijiaFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1YTdkMTRmMy0zMmNjLTRjMmYtODc1My1kNDA1MWE5ZjczYmEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.wxzfqs4hfngnqdOZO2QG_PXPln_3yndM8Ax5TIFKZrkI3TekZBWzKZaH-obwd8HXX-S-18YH7NufKy3jdtWiVQwxbX7cXMNFy9n8VhDxMQjZqwWMsj7nJJ54w-q_xUUEColrWL4TfwyDUgstI6VQzLxb6EmNABDNQ4rx-enILmx3LFh92vm3eBCOYP-OD4yqJodgwnXoavIM2kgUEYH0hckHKuYaBhSyTlFp1zna0MW9F143guv2XYbVehVnZvSIFq833QW0xD13AKfXrmlMUyTSqSfTOUuwTd-VKEbsr_VUOXklCKtN0C2Cyph0k5WByYJJ5gJtwdXEHaM4R33LAQqijia
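Instead of kubectl describe, the token can also be pulled out in one line (on k8s 1.21 a token Secret is still created automatically for every ServiceAccount):

kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d
# paste the decoded token into the dashboard login page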
