Enabling user authentication on an EFK stack deployed from the Rancher catalog (Kubernetes)

Preface:

Our project is now running Rancher-managed Kubernetes. The Rancher catalog can deploy an EFK cluster with one click, but production has security requirements, so the stock EFK cluster needs to be modified to require username/password login.

1. Basic EFK setup

This uses the EFK app from Rancher's built-in catalog, with a customized image registry (mirrored through Harbor).

All images are pulled from the official elastic registry, version 7.7.1:

Image download site: https://www.docker.elastic.co/

Since the log data is not critical, persistence was not enabled, which also gives somewhat better performance; the downside is that redeploying wipes all elasticsearch data. Rancher's own distributed storage, Longhorn, is now generally available and simple to configure, so if you can, consider putting the data on distributed storage.

2. Configuration changes

2.1 elasticsearch StatefulSet changes:

Changed parameters:

env: ES_JAVA_OPTS is unrelated to authentication (the default heap is too small and OOMs easily); ELASTIC_USERNAME and ELASTIC_PASSWORD are there for the elasticsearch cluster health check.

        - name: ES_JAVA_OPTS
          value: -Xmx4g -Xms4g
        - name: xpack.security.enabled
          value: "true"
        - name: ELASTIC_USERNAME
          value: elastic
        - name: ELASTIC_PASSWORD
          value: elasticpassword
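The readiness probe further down in the manifest hands these two variables to curl as `-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}`, which is equivalent to sending a standard HTTP Basic auth header. A standalone sketch using the example credentials above, just to show what curl sends on the wire:

```shell
# Build the Basic auth header that curl -u would send for these
# credentials (sketch only; the real probe lets curl do this internally).
auth=$(printf '%s' 'elastic:elasticpassword' | base64)
echo "Authorization: Basic ${auth}"
```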

resources: also unrelated to authentication; the default resources are too small and OOM easily.

        resources:
          limits:
            cpu: "4"
            memory: 8Gi
          requests:
            cpu: 100m
            memory: 8Gi

The complete YAML from Rancher:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    esMajorVersion: "7"
    field.cattle.io/publicEndpoints: '[{"addresses":["10.1.99.51"],"port":80,"protocol":"HTTP","serviceName":"efk:elasticsearch-master-headless","ingressName":"efk:elastic-ingress","hostname":"elastic-prod.hlet.com","allNodes":true}]'
  creationTimestamp: "2020-06-03T08:34:13Z"
  generation: 4
  labels:
    app: elasticsearch-master
    chart: elasticsearch-7.3.0
    heritage: Tiller
    io.cattle.field/appId: efk
    release: efk
  name: elasticsearch-master
  namespace: efk
  resourceVersion: "22963322"
  selfLink: /apis/apps/v1/namespaces/efk/statefulsets/elasticsearch-master
  uid: 03f40362-4e89-4bd1-b8d3-285a36cbce35
spec:
  podManagementPolicy: Parallel
  replicas: 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: elasticsearch-master
  serviceName: elasticsearch-master-headless
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: elasticsearch-master
        chart: elasticsearch-7.3.0
        heritage: Tiller
        release: efk
      name: elasticsearch-master
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - elasticsearch-master
            topologyKey: kubernetes.io/hostname
      containers:
      - env:
        - name: node.name
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,elasticsearch-master-3,elasticsearch-master-4,
        - name: discovery.seed_hosts
          value: elasticsearch-master-headless
        - name: cluster.name
          value: elasticsearch
        - name: network.host
          value: 0.0.0.0
        - name: ES_JAVA_OPTS
          value: -Xmx4g -Xms4g
        - name: node.data
          value: "true"
        - name: node.ingest
          value: "true"
        - name: node.master
          value: "true"
        - name: xpack.security.enabled
          value: "true"
        - name: ELASTIC_USERNAME
          value: elastic
        - name: ELASTIC_PASSWORD
          value: elasticpassword
        image: 10.1.99.42/ranchercharts/elasticsearch-elasticsearch:7.7.1
        imagePullPolicy: IfNotPresent
        name: elasticsearch
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              # If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
              # Once it has started only check that the node itself is responding
              START_FILE=/tmp/.es_start_file

              http () {
                  local path="${1}"
                  if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                    BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                  else
                    BASIC_AUTH=''
                  fi
                  curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
              }

              if [ -f "${START_FILE}" ]; then
                  echo 'Elasticsearch is already running, lets check the node is healthy'
                  http "/"
              else
                  echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "wait_for_status=green&timeout=1s" )'
                  if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
                      touch ${START_FILE}
                      exit 0
                  else
                      echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                      exit 1
                  fi
              fi
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
        resources:
          limits:
            cpu: "4"
            memory: 8Gi
          requests:
            cpu: 100m
            memory: 8Gi
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        image: 10.1.99.42/ranchercharts/elasticsearch-elasticsearch:7.7.1
        imagePullPolicy: IfNotPresent
        name: configure-sysctl
        resources: {}
        securityContext:
          privileged: true
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
      terminationGracePeriodSeconds: 120
  updateStrategy:
    type: RollingUpdate
status:
  collisionCount: 0
  currentReplicas: 5
  currentRevision: elasticsearch-master-85f58497dd
  observedGeneration: 4
  readyReplicas: 5
  replicas: 5
  updateRevision: elasticsearch-master-85f58497dd
  updatedReplicas: 5

After saving, the elasticsearch cluster redeploys automatically.

Note: if the cluster never finishes initializing, delete all elastic pods in one go and let them re-initialize from scratch.

Once the redeploy completes, initialize the passwords for elastic's built-in accounts.

Log in to any elastic node and run:

elasticsearch-setup-passwords interactive

At this point the elasticsearch cluster setup is complete.
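The same initialization can also be driven from outside the pod via kubectl; a sketch, assuming the pod name elasticsearch-master-0 and the efk namespace from the manifests above (`auto` mode, which generates random passwords, is the non-interactive alternative to `interactive`):

```shell
# Run the built-in password setup inside one elasticsearch pod
kubectl exec -it elasticsearch-master-0 -n efk -- \
  bin/elasticsearch-setup-passwords interactive

# Spot-check that authentication is now enforced
kubectl exec elasticsearch-master-0 -n efk -- \
  curl -s -u elastic:elasticpassword http://localhost:9200/_cluster/health
```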

2.2 Kibana configuration changes

Because kibana was deployed automatically from the catalog, two services are generated: efk-kibana and kibana-http.

In practice, when the service was wired into the ingress, kibana was unreachable: http://0.0.0.0:5601 worked from inside the kibana pod, but http://efk-kibana:5601 did not. Adding a headless service, efk-kibana-headless, and pointing the kibana ingress at it restored access. (Some time later the original service started working again on its own.)

[root@hlet-prod-k8s-rancher ~]# kubectl get svc -n efk
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
efk-kibana                      ClusterIP   10.43.127.11    <none>        5601/TCP            17d
efk-kibana-headless             ClusterIP   None            <none>        5601/TCP            130m
elasticsearch-apm               ClusterIP   10.43.238.31    <none>        8200/TCP            52d
elasticsearch-heartbeat         ClusterIP   10.43.172.214   <none>        9200/TCP            2d
elasticsearch-master            ClusterIP   10.43.21.168    <none>        9200/TCP,9300/TCP   17d
elasticsearch-master-headless   ClusterIP   None            <none>        9200/TCP,9300/TCP   17d
kibana-http                     ClusterIP   10.43.71.157    <none>        80/TCP              174m
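For reference, a headless service like efk-kibana-headless above can be sketched as a minimal manifest; the pod labels app: kibana, release: efk are taken from the chart's Deployment, and clusterIP: None is what makes it headless:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: efk-kibana-headless
  namespace: efk
spec:
  clusterIP: None        # headless: DNS resolves directly to pod IPs
  ports:
  - name: http
    port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    app: kibana
    release: efk
```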

Ingress configuration:

[screenshot: ingress configuration (image-20200612173912785.png)]

The stock service configs are not reproduced here.

Two parts of the kibana YAML were modified:

ENV: the two username/password pairs are, respectively, the credentials kibana uses to connect to the elastic cluster and the credentials used by the readiness-check script.

        - name: xpack.security.enabled
          value: "true"
        - name: ELASTICSEARCH_USERNAME
          value: kibana
        - name: ELASTIC_USERNAME
          value: kibana
        - name: ELASTICSEARCH_PASSWORD
          value: elasticpassword
        - name: ELASTIC_PASSWORD
          value: elasticpassword

Readiness probe: only the last line changed; once authentication is enabled, the default URL keeps returning 404 until you log in.

        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              http () {
                  local path="${1}"
                  set -- -XGET -s --fail

                  if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                    set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                  fi

                  curl -k "$@" "http://localhost:5601${path}"
              }

              http "/login"

The complete Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "23"
    field.cattle.io/publicEndpoints: '[{"addresses":["10.1.99.51"],"port":80,"protocol":"HTTP","serviceName":"efk:kibana-http","ingressName":"efk:kibana-ingress","hostname":"kibana-prod.hlet.com","allNodes":true}]'
  creationTimestamp: "2020-05-26T00:53:53Z"
  generation: 49
  labels:
    app: kibana
    io.cattle.field/appId: efk
    release: efk
  name: efk-kibana
  namespace: efk
  resourceVersion: "23026049"
  selfLink: /apis/apps/v1/namespaces/efk/deployments/efk-kibana
  uid: 85017148-3738-46f9-8e29-65d072549a92
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kibana
      release: efk
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2020-06-09T00:17:32Z"
        field.cattle.io/ports: '[[{"containerPort":80,"dnsName":"efk-kibana","kind":"ClusterIP","name":"http","protocol":"TCP"}],[{"containerPort":5601,"dnsName":"efk-kibana","kind":"ClusterIP","name":"5601tcp2","protocol":"TCP"}]]'
        field.cattle.io/publicEndpoints: '[{"addresses":["10.1.99.51"],"allNodes":true,"hostname":"kibana-prod.hlet.com","ingressId":"efk:kibana-ingress","port":80,"protocol":"HTTP","serviceId":"efk:kibana-http"}]'
      creationTimestamp: null
      labels:
        app: kibana
        release: efk
    spec:
      containers:
      - args:
        - nginx
        - -g
        - daemon off;
        - -c
        - /nginx/nginx.conf
        image: rancher/nginx:1.15.8-alpine
        imagePullPolicy: IfNotPresent
        name: kibana-proxy
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /nginx/
          name: kibana-nginx
      - env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch-master:9200
        - name: I18N_LOCALE
          value: zh-CN
        - name: LOGGING_QUIET
          value: "true"
        - name: SERVER_HOST
          value: 0.0.0.0
        - name: xpack.security.enabled
          value: "true"
        - name: ELASTICSEARCH_USERNAME
          value: kibana
        - name: ELASTIC_USERNAME
          value: kibana
        - name: ELASTICSEARCH_PASSWORD
          value: elasticpassword
        - name: ELASTIC_PASSWORD
          value: elasticpassword
        image: 10.1.99.42/ranchercharts/kibana-kibana:7.7.1
        imagePullPolicy: IfNotPresent
        name: kibana
        ports:
        - containerPort: 5601
          name: 5601tcp2
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              http () {
                  local path="${1}"
                  set -- -XGET -s --fail

                  if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                    set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                  fi

                  curl -k "$@" "http://localhost:5601${path}"
              }

              http "/login"
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 500Mi
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: nginx.conf
            mode: 438
            path: nginx.conf
          name: efk-kibana-nginx
        name: kibana-nginx
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-06-12T07:46:09Z"
    lastUpdateTime: "2020-06-12T07:46:09Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2020-06-12T07:29:26Z"
    lastUpdateTime: "2020-06-12T07:46:09Z"
    message: ReplicaSet "efk-kibana-9884bd66b" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 49
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

At this point you can try logging in to kibana. The login page:

[screenshot: kibana login page (image-20200721112715807.png)]

2.3 APM configuration changes

Since our elastic stack also uses APM, its settings need updating as well.

Original deployment steps:

APM is not included in the catalog app; it was deployed with the YAML below.

Deployment order:

kubectl create configmap elasticsearch-apm --from-file=apm-server.docker.yml -n efk
kubectl apply -f elasticsearch-apm-server.yaml

apm-server.docker.yml:

apm-server:
  host: "0.0.0.0:8200"
  kibana.enabled: true
  kibana.host: "efk-kibana:5601"
  kibana.protocol: "http"
logging.level: warning
output.elasticsearch:
  hosts: ["elasticsearch-master-headless:9200"]

apm.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  labels:
    app: elasticsearch-apm
  name: elasticsearch-apm
  namespace: efk
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: elasticsearch-apm
  template:
    metadata:
      labels:
        app: elasticsearch-apm
    spec:
      containers:
      - image: 10.1.99.42/docker.elastic.co/apm/apm-server:7.7.1
        imagePullPolicy: IfNotPresent
        name: elasticsearch-apm
        ports:
        - containerPort: 8200
          protocol: TCP
        resources:
          limits:
            cpu: "1"
          requests:
            cpu: 25m
            memory: 512Mi
        volumeMounts:
        - mountPath: /usr/share/apm-server/apm-server.yml
          name: config
          subPath: apm-server.docker.yml
      volumes:
      - configMap:
          defaultMode: 420
          name: elasticsearch-apm
        name: config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch-apm
  name: elasticsearch-apm
  namespace: efk
spec:
  ports:
  - name: elasticsearch-apm
    port: 8200
    protocol: TCP
  selector:
    app: elasticsearch-apm
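Once deployed, the APM server can be spot-checked through a port-forward; a sketch (apm-server should answer GET / with its build/version info when healthy):

```shell
kubectl port-forward svc/elasticsearch-apm 8200:8200 -n efk &
curl -s http://localhost:8200/
```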

Modify the config file for user authentication

Edit the elasticsearch-apm configmap:

apm-server.docker.yml:

apm-server:
  host: "0.0.0.0:8200"
  kibana.enabled: true
  kibana.host: "efk-kibana-headless:5601"
  kibana.username: "elastic"
  kibana.password: "elasticpassword"
  kibana.protocol: "http"
logging.level: warning
#logging.level: info
output.elasticsearch:
  hosts: ["elasticsearch-master-headless:9200"]
  username: "elastic"
  password: "elasticpassword"

After the change, redeploy and it takes effect.
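The configmap update and redeploy can also be done entirely from the command line; a sketch, assuming the updated apm-server.docker.yml is in the current directory (a restart is needed because a file mounted via subPath does not refresh automatically):

```shell
# Regenerate the configmap in place (dry-run + apply replaces the old one)
kubectl create configmap elasticsearch-apm \
  --from-file=apm-server.docker.yml -n efk \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the deployment so the new config is picked up
kubectl rollout restart deployment elasticsearch-apm -n efk
```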

2.4 Filebeat configuration changes

Filebeat ships with the catalog app, so just edit the corresponding configmap.

Edit the efk-filebeat-config configmap:

filebeat.yml:

filebeat.inputs:
- type: docker
  containers.ids:
  - '*'
  processors:
  - add_kubernetes_metadata:
      in_cluster: true

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
  username: "elastic"
  password: "elasticpassword"
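One optional hardening step, not part of the original setup: filebeat expands ${VAR} references in its config, so instead of hard-coding the password in the configmap, the credentials can live in a Kubernetes Secret and be injected as environment variables. A sketch; the secret name efk-elastic-credentials is an assumption:

```yaml
# Hypothetical Secret holding the credentials
apiVersion: v1
kind: Secret
metadata:
  name: efk-elastic-credentials
  namespace: efk
stringData:
  username: elastic
  password: elasticpassword
---
# In the filebeat DaemonSet, expose the secret as env vars:
#   env:
#   - name: ELASTICSEARCH_USERNAME
#     valueFrom:
#       secretKeyRef: {name: efk-elastic-credentials, key: username}
#   - name: ELASTICSEARCH_PASSWORD
#     valueFrom:
#       secretKeyRef: {name: efk-elastic-credentials, key: password}
#
# filebeat.yml then references the variables instead of literals:
#   output.elasticsearch:
#     username: '${ELASTICSEARCH_USERNAME}'
#     password: '${ELASTICSEARCH_PASSWORD}'
```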