Logging is one of the main challenges of any large deployment on a platform like Kubernetes, but configuring and maintaining a central repository for log collection can ease day-to-day operations. To that end, the combination of Fluentd, Elasticsearch, and Kibana can create a powerful logging layer on top of a Kubernetes cluster.
1. Deploy the Elasticsearch cluster
Most of the applications deployed earlier already run inside k8s, and the machine cannot handle running Elasticsearch in the cluster as well, so three separate VMs are used here for the ES cluster (ES is usually deployed on its own anyway).
- Preparation
  - VM spec: 1 core, 1 GB RAM, 20 GB disk
  - Network:

    ```
    192.168.241.150 es-node1
    192.168.241.151 es-node2
    192.168.241.152 es-node3
    ```

  - Create a new user, since Elasticsearch cannot be started as root:

    ```shell
    adduser hadoop
    ```
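The discovery settings later refer to the node hostnames, so every VM must be able to resolve them. A minimal sketch for seeding the host entries idempotently (demonstrated against a scratch file here; on the real nodes point `HOSTS` at `/etc/hosts` and run as root):

```shell
# Idempotently append the cluster host entries.
# HOSTS is a scratch file for this demo; use /etc/hosts on the real nodes.
HOSTS=./hosts.demo
touch "$HOSTS"
for entry in '192.168.241.150 es-node1' \
             '192.168.241.151 es-node2' \
             '192.168.241.152 es-node3'
do
  # only append the entry if it is not already present
  grep -qF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
cat "$HOSTS"
```

Because existing entries are skipped, the loop is safe to re-run from a provisioning script.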
- Base configuration (Linux limit settings)
  - Set the `vm.max_map_count` kernel parameter:

    ```shell
    sudo vi /etc/sysctl.conf
    ```

    ```
    # elasticsearch config start
    vm.max_map_count=262144
    # elasticsearch config end
    ```

    Run `sudo sysctl -p` afterwards to apply the change without a reboot.

  - Raise the open-file and max-thread limits:

    ```shell
    sudo vi /etc/security/limits.conf
    ```

    ```
    # elasticsearch config start
    * soft nofile 65536
    * hard nofile 131072
    * soft nproc 2048
    * hard nproc 4096
    # elasticsearch config end
    ```
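The limits.conf changes only apply to new login sessions. After logging in again, the effective values can be checked against what was configured above; a small verification sketch (the thresholds mirror the settings above):

```shell
# Print the current kernel and per-user limits next to the required values.
echo "vm.max_map_count = $(cat /proc/sys/vm/max_map_count)  (need >= 262144)"
echo "open files (nofile) = $(ulimit -n)  (need >= 65536)"
echo "max user processes (nproc) = $(ulimit -u)  (need >= 2048)"
```

If any value is still too low, Elasticsearch's bootstrap checks will refuse to start the node.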
- Download and install Elasticsearch
  - Download and extract:

    ```shell
    curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
    sudo tar -zxvf elasticsearch-7.2.0-linux-x86_64.tar.gz -C /opt/
    ```
- Configure Elasticsearch:

  ```shell
  vi config/elasticsearch.yml
  ```

  ```yaml
  # ---------------------------------- Cluster -----------------------------------
  # Use a descriptive name for your cluster:
  cluster.name: cluster
  # ------------------------------------ Node ------------------------------------
  # Use a descriptive name for the node; must be changed on each of the other nodes:
  node.name: node-1
  # ----------------------------------- Paths ------------------------------------
  # Path to directory where to store the data (separate multiple locations by comma):
  path.data: /opt/elasticsearch/data
  # Path to log files:
  path.logs: /opt/elasticsearch/logs
  # ---------------------------------- Network -----------------------------------
  # Set the bind address to this machine's own IP; change on the other nodes:
  network.host: 192.168.241.150
  # Set a custom port for HTTP:
  #http.port: 9200
  # For more information, consult the network module documentation.
  # --------------------------------- Discovery ----------------------------------
  # Initial unicast list of hosts to perform discovery when this node starts
  # (the *other* nodes, relative to this one; it need not list the whole cluster).
  # The default list of hosts is ["127.0.0.1", "[::1]"]
  discovery.seed_hosts: ["es-node2", "es-node3"]
  # Bootstrap the cluster using an initial set of master-eligible nodes;
  # the values must match each node's node.name:
  cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
  # For more information, consult the discovery and cluster formation module documentation.
  ```
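Only the node-specific settings change on the other two machines. For es-node2, for example, the overrides would look like this (a sketch following the naming scheme above):

```yaml
node.name: node-2                                # unique per node
network.host: 192.168.241.151                    # this node's own IP
discovery.seed_hosts: ["es-node1", "es-node3"]   # the other nodes
```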
- Configure the JVM parameters:

  ```shell
  vi config/jvm.options
  ```

  ```
  # Xms represents the initial size of total heap space
  # Xmx represents the maximum size of total heap space
  # Set Xmx and Xms to no more than 50% of your physical RAM.
  -Xms512m
  -Xmx512m
  ```
- Start:

  ```shell
  # start in the background
  bin/elasticsearch -d
  # or: start in the background and record the PID
  ./bin/elasticsearch -p /tmp/elasticsearch-pid -d
  ```
- A small start/stop script makes this easier to use:

  ```shell
  #!/bin/bash
  ES_HOME=/opt/elasticsearch
  ES_PID_FILE=/tmp/elasticsearch-pid

  action=$1
  case $action in
    'start')
      if [ -e "$ES_PID_FILE" ]; then
        echo 'es already started!'
      else
        echo 'starting es begin......'
        "$ES_HOME"/bin/elasticsearch -p "$ES_PID_FILE" -d
        sleep 2
        echo 'start es success.......'
      fi
      ;;
    'stop')
      if [ -e "$ES_PID_FILE" ]; then
        echo 'stopping es begin......'
        es_pid=$(cat "$ES_PID_FILE")
        # SIGTERM lets Elasticsearch shut down cleanly (avoid kill -9)
        kill "$es_pid"
        sleep 3
        rm -f "$ES_PID_FILE"
        echo 'stop es success........'
      else
        echo "es doesn't start!!"
      fi
      ;;
    *)
      echo 'not supported!!'
      ;;
  esac
  ```
2. Deploy Kibana
- Download and extract:

  ```shell
  curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
  sudo tar -zxvf kibana-7.2.0-linux-x86_64.tar.gz -C /opt/
  ```
- Configure:

  ```shell
  vi kibana/config/kibana.yml
  ```

  ```yaml
  # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
  # The default is 'localhost', which usually means remote machines will not be able to connect.
  # To allow connections from remote users, set this parameter to a non-loopback address.
  server.host: "192.168.241.150"

  # The URLs of the Elasticsearch instances to use for all your queries.
  elasticsearch.hosts: ["http://es-node1:9200", "http://es-node2:9200", "http://es-node3:9200"]
  ```
- Start (runs in the foreground):

  ```shell
  ./bin/kibana
  ```
- Stop:

  ```shell
  # find the PID first -- Kibana runs on Node.js, not the JVM,
  # so grep for node rather than java
  ps -ef | grep node
  # then stop it
  kill PID
  ```
3. Deploy Fluentd in k8s
- The log-collection architectures for k8s are described in the official documentation; the node logging agent approach is used here.
- About Fluentd
- Logging recommendations for containerized applications:
- The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
- Logs should have a separate storage and lifecycle independent of nodes, pods, or containers.
- Log collection flow:

  `in_tail -> filter_grep -> out_stdout`
- Event structure:
  - tag: where the event came from
  - time: when the event occurred
  - record: the content of the log
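The in_tail -> filter_grep -> out_stdout flow above can be sketched as a minimal fluentd.conf; the paths, tag, and grep pattern here are illustrative assumptions, not the configuration shipped in the DaemonSet image:

```
# in_tail: follow container log files and emit one event per line
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

# filter_grep: keep only events whose "log" field matches /error/
<filter kubernetes.**>
  @type grep
  <regexp>
    key log
    pattern /error/
  </regexp>
</filter>

# out_stdout: print surviving events (tag, time, record) to stdout
<match kubernetes.**>
  @type stdout
</match>
```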
- Download:

  ```shell
  git clone https://github.com/fluent/fluentd-kubernetes-daemonset
  ```
- Point Fluentd at the Elasticsearch cluster by editing the DaemonSet spec in `fluentd-daemonset-elasticsearch-rbac.yaml` inside the fluentd-kubernetes-daemonset folder:

  ```yaml
  spec:
    selector:
      matchLabels:
        k8s-app: fluentd-logging
        version: v1
    template:
      metadata:
        labels:
          k8s-app: fluentd-logging
          version: v1
      spec:
        serviceAccount: fluentd
        serviceAccountName: fluentd
        tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "192.168.241.150"  # change to the ES hostname or IP
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            # X-Pack Authentication
            # =====================
            - name: FLUENT_ELASTICSEARCH_USER
              value: "elastic"
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              value: "changeme"
  ```
- Install:

  ```shell
  kubectl apply -f fluentd-daemonset-elasticsearch-rbac.yaml
  ```

  Afterwards a fluentd pod should be running on every node, which can be checked with `kubectl get pods -n kube-system -l k8s-app=fluentd-logging`.
4. Create an index pattern in Kibana to index the collected logs
- Log in to Kibana, go to Management ---> Index Patterns, and click Create. The Fluentd image writes Logstash-style daily indices by default, so a pattern such as `logstash-*` typically matches the collected logs.
- View the collected logs on the Discover page.