ELK (es, filebeat, kibana, logstash, redis, zookeeper, kafka) deployment for log collection (nginx, tomcat, docker)
Log sources to collect:
proxies: nginx, haproxy
web: nginx, tomcat
databases: mysql, redis, mongo, elasticsearch
operating system: secure, messages
1. Environment setup:
A single-node es install is enough here, so the cluster configured earlier is dropped.
192.168.208.120: elasticsearch, kibana, filebeat, nginx, zookeeper, kafka/redis, logstash
192.168.208.121: nginx, filebeat, tomcat, zookeeper, kafka
192.168.208.122: nginx, filebeat, tomcat, zookeeper, kafka
192.168.208.120:
Single-node es config file:
[root@elk-server config]# egrep -v "^#|^$" elasticsearch.yml
node.name: node-1
path.data: /usr/local/data
path.logs: /usr/local/logs
network.host: 192.168.208.120
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.208.120"]
http.cors.allow-origin: "/.*/"
http.cors.enabled: true
[root@elk-server config]# systemctl restart elasticsearch
Install kibana
[root@elk-server tool]# rpm -ihv kibana-6.6.0-x86_64.rpm
[root@elk-server tool]# rpm -qc kibana
/etc/kibana/kibana.yml
kibana configuration:
[root@elk-server tool]# vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.208.120"
server.name: "elk-server"
elasticsearch.hosts: ["http://192.168.208.120:9200"]
kibana.index: ".kibana"
Start kibana
[root@elk-server tool]# systemctl restart elasticsearch
[root@elk-server tool]# systemctl start kibana
[root@elk-server tool]# netstat -luntp |grep 9200
tcp6 0 0 192.168.208.120:9200 :::* LISTEN 22217/java
[root@elk-server tool]# netstat -luntp |grep 5601
tcp 0 0 192.168.208.120:5601 0.0.0.0:* LISTEN 22569/node
Test access:
http://192.168.208.120:5601
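A quick sanity check before moving on (a sketch, using the addresses and ports configured above):
[root@elk-server tool]# curl -s http://192.168.208.120:9200/                                    # should print the es banner JSON with version 6.6.0
[root@elk-server tool]# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.208.120:5601/api/status   # kibana should answer 200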
2. Install nginx
Collection nodes: 192.168.208.120/121
Configure the nginx yum repo
[root@node1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
Install nginx
[root@node1 ~]# yum install nginx -y
[root@node1 ~]# systemctl start nginx
[root@node1 ~]# netstat -luntp
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 42966/nginx: master
Load test:
[root@node1 ~]# ab -c 10 -n 1000 192.168.208.121/
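As a rough check (assuming access_log is still at the default location), the access log on the tested node should have grown by roughly 1000 lines after the ab run:
[root@node1 ~]# wc -l /var/log/nginx/access.log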
3. Install and configure filebeat
Collection nodes: 192.168.208.120/121
[root@elk-server tool]# rpm -ivh filebeat-6.6.0-x86_64.rpm
Back up the filebeat config file
[root@elk-server tool]# cp /etc/filebeat/filebeat.yml /tool/
Edit filebeat.yml
[root@elk-server tool]# egrep -v "#|^$" /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
Start filebeat
[root@elk-server tool]# systemctl restart filebeat
[root@elk-server tool]# tail -f /var/log/filebeat/filebeat    # check the filebeat log
Check the logs shipped to es; by default filebeat writes to a filebeat-* index.
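One way to see the index filebeat created is the _cat API (a sketch using the es address configured above):
[root@elk-server tool]# curl -s 'http://192.168.208.120:9200/_cat/indices?v' | grep filebeat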
4. Kibana configuration
Under Management > Index Patterns, configure the es index data.
Copy the filebeat-6.6.0-2020.04.05 index shown in the list into the index pattern field.
On the next step choose @timestamp as the time field and click Create.
Back on the index page you can see the es index has been added to kibana; then go to Discover.
Click the time picker in the top right (Last 15 minutes by default) and switch to Last 1 hour or Last 4 hours to see the chart.
On the left, click add next to message to add the fields you want displayed.
The search bar supports full-text search, e.g. querying a status code and seeing how many hits it returns.
Check the filtering conditions.
==================================================================================
Collecting nginx JSON logs
The log format is configured as follows:
log_format json '{ "time_local": "$time_local", '
'"remote_addr": "$remote_addr", '
'"referer": "$http_referer", '
'"request": "$request", '
'"status": $status, '
'"bytes": $body_bytes_sent, '
'"agent": "$http_user_agent", '
'"x_forwarded": "$http_x_forwarded_for", '
'"up_addr": "$upstream_addr",'
'"up_host": "$upstream_http_host",'
'"upstream_time": "$upstream_response_time",'
'"request_time": "$request_time"'
'}';
Put the log format above into nginx.conf:
1. Define the json log_format.
2. Reference it with access_log /var/log/nginx/access.log json;
3. Clear the historical /var/log/nginx/access.log.
4. Edit /etc/filebeat/filebeat.yml to parse the logs as json.
5. Restart nginx.
[root@elk-server tool]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format json '{ "time_local": "$time_local", '
'"remote_addr": "$remote_addr", '
'"referer": "$http_referer", '
'"request": "$request", '
'"status": $status, '
'"bytes": $body_bytes_sent, '
'"agent": "$http_user_agent", '
'"x_forwarded": "$http_x_forwarded_for", '
'"up_addr": "$upstream_addr",'
'"up_host": "$upstream_http_host",'
'"upstream_time": "$upstream_response_time",'
'"request_time": "$request_time"'
'}';
access_log /var/log/nginx/access.log json;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Check that the log lines are now json (dict-like):
[root@elk-server nginx]# tail -f /var/log/nginx/access.log
{ "time_local": "05/Apr/2020:19:35:37 +0800", "remote_addr": "192.168.208.1", "referer": "-", "request": "GET / HTTP/1.1", "status": 304, "bytes": 0, "agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000"}
{ "time_local": "05/Apr/2020:19:35:37 +0800", "remote_addr": "192.168.208.1", "referer": "-", "request": "GET / HTTP/1.1", "status": 304, "bytes": 0, "agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000"}
Edit filebeat.yml
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
Then delete the old index in es and restart kibana.
Restart nginx and filebeat.
[root@elk-server nginx]# echo "" >/var/log/nginx/access.log
[root@elk-server nginx]# systemctl restart nginx
[root@elk-server nginx]# systemctl restart filebeat
[root@elk-server nginx]# systemctl restart kibana
Then re-add the index pattern in kibana.
filebeat:
When filebeat is stopped, it records the position it last read to.
If, say, 100 new lines are written while it is down,
filebeat resumes reading from the recorded position when it starts again.
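The read position is kept in filebeat's registry file; for the 6.x RPM install the default location should be /var/lib/filebeat/registry (a sketch for inspecting it):
[root@elk-server ~]# python -m json.tool /var/lib/filebeat/registry    # each entry records a file and the byte offset read so far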
Customize the index name, replacing the default filebeat index written to es:
[root@elk-server tool]# egrep -v "#|^$" /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
setup.kibana:
  host: "192.168.208.120:5601"
output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
Restart filebeat; after the restart it reports an error here.
[root@elk-server nginx]# systemctl restart filebeat
Fix:
Add the following at the end of the config file. Pay close attention to indentation; a bad indent is hard to spot and will keep filebeat from starting.
Use a dedicated template for the nginx index:
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
Now access nginx again, then restart filebeat and reconfigure kibana.
[root@elk-server tool]#systemctl restart filebeat
You will find the index now contains far fewer useless fields.
Collecting and analysing multiple logs
Note: when collecting multiple log files, it is best to tag each input with tags.
Syntax:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  ...
  tags: ["access"]
When writing to elasticsearch, match on the tags to decide which index each event goes to:
indices:
  - index: "nginx-access%{[beat.version]}-%{+yyyy.MM}"
    when.contains:
      tags: "access"
  - index: "nginx-error%{[beat.version]}-%{+yyyy.MM}"
    when.contains:
      tags: "error"
[root@elk-server filebeat]# ls /var/log/nginx/
access.log  error.log
This configuration can be used on multiple nodes to collect logs.
[root@elk-server filebeat]# pwd
/etc/filebeat
[root@elk-server filebeat]# vim filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true
setup.kibana:
  host: "192.168.208.120:5601"
output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  #index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "nginx-access%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
Restart after the change
[root@elk-server tool]# systemctl restart filebeat
At this point the indices can also be checked in the es plugin.
Re-add the index patterns in kibana.
Collecting tomcat logs
192.168.208.121
Install tomcat with yum
yum install tomcat tomcat-webapps tomcat-admin-webapps tomcat-docs-webapp tomcat-javadoc -y
Start tomcat
[root@node1 /]# systemctl start tomcat
[root@node1 /]# netstat -luntp |grep 8080
tcp6 0 0 :::8080 :::* LISTEN 18009/java
Access tomcat at 192.168.208.121:8080
Change the access log format to json:
edit /etc/tomcat/server.xml and
replace the pattern on line 139, pattern="%h %l %u %t &quot;%r&quot; %s %b" />, with the one below.
[root@node1 /]#vim /etc/tomcat/server.xml
pattern="{"clientip":"%h","ClientUser":"%l","authenticated":"%u","AccessTime":"%t","method":"%r","status":"%s","SendBytes":"%b","Query?string":"%q","partner":"%{Referer}i","AgentVersion":"%{User-Agent}i"}"/>
[root@node1 /]# systemctl restart tomcat
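To confirm the tomcat access log is now JSON (a sketch; the date suffix assumes tomcat's default fileDateFormat):
[root@node1 /]# curl -s http://192.168.208.121:8080/ >/dev/null
[root@node1 /]# tail -n1 /var/log/tomcat/localhost_access_log.$(date +%Y-%m-%d).txt | python -m json.tool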
Edit the filebeat.yml configuration
[root@node1 /]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
############################ nginx
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true
############################ tomcat
- type: log
  enabled: true
  paths:
    - /var/log/tomcat/localhost_access_log.*.txt
  tags: ["tomcat"]
  json.keys_under_root: true
  json.overwrite_keys: true
setup.kibana:
  host: "192.168.208.120:5601"
output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  #index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "nginx-access%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"
    - index: "tomcat-accessr%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "tomcat"
Restart the filebeat service
[root@node1 /]# systemctl restart filebeat
Collecting java logs
Handling java logs:
multi-line log entries are handled by adding the parameters below.
192.168.208.120: the elasticsearch log
multiline.pattern: '^\['
multiline.negate: true
multiline.match: after
The log looks like this:
[root@elk-server tool]# cat /var/log/elasticsearch/elasticsearch.log
[2020-04-06T00:12:55,740][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][27738] overhead, spent [350ms] collecting in the last [1s]
[2020-04-06T00:22:10,191][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [tomcat-accessr6.6.0-2020.04] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2020-04-06T00:22:10,761][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [tomcat-accessr6.6.0-2020.04/nA93Krr4RtyjKnoayAZAUg] create_mapping [doc]
[2020-04-06T00:22:10,868][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [tomcat-accessr6.6.0-2020.04/nA93Krr4RtyjKnoayAZAUg] update_mapping [doc]
[2020-04-06T00:22:14,001][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][28292][535] duration [2.6s], collections [1]/[3.5s], total [2.6s]/[34s], memory [237.2mb]->[220mb]/[503.6mb], all_pools {[young] [47.2mb]->[21.4mb]/[66.5mb]}{[survivor] [378.5kb]->[8.3mb]/[8.3mb]}{[old] [189.5mb]->[190.4mb]/[428.8mb]}
The config is written as follows for multi-line matching:
- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/elasticsearch.log
  tags: ["elastic_java"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
setup.kibana:
  host: "192.168.208.120:5601"
output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  #index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "elastic-java%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "elastic_java"
Collecting docker logs
This log collection follows the instructor's handout.
View the nginx container log in the foreground:
docker logs -f nginx (container name)
Container logs need to be separated by the service each container runs.
This requires the docker-compose tool.
https://github.com/docker/compose/releases/tag/1.25.5-rc1
https://github.com/docker/compose/releases/
Install
[root@dorcer01 /]# yum install -y python2-pip
[root@dorcer01 /]# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple pip -U
[root@dorcer01 /]# pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
[root@dorcer01 /]# pip install docker-compose
Check the version:
[root@dorcer01 /]#docker-compose version
Write the docker-compose file
[root@dorcer01 /] cat docker-compose.yml
version: '3'                 # compose file format version
services:                    # container service group
  nginx:                     # nginx service
    image: nginx:v2          # image to start
    # set labels
    labels:
      service: nginx
    # logging: add labels.service to each log record
    logging:
      options:
        labels: "service"
    ports:
      - "8080:80"
  db:
    image: nginx:latest
    # set labels
    labels:
      service: db
    # logging: add labels.service to each log record
    logging:
      options:
        labels: "service"
    ports:
      - "80:80"
Clean up the old containers
[root@dorcer01 /]docker ps -a|awk 'NR>1{print "docker rm",$1}'|bash
Run docker-compose.yml
docker-compose up -d
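A quick check that both containers are up and writing json-file logs (a sketch; the ports are the ones mapped in docker-compose.yml above):
[root@dorcer01 /]# docker-compose ps                                                        # nginx and db should both be Up
[root@dorcer01 /]# curl -s http://127.0.0.1:8080/ >/dev/null; curl -s http://127.0.0.1/ >/dev/null   # generate a few access lines
[root@dorcer01 /]# ls /var/lib/docker/containers/*/*-json.log                               # the files filebeat will read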
Configure the filebeat config file
[root@dorcer01 /] cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/lib/docker/containers/*/*-json.log   # all docker container log files
  json.keys_under_root: true
  json.overwrite_keys: true
output.elasticsearch:
  hosts: ["192.168.47.175:9200"]
  indices:
    - index: "docker-nginx-access-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        attrs.service: "nginx"   # the service label set in docker-compose
        stream: "stdout"         # match only the container's stdout stream
    - index: "docker-nginx-error-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        attrs.service: "nginx"
        stream: "stderr"
    - index: "docker-db-access-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        attrs.service: "db"
        stream: "stdout"
    - index: "docker-db-error-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        attrs.service: "db"
        stream: "stderr"
setup.template.name: "docker"
setup.template.pattern: "docker-*"
setup.template.enabled: false
setup.template.overwrite: true
Restart filebeat
[root@dorcer01 /]systemctl restart filebeat
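After some traffic has been generated, the per-service indices should appear in es (a sketch using the es address from the config above):
[root@dorcer01 /]# curl -s 'http://192.168.47.175:9200/_cat/indices?v' | grep docker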
Collecting logs through filebeat modules
Configure collection of plain nginx logs through filebeat's nginx module.
First, list the filebeat config files:
[root@node1 modules.d]# rpm -qc filebeat
/etc/filebeat/filebeat.yml
/etc/filebeat/modules.d/apache2.yml.disabled
/etc/filebeat/modules.d/auditd.yml.disabled
/etc/filebeat/modules.d/elasticsearch.yml.disabled
/etc/filebeat/modules.d/haproxy.yml.disabled
/etc/filebeat/modules.d/icinga.yml.disabled
/etc/filebeat/modules.d/iis.yml.disabled
/etc/filebeat/modules.d/kafka.yml.disabled
/etc/filebeat/modules.d/kibana.yml.disabled
/etc/filebeat/modules.d/logstash.yml.disabled
/etc/filebeat/modules.d/mongodb.yml.disabled
/etc/filebeat/modules.d/mysql.yml.disabled
/etc/filebeat/modules.d/nginx.yml.disabled
/etc/filebeat/modules.d/osquery.yml.disabled
/etc/filebeat/modules.d/postgresql.yml.disabled
/etc/filebeat/modules.d/redis.yml.disabled
/etc/filebeat/modules.d/suricata.yml.disabled
/etc/filebeat/modules.d/system.yml.disabled
/etc/filebeat/modules.d/traefik.yml.disabled
1. Enable the module config path in the filebeat config file:
[root@elk-server filebeat]# cat filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  reload.period: 10s
output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
2. Enable the module
[root@elk-server filebeat]# filebeat modules enable nginx
Module nginx is already enabled
[root@elk-server filebeat]# filebeat modules list
Enabled:
nginx
Disabled:
apache2
auditd
elasticsearch
haproxy
icinga
iis
kafka
kibana
logstash
mongodb
mysql
osquery
postgresql
redis
suricata
system
traefik
3. Change the nginx config back to the plain main log format
access_log /var/log/nginx/access.log main;
[root@elk-server filebeat]# systemctl restart nginx
4. Edit the nginx module config file used by filebeat
[root@elk-server filebeat]# vim /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log"]
  # Error logs
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log"]
5. Restarting filebeat reports the following error
[root@elk-server filebeat]# systemctl restart filebeat
[root@elk-server filebeat]# tail -f /var/log/filebeat/filebeat
2020-04-08T20:55:14.205+0800 ERROR fileset/factory.go:142 Error loading pipeline: Error loading pipeline for fileset nginx/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:
sudo bin/elasticsearch-plugin install ingest-user-agent
sudo bin/elasticsearch-plugin install ingest-geoip
Install ingest-user-agent and ingest-geoip as the error message suggests
[root@elk-server /]# find / -name 'elasticsearch-plugin'
/usr/share/elasticsearch/bin/elasticsearch-plugin
/usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
/usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
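After installing the plugins, restart es and confirm they are loaded (a sketch; elasticsearch-plugin list is part of the same tooling):
[root@elk-server /]# systemctl restart elasticsearch                                        # plugins are only picked up after a restart
[root@elk-server /]# /usr/share/elasticsearch/bin/elasticsearch-plugin list                 # should now list ingest-geoip and ingest-user-agent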
6. Edit the filebeat config file
[root@elk-server ~]# cat /etc/filebeat/filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  reload.period: 10s
setup.kibana:
  host: "192.168.208.120:5601"
output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  indices:
    - index: "nginx_access-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        fileset.name: "access"
    - index: "nginx_error-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        fileset.name: "error"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
7. Restart filebeat and es
[root@elk-server ~]# systemctl restart filebeat
[root@elk-server ~]# systemctl restart elasticsearch
When adding the error log index in kibana, use read_timestamp as the time field.
ELK - charting with kibana
Bar charts, line charts, pie charts, gauges, and assembling them on a dashboard
Official documentation:
https://www.elastic.co/guide/en/beats/filebeat/6.6/configuration-filebeat-modules.html
1. nginx charting configuration
1. Back up the kibana dashboards: copy the kibana directory shipped with filebeat to /root/
[root@elk-server kibana]# cp -a /usr/share/filebeat/kibana /root
[root@elk-server ~]# cd kibana/6/dashboard
[root@elk-server dashboard]# find . -type f ! -name "*nginx*" |xargs rm -rf
2. Replace the filebeat-* index references with nginx-*
[root@elk-server dashboard]# sed -i 's#filebeat\-\*#nginx\-\*#g' Filebeat-nginx-overview.json
[root@elk-server dashboard]# sed -i 's#filebeat\-\*#nginx\-\*#g' Filebeat-nginx-logs.json
3. Go back up one level and change the index-pattern file as well
[root@elk-server 6]# cd index-pattern/
[root@elk-server index-pattern]# ls
filebeat.json
[root@elk-server index-pattern]# pwd
/root/kibana/6/index-pattern
[root@elk-server index-pattern]# sed -i 's#filebeat\-\*#nginx\-\*#g' filebeat.json
4. Import the modified kibana dashboards
[root@elk-server /]# filebeat setup --dashboards -E setup.dashboards.directory=/root/kibana/
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
5. Restart filebeat, kibana, and elasticsearch
2. Example charts
Top 10 client IP addresses
Top 10 requested URLs
Most common user agents
HTTP status codes
Start charting
-- choose the visualization type --
Click Save at the top to store the visualization (e.g. a pie chart) for use on a dashboard.
Chart-type visualizations
Build the dashboard, i.e. add the saved visualizations to a dashboard panel.
Using Redis as a buffer
When writes to es become a bottleneck, redis can buffer the data and logstash pulls it back out of redis.
https://www.elastic.co/guide/en/beats/filebeat/6.6/redis-output.html
The architecture is as follows:
filebeat output syntax:
output.redis:
  hosts: ["localhost"]
  password: "my_password"
  key: "filebeat"
  db: 0
  timeout: 5
Start redis
[root@elk-server redis]# redis-server redis.conf
[root@elk-server redis]# ps -ef |grep redis
root 22573 1 0 20:49 ? 00:00:00 redis-server 192.168.208.120:6379
root 22586 14139 0 20:50 pts/0 00:00:00 grep --color=auto redis
[root@elk-server redis]# pwd
/tool/redis
Configure filebeat.yml
output.redis:
  hosts: ["192.168.208.120"]
  key: "filebeat"
  db: 0
  timeout: 5
Restart filebeat
[root@elk-server filebeat]# systemctl restart filebeat
[root@elk-server filebeat]# redis-cli -h 192.168.208.120
192.168.208.120:6379> keys *
1) "filebeat"
192.168.208.120:6379> type filebeat
list
192.168.208.120:6379> LLEN filebeat
(integer) 26
192.168.208.120:6379> LRANGE filebeat
(error) ERR wrong number of arguments for 'lrange' command
192.168.208.120:6379> LRANGE filebeat 1 20
Install logstash 6.6.0
[root@elk-server tool]# yum localinstall logstash-6.6.0.rpm
Edit the filebeat config for the nginx logs
[root@elk-server conf.d]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true
- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/elasticsearch.log
  tags: ["elastic_java"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
setup.kibana:
  host: "192.168.208.120:5601"
output.redis:
  hosts: ["192.168.208.120"]
  keys:
    - key: "nginx_access"   # redis key to write to
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"
Configure the logstash pipeline: input reads from redis, output writes to es.
Add redis.conf:
[root@elk-server conf.d]# pwd
/etc/logstash/conf.d
[root@elk-server conf.d]# vim redis.conf
input {
  redis {
    host => "192.168.208.120"
    port => "6379"
    db => "0"
    key => "nginx_access"   # redis key to read from
    data_type => "list"
  }
  redis {
    host => "192.168.208.120"
    port => "6379"
    db => "0"
    key => "nginx_error"
    data_type => "list"
  }
}
# convert the nginx/php timing fields to float
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
  stdout {}
  if "access" in [tags] {
    elasticsearch {
      hosts => "http://192.168.208.120:9200"
      manage_template => false
      index => "nginx_access-%{+yyyy.MM.dd}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts => "http://192.168.208.120:9200"
      manage_template => false
      index => "nginx_error-%{+yyyy.MM.dd}"
    }
  }
}
Restart filebeat and start logstash (in the foreground)
[root@elk-server logstash]# systemctl restart filebeat
[root@elk-server logstash]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
Check redis:
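What to look for (a sketch, using the keys defined in the filebeat output above): the list lengths should shrink toward 0 as logstash drains them.
[root@elk-server logstash]# redis-cli -h 192.168.208.120 LLEN nginx_access
[root@elk-server logstash]# redis-cli -h 192.168.208.120 LLEN nginx_error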
Optimized filebeat and logstash config files, as follows:
filebeat:
[root@elk-server conf.d]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true
- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/elasticsearch.log
  tags: ["elastic_java"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
setup.kibana:
  host: "192.168.208.120:5601"
output.redis:
  hosts: ["192.168.208.120"]
  key: "nginx"   # store everything under a single redis key (must match the logstash input key)
logstash configuration
[root@elk-server logstash]# vim /etc/logstash/conf.d/redis.conf
input {
  redis {
    host => "192.168.208.120"
    port => "6379"
    db => "0"
    key => "nginx"   # single redis key to read from (must match the filebeat output key)
    data_type => "list"
  }
}
# convert the nginx/php timing fields to float
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
  stdout {}
  if "access" in [tags] {
    elasticsearch {
      hosts => "http://192.168.208.120:9200"
      manage_template => false
      index => "nginx_access-%{+yyyy.MM.dd}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts => "http://192.168.208.120:9200"
      manage_template => false
      index => "nginx_error-%{+yyyy.MM.dd}"
    }
  }
}
Monitoring the es cluster with kibana
Keep the versions of all elk components the same.
Using Kafka as a buffer
Environment preparation
Download links:
http://zookeeper.apache.org/releases.html
wget https://downloads.apache.org/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
http://kafka.apache.org/downloads.html
Configure /etc/hosts on all three servers so they can reach each other by name:
vim /etc/hosts
192.168.208.120 server
192.168.208.121 node1
192.168.208.122 node2
Sync the hosts file to the other nodes
[root@elk-server ~]# cd /etc/
[root@elk-server etc]# scp hosts node1:/etc/
[root@elk-server etc]# scp hosts node2:/etc/
Install zookeeper
zookeeper: cluster coordination service
[root@elk-server tool]# ll zookeeper-3.4.14.tar.gz kafka-2.4.1-src.tgz
-rw-r--r-- 1 root root 7690352 4月 15 17:22 kafka-2.4.1-src.tgz
-rw-r--r-- 1 root root 3096576 4月 15 17:23 zookeeper-3.4.11.tar.gz
[root@elk-server opt]# tar -xzvf apache-zookeeper-3.4.11.tar.gz -C /opt
[root@elk-server opt]# ln -s apache-zookeeper-3.4.11 zookeeper
[root@elk-server opt]# mkdir -p /data/zookeeper
[root@elk-server opt]# cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
Edit the config file
[root@elk-server opt]# vim /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.208.120:2888:3888
server.2=192.168.208.121:2888:3888
server.3=192.168.208.122:2888:3888
Create the myid file
[root@elk-server opt]# echo "1" > /data/zookeeper/myid
The other nodes get the same config; only the myid differs.
rsync -avz /opt/zookeeper-3.4.11 node1:/opt/
rsync -avz /data/zookeeper node1:/data
node1:
[root@node1 #] echo "2" > /data/zookeeper/myid
rsync -avz /opt/zookeeper-3.4.11 node2:/opt/
rsync -avz /data/zookeeper node2:/data
node2:
[root@node2 #] echo "3" > /data/zookeeper/myid
Note: of ports 2888:3888, one is used for data synchronization and the other for leader election.
myid corresponds to the server.N id in the config.
zookeeper configuration is complete.
Start zookeeper on all three machines:
[root@elk-server /]# /opt/zookeeper/bin/zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Check:
[root@node1 opt]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@node2 opt]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
With all three machines started, the ensemble tolerates the failure of only one node; it now runs as a cluster.
Connection test:
[root@elk_server zookeeper]# /opt/zookeeper/bin/zkCli.sh -server 192.168.208.120:2181
[root@node1 zookeeper]# /opt/zookeeper/bin/zkCli.sh -server 192.168.208.121:2181
[root@node2 zookeeper]# /opt/zookeeper/bin/zkCli.sh -server 192.168.208.122:2181
After connecting, test creating a znode:
create /test "hello"
[zk: 192.168.208.120:2181(CONNECTED) 0] create /test "hello"
Created /test
Read it back from another node:
get /test
[zk: 192.168.208.122:2181(CONNECTED) 0] get /test
hello
cZxid = 0x100000005
ctime = Wed Apr 15 18:57:45 CST 2020
mZxid = 0x100000005
mtime = Wed Apr 15 18:57:45 CST 2020
pZxid = 0x100000005
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
Install kafka
server node: 192.168.208.120
[root@elk-server tool]# tar -xzf /tool/kafka-2.11-2.4.tgz -C /opt
[root@elk-server tool]# ln -s /opt/kafka-2.11-2.4 /opt/kafka
[root@elk-server tool]# mkdir -p /data/kafka/logs/
[root@elk-server tool]# vim /opt/kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://192.168.208.120:9092
log.dirs=/data/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181
Same configuration on the other nodes:
[root@elk-server tool]# rsync -avz /opt/kafka-2.11-2.4 node1:/opt/
[root@elk-server tool]# rsync -avz /opt/kafka-2.11-2.4 node2:/opt/
node1:
[root@node1 opt]# ln -s /opt/kafka-2.11-2.4 /opt/kafka
[root@node1 opt]# mkdir -p /data/kafka/logs
[root@node1 opt]# vim kafka/config/server.properties
broker.id=2
listeners=PLAINTEXT://192.168.208.121:9092
log.dirs=/data/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181
node2:
[root@node2 opt]# ln -s /opt/kafka-2.11-2.4 /opt/kafka
[root@node2 opt]# mkdir -p /data/kafka/logs
[root@node2 opt]# vim kafka/config/server.properties
broker.id=3
listeners=PLAINTEXT://192.168.208.122:9092
log.dirs=/data/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181
Start:
--- Start in the foreground first; once there are no errors, it can be started in the background.
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
Startup log:
[2020-04-15 19:33:07,920] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-04-15 19:33:08,242] INFO [SocketServer brokerId=1] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2020-04-15 19:33:08,243] INFO Kafka version: 2.4.1 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 19:33:08,243] INFO Kafka commitId: c57222ae8cd7866b (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 19:33:08,243] INFO Kafka startTimeMs: 1586950388242 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 19:33:08,244] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
-- Background start
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@elk-server kafka]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@node1 opt]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@node2 opt]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
Test after all three brokers are started.
Create a topic on elk_server:
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181 --partitions 3 --replication-factor 3 --topic kafkatest
Describe it from node1:
/opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181 --topic kafkatest
[root@node1 opt]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181 --topic kafkatest
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Topic: kafkatest PartitionCount: 3 ReplicationFactor: 3 Configs:
Topic: kafkatest Partition: 0 Leader: 1 Replicas: 1,3,2 Isr: 1,3,2
Topic: kafkatest Partition: 1 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3
Topic: kafkatest Partition: 2 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Describe it from node2:
[root@node2 opt]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181 --topic kafkatest
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Topic: kafkatest PartitionCount: 3 ReplicationFactor: 3 Configs:
Topic: kafkatest Partition: 0 Leader: 1 Replicas: 1,3,2 Isr: 1,3,2
Topic: kafkatest Partition: 1 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3
Topic: kafkatest Partition: 2 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
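Optionally, run an end-to-end message test with the console producer/consumer shipped with kafka (a sketch; a line typed on the producer side should show up on the consumer):
[root@elk-server kafka]# /opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.208.120:9092 --topic kafkatest
[root@node1 opt]# /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.208.121:9092 --topic kafkatest --from-beginning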
Configure kafka as the buffer
Configure the filebeat output.
1. Edit filebeat
Syntax:
output.kafka:
  hosts: ["192.168.208.120:9092", "192.168.208.121:9092", "192.168.208.122:9092"]
  topic: elklog
The config file is as follows:
[root@elk-server filebeat]# vim filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true
- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/elasticsearch.log
  tags: ["elastic_java"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
setup.kibana:
  host: "192.168.208.120:5601"
output.kafka:
  hosts: ["192.168.208.120:9092", "192.168.208.121:9092", "192.168.208.122:9092"]
  topic: elklog   # kafka topic to publish to
2. Edit logstash
[root@elk-server conf.d]# vim /etc/logstash/conf.d/kafka.conf
input {
  kafka {
    bootstrap_servers => "192.168.208.120:9092"   # any broker in the kafka cluster
    topics => ["elklog"]
    group_id => "logstash"
    codec => "json"
  }
}
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
  if "access" in [tags] {
    elasticsearch {
      hosts => "http://192.168.208.120:9200"
      manage_template => false
      index => "nginx_access-%{+yyyy.MM.dd}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts => "http://192.168.208.120:9200"
      manage_template => false
      index => "nginx_error-%{+yyyy.MM.dd}"
    }
  }
}
3. Restart the services
[root@elk-server /]# systemctl restart filebeat
[root@elk-server /]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka.conf
Now refresh nginx to generate access traffic and the data is written through to elasticsearch.
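To verify the whole pipeline (a sketch): the elklog topic should exist in kafka and the nginx indices in es should show growing document counts.
[root@elk-server /]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.208.120:2181 --topic elklog
[root@elk-server /]# curl -s 'http://192.168.208.120:9200/_cat/indices?v' | grep nginx_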
=======================================================================================
elasticsearch ports: 9200 and 9300; one for HTTP/data access, one for cluster transport
kibana port: 5601
filebeat
logstash
zookeeper ports: 2888, 3888, 2181; 2181 is the client port kafka connects to, 2888/3888 handle sync and leader election
kafka port: 9092
redis port: 6379
Final architecture
nginx + keepalived --> redis --> logstash --> es --> kibana
nginx + keepalived --> kafka --> logstash --> es --> kibana