Elastic Stack (ELK) Part 2: Beats

1. Filebeat (monitoring log files)

Install from the yum repository; the repository configuration is the same as for Elasticsearch (see the earlier yum installation section).
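For reference, a typical repository file looks like the sketch below (this assumes the 7.x repository; adjust the version to match your cluster):

# /etc/yum.repos.d/elasticsearch.repo (sketch)
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md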

yum list | grep filebeat  # check the available package
yum install filebeat -y   # install
rpm -ql filebeat          # list the installed files
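The rpm also installs a systemd unit, so the service can be started in the usual way once the configuration below is in place (a minimal sketch; on older init systems use service filebeat start instead):

systemctl enable filebeat    # start on boot
systemctl start filebeat     # start now
systemctl status filebeat    # confirm it is running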

Running yum list | grep elasticsearch shows all the packages provided by the Elastic repository, as in the screenshots below.

[Screenshot: available packages]

[Screenshot: deployment locations of the installed files]

Filebeat has many configuration options; the important parts are the inputs and the output, and the rest can be set as needed. The main configuration file is /etc/filebeat/filebeat.yml:
vim /etc/filebeat/filebeat.yml

#=========================== Filebeat inputs =============================

filebeat.inputs: # inputs: the files to monitor
- type: log      # system log files; add more paths as needed
  paths:
    - /var/log/*.log
    - /var/log/secure
    - /var/log/messages
  tags: ["system"] # tag used to classify the events
- type: log      # nginx access log
  paths:
    - /home/app/tengine/logs/access.log
    #- c:\programdata\elasticsearch\logs\*
  tags: ["nginx"]
#============================= Filebeat modules ===============================
filebeat.config.modules: # Filebeat's per-log-type modules; see the modules.d directory.
                         # They are not used here (the files there are disabled), but when
                         # shipping data directly to Elasticsearch the modules can be used
                         # to parse the logs with ready-made templates.
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
#==================== Elasticsearch template setting ==========================
# Default number of primary shards for the Filebeat index
setup.template.settings:
  index.number_of_shards: 1
#================================ General  =====================================
# General settings
#============================== Dashboards =====================================
# Dashboard loading; configure as needed
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Kibana connection, used for loading dashboards
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
 # Kibana Host
 # Scheme and port can be left out and will be set to the default (http and 5601)
 # In case you specify and additional path, the scheme is required: http://localhost:5601/path
 # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
 #host: "localhost:5601"
 # Kibana Space ID
 # ID of the Kibana Space into which the dashboards should be loaded. By default,
 # the Default Space will be used.
 #space.id:
#============================= Elastic Cloud ==================================
# Elastic Cloud settings
# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Output: Elasticsearch, Logstash, Kafka, and Redis are all supported
# Configure what output to use when sending the data collected by the beat.
#output.console:  # print events to the terminal (useful for debugging)
#  enabled: true
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
 # Array of hosts to connect to.
# hosts: ["172.16.15.53:9200","172.16.15.26:9200"]
 # Optional protocol and basic auth credentials.
 #protocol: "https"
 #username: "elastic"
 #password: "changeme"
#----------------------------- Logstash output --------------------------------
# output.logstash:
 # # The Logstash hosts
 # hosts: ["172.16.15.53:5044"]
 # Optional SSL. By default is off.
 # List of root certificates for HTTPS server verifications
 #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
 # Certificate for SSL client authentication
 #ssl.certificate: "/etc/pki/client/cert.pem"
 # Client Certificate Key
 #ssl.key: "/etc/pki/client/cert.key"
#----------------------------- kafka output --------------------------------
output.kafka:
  enabled: true
  hosts: ["172.16.15.48:9092"]
  topic: system  # Kafka topic the events are written to
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
 - add_host_metadata: ~
 - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
#============================== Xpack Monitoring ===============================
# X-Pack monitoring settings
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
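Before restarting the service it can be worth validating the edited file and confirming that events reach Kafka; a quick check could look like this (sketch; the kafka-console-consumer script location depends on how Kafka was installed):

filebeat test config -c /etc/filebeat/filebeat.yml   # check the configuration syntax
systemctl restart filebeat

# on the Kafka side, confirm that events arrive in the "system" topic:
kafka-console-consumer.sh --bootstrap-server 172.16.15.48:9092 --topic system --from-beginning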

The configuration above sends the collected events to Kafka, from which Logstash pulls them; a sketch of that Logstash pipeline is shown below.
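A minimal sketch of the Logstash side, assuming the hosts and topic used in the examples above (a filter section could route on the tags set in filebeat.yml):

# /etc/logstash/conf.d/beats-kafka.conf (sketch)
input {
  kafka {
    bootstrap_servers => "172.16.15.48:9092"
    topics => ["system"]
    codec => json                # Filebeat writes JSON events to Kafka
  }
}
output {
  elasticsearch {
    hosts => ["172.16.15.53:9200", "172.16.15.26:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}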
If Filebeat ships directly to an Elasticsearch cluster instead, consider using the ready-made module templates and dashboards. The modules that come with templates are listed below:

[root@EASYCMDB log]# filebeat modules list
Enabled:

Disabled:
apache
auditd
elasticsearch
haproxy
icinga
iis
iptables
kafka
kibana
logstash
mongodb
mysql
nginx
osquery
postgresql
redis
santa
suricata
system
traefik
zeek

Other modules can be enabled in a similar way, for example MySQL:

filebeat modules enable mysql
service filebeat restart
filebeat setup --dashboards

Then edit mysql.yml under /etc/filebeat/modules.d/ (the Filebeat module reads the MySQL error log and slow log, so unlike the Metricbeat module shown later it takes log paths rather than database credentials):

- module: mysql
  # Error logs
  error:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Slow logs
  slowlog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
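When Filebeat's output points at Elasticsearch rather than Kafka, the module's ingest pipelines can also be loaded with the setup command; a minimal sketch (flags as in Filebeat 7.x):

filebeat setup --pipelines --modules mysql   # load the module's ingest pipelines into Elasticsearch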

Reference: https://www.elastic.co/guide/en/beats/filebeat/7.2/filebeat-modules.html ; choose the module configuration you need.

2. Mysqlbeat (monitoring database performance)

Reference: https://github.com/adibendahan/mysqlbeat
It is a third-party beat, and in practice the results are not great.

3. Metricbeat (monitoring server performance)

Reference: https://www.elastic.co/guide/en/beats/metricbeat/7.1/metricbeat-getting-started.html
Install via yum:
yum install -y metricbeat
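As with Filebeat, the rpm can be queried to see where the files were installed (a quick sketch, assuming the same rpm-based install):

rpm -qc metricbeat   # configuration files, e.g. /etc/metricbeat/metricbeat.yml
rpm -ql metricbeat   # all installed files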
The main configuration file is metricbeat.yml:

###################### Metricbeat Configuration Example #######################

# This file is an example configuration file highlighting only the most common
# options. The metricbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

#==========================  Modules configuration ============================

metricbeat.config.modules: # enable the supported modules here (system, Apache, and so on)
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true  # set to true so changes in modules.d are picked up automatically

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana: # Kibana endpoint, used for loading the dashboards

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "172.16.15.53:5601"
  username: "elastic"
  password: "RRCOfh9UudUIVZ4i954E"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using metricbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch: # output to Elasticsearch
  # # Array of hosts to connect to.
  # hosts: ["172.16.15.53:9200","172.16.15.48:9200"]

  # # # Optional protocol and basic auth credentials.
  # # #protocol: "https"
  # username: "elastic"
  # password: "RRCOfh9UudUIVZ4i954E"
output.kafka: # output to Kafka
  # initial brokers for reading cluster metadata
  hosts: ["172.16.15.48:9092"]

  # message topic selection + partitioning
  topic: 'testTopic' # note: the Kafka topic the events are written to
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# metricbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

By default only the system module is enabled:

[root@EASYCMDB ~]# metricbeat modules list # list the supported modules and their status
Enabled:
system

Disabled:
aerospike
apache
aws
ceph
couchbase
couchdb
docker
dropwizard
elasticsearch
envoyproxy
etcd
golang
graphite
haproxy
http
jolokia
kafka
kibana
kubernetes
kvm
logstash
memcached
mongodb
mssql
munin
mysql
nats
nginx
php_fpm
postgresql
prometheus
rabbitmq
redis
traefik
uwsgi
vsphere
windows
zookeeper

Other modules can be added in the same way, for example MySQL:

metricbeat modules enable mysql #enable the mysql module
service metricbeat restart #restart the service
metricbeat setup --dashboards #load the dashboards into Kibana

Then edit mysql.yml under /etc/metricbeat/modules.d/:

- module: mysql
  #metricsets:
  #  - status
  #  - galera_status
  period: 10s
  # Host DSN should be defined as "user:pass@tcp(127.0.0.1:3306)/"
  # The username and password can either be set in the DSN or using the username
  # and password config options. Those specified in the DSN take precedence.
  hosts: ["tcp(127.0.0.1:3306)/"] # connect to MySQL over tcp
  # Username of hosts. Empty by default.
  username: root # username
  # Password of hosts. Empty by default.
  password: xxxxxx # password (placeholder)
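After filling in the credentials, the module can be exercised once before restarting the service; a minimal sketch, assuming the status metricset:

metricbeat test modules mysql status   # fetch the metricset once and report any errors
service metricbeat restart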

For other modules, see the corresponding official module documentation: https://www.elastic.co/guide/en/beats/metricbeat/7.2/metricbeat-modules.html

4. Packetbeat

Reference: https://www.elastic.co/guide/en/beats/packetbeat/7.1/packetbeat-getting-started.html
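Packetbeat captures traffic directly from the network interface rather than reading log files. A minimal sketch of an install and configuration, assuming the same yum repository and Kafka broker as above (the interface name, ports, and the "packet" topic name are just examples to adjust):

yum install -y packetbeat

# /etc/packetbeat/packetbeat.yml (sketch)
packetbeat.interfaces.device: any   # sniff all interfaces
packetbeat.protocols:
- type: http
  ports: [80, 8080]
- type: mysql
  ports: [3306]
output.kafka:
  hosts: ["172.16.15.48:9092"]
  topic: packet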
