ELK on docker-compose

ELK is an acronym of three components: Elasticsearch, Logstash and Kibana, used for log collection, search and visualization. Ordered by the direction of log data flow, the three actually go L-E-K.
Any future log-aggregation work will inevitably involve ELK, so let's first try running elasticsearch, kibana and logstash with docker.
A docker-compose.yml is given at the end of this post. This delivery format essentially wraps the whole cluster up as code: on any machine that has docker-compose installed and can pull the relevant images, an identical cluster can be reproduced with no manual intervention. Going through this exercise also gave me a better sense of how docker improves delivery, and counts as a hands-on take on "everything as code".
Just follow the official guides step by step: Install Elasticsearch with Docker, Running Kibana on Docker, and Configuring Logstash for Docker.

elasticsearch

To use elasticsearch, pull the image:

docker pull docker.elastic.co/elasticsearch/elasticsearch:6.7.1

To start it:

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.7.1
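Once the container is up (this can take tens of seconds), a quick sanity check is to hit the HTTP port mapped above. This is a sketch assuming the docker run command above succeeded on the local machine:

```shell
# Basic node info: name, version, cluster name.
curl -s http://localhost:9200/

# Cluster health; a fresh single node typically reports green or yellow.
curl -s 'http://localhost:9200/_cluster/health?pretty'
```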

kibana

To use kibana, pull the image:

docker pull docker.elastic.co/kibana/kibana:6.7.1

To start it:

docker run -p 5601:5601 --link a87deb7c0173:elasticsearch docker.elastic.co/kibana/kibana:6.7.1

The --link argument is based on the "default settings" section of the kibana doc above:

elasticsearch.hosts http://elasticsearch:9200

Then find the container ID of the already-running elasticsearch:

$ docker ps 
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                                            NAMES
a87deb7c0173        docker.elastic.co/elasticsearch/elasticsearch:6.6.2   "/usr/local/bin/dock…"   2 hours ago         Up 31 minutes       0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   hungry_jones

which is where the --link value above comes from.
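Note that --link is a legacy docker feature; a sketch of the now-preferred alternative is a user-defined bridge network, on which containers resolve each other by name (the network name elk below is my own choice). Because kibana's default elasticsearch.hosts is http://elasticsearch:9200, naming the elasticsearch container accordingly makes it work without any --link:

```shell
# Create a user-defined network; containers on it can resolve
# each other by container name.
docker network create elk

docker run -d --name elasticsearch --net elk \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.7.1

# kibana resolves the hostname "elasticsearch" on the same network.
docker run -d --net elk -p 5601:5601 \
  docker.elastic.co/kibana/kibana:6.7.1
```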

logstash

To use logstash, pull the image:

docker pull docker.elastic.co/logstash/logstash:6.7.1

First, try out some basic functionality with the following command:

docker run -it --rm docker.elastic.co/logstash/logstash:6.7.1 logstash -e 'input { stdin{} } output { stdout{} }'

The trailing -e flag supplies the pipeline configuration inline (it does not touch /usr/share/logstash/config/logstash.yml); it is equivalent to a pipeline config file containing:

input {
  stdin{}
} 
output { 
  stdout{}
}
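Instead of passing the pipeline inline with -e, the Configuring Logstash for Docker guide also lets you bind-mount a pipeline directory. A sketch, assuming the config above is saved as pipeline/logstash.conf under the current directory:

```shell
# Files in /usr/share/logstash/pipeline/ are picked up as pipeline config,
# replacing the image's default pipeline.
docker run -it --rm \
  -v "$PWD/pipeline/:/usr/share/logstash/pipeline/" \
  docker.elastic.co/logstash/logstash:6.7.1
```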

This produces logs like:

$ docker run -it --rm docker.elastic.co/logstash/logstash:6.7.1 logstash -e 'input { stdin{} } output { stdout{} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-04-07T16:31:41,011][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-04-07T16:31:41,059][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-04-07T16:31:41,097][WARN ][logstash.runner          ] Deprecated setting `xpack.monitoring.elasticsearch.url` please use `xpack.monitoring.elasticsearch.hosts`
[2019-04-07T16:31:41,990][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-07T16:31:42,009][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.7.1"}
[2019-04-07T16:31:42,095][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"a81dbe0b-66b2-4927-bbea-6b01ca2e60b5", :path=>"/usr/share/logstash/data/uuid"}
[2019-04-07T16:31:43,410][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-04-07T16:31:45,672][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-07T16:31:56,333][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out"}
[2019-04-07T16:32:06,477][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2019-04-07T16:32:06,492][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out"}
[2019-04-07T16:32:06,578][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2019-04-07T16:32:16,363][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-07T16:32:16,518][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x58978a55 run>"}
The stdin plugin is now waiting for input:
[2019-04-07T16:32:16,634][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-07T16:32:17,201][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Now anything typed on stdin is echoed back on stdout; the output looks like:

hello
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
    "@timestamp" => 2019-04-07T16:34:24.458Z,
       "message" => "hello",
          "host" => "525c92594d4a",
      "@version" => "1"
}

Next, connect it to elasticsearch:

docker run -it --rm --link a433f5cfdc57:elasticsearch docker.elastic.co/logstash/logstash:6.7.1 logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["elasticsearch:9200"] } }'

Only the output section changes:

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  } 
}
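In a real setup a filter stage usually sits between input and output. A minimal sketch using the standard grok filter to parse a hypothetical "LEVEL message" line format (the field names level and msg are my own, not from this post):

```
input {
  stdin {}
}
filter {
  grok {
    # e.g. "WARN disk almost full" -> level: "WARN", msg: "disk almost full"
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```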

The logs:

$ docker run -it --rm --link a433f5cfdc57:elasticsearch docker.elastic.co/logstash/logstash:6.7.1 logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["elasticsearch:9200"] } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-04-07T16:48:03,039][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-04-07T16:48:03,077][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-04-07T16:48:03,091][WARN ][logstash.runner          ] Deprecated setting `xpack.monitoring.elasticsearch.url` please use `xpack.monitoring.elasticsearch.hosts`
[2019-04-07T16:48:03,944][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-07T16:48:03,963][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.7.1"}
[2019-04-07T16:48:04,028][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"04e7b737-aca4-4fe1-b1a6-f5d0d6a60b39", :path=>"/usr/share/logstash/data/uuid"}
[2019-04-07T16:48:05,241][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-04-07T16:48:07,217][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-07T16:48:07,705][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-07T16:48:07,905][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2019-04-07T16:48:07,916][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-04-07T16:48:08,262][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2019-04-07T16:48:08,266][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2019-04-07T16:48:17,704][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-07T16:48:17,861][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-07T16:48:17,903][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-07T16:48:17,920][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-07T16:48:17,923][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-04-07T16:48:17,964][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-04-07T16:48:17,980][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-04-07T16:48:18,027][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-04-07T16:48:18,156][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2019-04-07T16:48:18,195][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1a741f53 run>"}
The stdin plugin is now waiting for input:
[2019-04-07T16:48:18,383][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-07T16:48:19,519][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=6&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"79d6d565e50fe66f5b522da92c3f148f42013ca30f089ab756df0e5573e82c9c", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_ad64b9b4-cc69-4977-b33a-3ce7ef84e0bc", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>false, ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-04-07T16:48:19,541][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2019-04-07T16:48:19,582][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-07T16:48:19,597][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-07T16:48:19,606][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-07T16:48:19,606][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-04-07T16:48:19,613][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2019-04-07T16:48:19,681][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x5e523e70 sleep>"}
[2019-04-07T16:48:19,688][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2019-04-07T16:48:20,147][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Just as with kibana's --link, inspecting the logstash image's default settings reveals its elasticsearch configuration:

xpack.monitoring.elasticsearch.hosts http://elasticsearch:9200

Wiring ELK together

Type some text at the logstash console, then open kibana and enter logstash-* at the index pattern prompt.

(screenshot: configuring indices once connected)

Returning to the Discover tab then shows the collected logs.

(screenshot: the collected logs)
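The logstash-* pattern works because, by default, the elasticsearch output writes each event into a daily index named from its timestamp (the index setting defaults to logstash-%{+YYYY.MM.dd}). A small sketch of how timestamps map to index names:

```python
from datetime import datetime, timezone
from fnmatch import fnmatch

def logstash_index_name(ts: datetime) -> str:
    """Default daily index name used by logstash's elasticsearch output."""
    return ts.strftime("logstash-%Y.%m.%d")

name = logstash_index_name(datetime(2019, 4, 7, 16, 48, tzinfo=timezone.utc))
print(name)                          # logstash-2019.04.07
print(fnmatch(name, "logstash-*"))   # True: matched by the Kibana pattern
```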

Finally, here is the promised docker-compose.yml:

version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      discovery.type: single-node
  kibana:
    image: docker.elastic.co/kibana/kibana:6.7.1
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
  logstash:
    image: docker.elastic.co/logstash/logstash:6.7.1
    depends_on:
      - elasticsearch
    ports:
      - "9600:9600"
    stdin_open: true
    tty: true
    entrypoint: logstash -e 'input { stdin{} }  output { elasticsearch {  hosts => ["elasticsearch:9200"] } }'

The stdin_open and tty entries under logstash correspond to docker run -it, i.e. keep stdin open and allocate a tty so logstash can read from standard input; however, typing into the docker-compose up console does not actually reach it.
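With stdin_open and tty set, the container does keep an open stdin, but docker-compose up only streams output. One way that should let you type into the pipeline is attaching to the container directly (a sketch, assuming the compose project is already running):

```shell
# Look up the logstash container started by compose and attach to its stdin/tty.
docker attach "$(docker-compose ps -q logstash)"

# To detach without stopping the container, press Ctrl-p then Ctrl-q.
```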
