An Elasticsearch Outage Caused by a Cluster Expansion, and the Recovery

1. Background

We run an ELK cluster at the company, set up as follows:

| Hostname | CPU | Memory | Disk | RAID | OS | Deployed software |
| --- | --- | --- | --- | --- | --- | --- |
| m21p22 | Intel(R) Xeon(R) CPU E5620 @ 2.40GHz x16 | 148G | 2T | RAID 1 | CentOS 5.8 | redis, kibana |
| m21p23 | Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz x24 | 160G | 7T | RAID 1+0 | CentOS 6.8 | logstash, elasticsearch*2 |
| m21p24 | Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz x24 | 160G | 7T | RAID 1+0 | CentOS 6.8 | logstash, elasticsearch*2 |

Since m21p22 is an older machine that also hosts applications deployed by other teams, and its disk write performance is poor, elasticsearch was placed on the two better-provisioned servers, while software that does not depend much on disk, such as kibana and redis, went onto m21p22. Given the deployment complexity and the state of the servers, we chose to have redis receive the log data from the beats agents and then let logstash load-balance it into elasticsearch. That was the original ELK setup.

As the business grew, this cluster became increasingly unable to keep up: during log spikes, data would pile up in redis, and once query volume rose, the nodes' CPU and load would trigger alerts. After discussing it with the operations team, we requested two more servers as additional elasticsearch nodes to take load off the original machines.

The configuration after adding the new servers:

| Hostname | CPU | Memory | Disk | RAID | OS | Deployed software |
| --- | --- | --- | --- | --- | --- | --- |
| m21p22 | Intel(R) Xeon(R) CPU E5620 @ 2.40GHz x16 | 148G | 2T | RAID 1 | CentOS 5.8 | redis, kibana |
| m21p23 | Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz x24 | 160G | 7T | RAID 1+0 | CentOS 6.8 | logstash, elasticsearch*2 |
| m21p24 | Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz x24 | 160G | 7T | RAID 1+0 | CentOS 6.8 | logstash, elasticsearch*2 |
| m21p88 | Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz x24 | 128G | 7T | RAID 1+0 | CentOS 6.5 | elasticsearch*2 |
| m21p89 | Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz x24 | 128G | 7T | RAID 1+0 | CentOS 6.5 | elasticsearch*2 |

Once the new servers were in place, the first task was to expand elasticsearch. With two nodes per server, the cluster would double from 4 nodes to 8.

2. The expansion

Original nodes:

| Node name | Server | HTTP port | Rack | Xms & Xmx |
| --- | --- | --- | --- | --- |
| node1 | 192.168.21.23 | 9201 | rack1 | 20G |
| node2 | 192.168.21.23 | 9202 | rack1 | 20G |
| node3 | 192.168.21.24 | 9203 | rack2 | 20G |
| node4 | 192.168.21.24 | 9204 | rack2 | 20G |

New nodes:

| Node name | Server | HTTP port | Rack | Xms & Xmx |
| --- | --- | --- | --- | --- |
| node5 | 192.168.21.88 | 9205 | rack3 | 20G |
| node6 | 192.168.21.88 | 9206 | rack3 | 20G |
| node7 | 192.168.21.89 | 9207 | rack4 | 20G |
| node8 | 192.168.21.89 | 9208 | rack4 | 20G |

To bring the new nodes up with the layout above, the main setting to get right is:
discovery.zen.ping.unicast.hosts: ["192.168.21.23", "192.168.21.24","192.168.21.88","192.168.21.89"]
With that in place, the new nodes join the cluster and scale it out horizontally. Once all of this was done, the new nodes had joined the cluster and shard data started moving onto them.
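A quick sanity check that the new nodes have really joined and that shards are relocating can be done with the cat APIs (a minimal sketch; any node's HTTP port works, 9201 is just used as an example here):

# list every node in the cluster; node5-node8 should show up here
curl -XGET 'http://192.168.21.23:9201/_cat/nodes?v'

# overall health plus the number of relocating/initializing shards
curl -XGET 'http://192.168.21.23:9201/_cat/health?v'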

No per-index shard counts had ever been set, so every index was using the system default of 5 primary shards, even though the indices vary a lot in size. After checking the index sizes, we decided to double the shard count for the larger indices.
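One way to see which indices are the big ones is the cat indices API (a sketch):

# document count and on-disk store size for every index, to pick the ones worth resharding
curl -XGET 'http://192.168.21.23:9201/_cat/indices?v'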
The operations were as follows:

curl -XPOST http://192.168.21.88:9205/_template/nginx -d '{"order":0,"template":"nginx-*","settings":{"index":{"number_of_shards":"10","refresh_interval":"5s"}},"mappings":{},"aliases":{}}'

curl -XPUT http://192.168.21.88:9205/_template/applog-yufa -d '{"order":0,"template":"applog-yufa-*","settings":{"number_of_shards":10,"index":{"refresh_interval":"5s"}},"mappings":{"_default_":{"dynamic_templates":[{"strings_as_keywords":{"mapping":{"norms":false,"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"match_mapping_type":"string"}}],"_all":{"norms":{"enabled":false},"enabled":true},"properties":{"@timestamp":{"type":"date"},"offset":{"type":"long","doc_values":"true"},"level":{"type":"keyword"},"appName":{"type":"keyword"},"beat":{"properties":{"hostname":{"type":"keyword"},"name":{"type":"keyword"}}},"input_type":{"type":"keyword"},"source":{"type":"keyword"},"message":{"type":"text"},"type":{"type":"keyword"},"class":{"type":"keyword"},"threadName":{"type":"keyword"},"timestamp":{"type":"keyword"}}}},"aliases":{}}'

curl -XPUT http://192.168.21.88:9205/_template/applog-prod -d '{"order":0,"template":"applog-prod-*","settings":{"number_of_shards":10,"index":{"refresh_interval":"5s"}},"mappings":{"_default_":{"dynamic_templates":[{"strings_as_keywords":{"mapping":{"norms":false,"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"match_mapping_type":"string"}}],"_all":{"norms":{"enabled":false},"enabled":true},"properties":{"@timestamp":{"type":"date"},"offset":{"type":"long","doc_values":"true"},"level":{"type":"keyword"},"appName":{"type":"keyword"},"beat":{"properties":{"hostname":{"type":"keyword"},"name":{"type":"keyword"}}},"input_type":{"type":"keyword"},"source":{"type":"keyword"},"message":{"type":"text"},"type":{"type":"keyword"},"class":{"type":"keyword"},"threadName":{"type":"keyword"},"timestamp":{"type":"keyword"}}}},"aliases":{}}'

Note: before the PUT, first GET the existing _template, apply the additions or changes only to the parts that need to change, and then PUT the whole template back. Since a PUT replaces the entire template, this keeps the parts that should stay the same from being overwritten.
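The GET-then-PUT workflow looks roughly like this (a sketch; the local file name nginx-template.json is only for illustration):

# 1. fetch the current template so nothing gets lost
curl -XGET 'http://192.168.21.88:9205/_template/nginx?pretty' > nginx-template.json

# 2. edit nginx-template.json: strip the outer {"nginx": ...} wrapper that GET adds,
#    change number_of_shards, and leave everything else untouched

# 3. put the complete, modified template back
curl -XPUT 'http://192.168.21.88:9205/_template/nginx' -d @nginx-template.json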

By the time all of this was done it was 8 PM, so I clocked out and went home.

3. The failure

Before I even got to the office the next morning, colleagues were flooding me with messages: the ELK cluster was unusable!
When I arrived and connected to the servers, all of the newly added nodes were hung. Most of the data had already been relocated onto them, and the servers themselves could no longer be logged into, so I asked the operations team to kill the elasticsearch processes.
Below is the status observed on one of the hung nodes during the recovery:


[Figure: a node in the hung state]

After restarting these nodes, the cluster state started to recover:


[Figure: cluster health is red]

Checking redis showed a large backlog of data:

[Figure: redis keys]

Redis memory usage was climbing fast and had already reached 10 GB:

[Figure: redis memory usage]
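The backlog can also be confirmed from redis itself (a sketch; the host 192.168.21.22 and the list key name logstash are assumptions, they depend on the actual beats/logstash configuration):

# memory actually used by redis
redis-cli -h 192.168.21.22 info memory | grep used_memory_human

# length of the list that beats writes to and logstash reads from (key name is an assumption)
redis-cli -h 192.168.21.22 llen logstash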

The logstash log:

[Figure: logstash log]

It contained the following message:

retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[nginx-2017.08.16][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [nginx-2017.08.16] containing [2] requests]"})

Too many nodes had gone down, so some indices were left with unallocated primary shards. Writes into those indices timed out, logstash's indexing throughput dropped, and the data piled up in redis.
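On ES 5.x the cluster allocation explain API can tell you why a shard stays unassigned; with an empty body it should pick the first unassigned shard it finds (a sketch):

# explain the first unassigned shard the master finds
curl -XGET 'http://192.168.21.23:9201/_cluster/allocation/explain?pretty'

# or ask about one specific shard
curl -XGET 'http://192.168.21.23:9201/_cluster/allocation/explain?pretty' -d '{"index":"nginx-2017.08.16","shard":0,"primary":true}'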

As for why the nodes hung, the elasticsearch log showed:

org.elasticsearch.transport.RemoteTransportException: [node6][192.168.21.88:9301][indices:data/write/bulk[s][r]]
Caused by: org.elasticsearch.index.engine.IndexFailedEngineException: Index failed for [applog-prod#AV3nwvfEpn_W3_8eumB2]
        at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:418) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:552) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:542) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnReplica(TransportIndexAction.java:166) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.onReplicaShard(TransportShardBulkAction.java:457) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.onReplicaShard(TransportShardBulkAction.java:74) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:85) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:50) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:457) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:434) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:142) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationLock(IndexShard.java:1667) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquireReplicaOperationLock(TransportReplicationAction.java:862) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:526) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:418) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:408) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1348) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
Caused by: org.apache.lucene.store.AlreadyClosedException: translog is already closed
        at org.elasticsearch.index.translog.Translog.ensureOpen(Translog.java:1342) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.Translog.add(Translog.java:433) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.engine.InternalEngine.maybeAddToTranslog(InternalEngine.java:393) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:524) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:409) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:552) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:542) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnReplica(TransportIndexAction.java:166) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:85) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:50) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:457) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:434) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:142) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationLock(IndexShard.java:1667) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquireReplicaOperationLock(TransportReplicationAction.java:862) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:526) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:418) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:408) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1348) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_101]
Caused by: java.nio.file.FileSystemException: /opt/elasticsearch/node6/data/nodes/0/indices/NP2tORUfSq6jl0lb5CzOVw/2/translog/translog.ckp: Too many open files in system
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
        at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) ~[?:?]
        at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[?:1.8.0_101]
        at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[?:1.8.0_101]
        at org.elasticsearch.index.translog.Checkpoint.write(Checkpoint.java:127) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.TranslogWriter.writeCheckpoint(TranslogWriter.java:312) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.TranslogWriter.syncUpTo(TranslogWriter.java:273) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.Translog.ensureSynced(Translog.java:553) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.Translog.ensureSynced(Translog.java:578) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard$1.write(IndexShard.java:1679) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.processList(AsyncIOProcessor.java:107) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.drainAndProcess(AsyncIOProcessor.java:99) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.put(AsyncIOProcessor.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.sync(IndexShard.java:1701) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction$AsyncAfterWriteAction.run(TransportWriteAction.java:310) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction$WriteReplicaResult.<init>(TransportWriteAction.java:177) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:50) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:457) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:434) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:142) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationLock(IndexShard.java:1667) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquireReplicaOperationLock(TransportReplicationAction.java:862) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:526) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:418) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:408) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1348) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_101]

The key line is: /opt/elasticsearch/node6/data/nodes/0/indices/NP2tORUfSq6jl0lb5CzOVw/2/translog/translog.ckp: Too many open files in system
In other words, the number of simultaneously open files had hit the system limit, which is also why we could not even log into the machines.
It is not hard to see how this happened: each server runs two nodes, both under the elastic user, whose open-file count is capped in limits.conf. While the cluster was relocating shards to the new nodes, it was opening and writing a huge number of files, and at the same time live data kept streaming in. The open-file count hit the maximum threshold, and elasticsearch hung.
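A rough checklist for the limits involved (a sketch; the exact numbers are assumptions and should be sized to how many nodes share one machine). "Too many open files in system" normally points at the kernel-wide limit, while the per-user nofile limit in limits.conf caps each elasticsearch process:

# kernel-wide limit and current usage
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr

# raise the kernel-wide limit (persist it in /etc/sysctl.conf as well)
sysctl -w fs.file-max=2000000

# per-user limit for the elastic user, in /etc/security/limits.conf:
#   elastic  soft  nofile  131072
#   elastic  hard  nofile  131072

# what a running node is allowed and actually using (replace <pid> with the node's pid)
cat /proc/<pid>/limits | grep 'open files'
ls /proc/<pid>/fd | wc -l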

4. The fix

Query the shard status and look for unassigned shards:

curl -XGET http://192.168.21.23:9203/_cat/shards |fgrep UNASSIGNED
[Figure: unassigned shards]

Even the indices that needed to take writes that day had unassigned shards:

curl -XGET http://192.168.21.23:9203/_cat/shards |fgrep UNASSIGNED |grep '2017.08.16'
[Figure: unassigned shards of that day's indices]

The same problem was visible in Kibana:

![kibana status](http://upload-images.jianshu.io/upload_images/3237432-79af260a648503ef.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

With the redis backlog still growing, the immediate priority was to get writes flowing again so that the data in redis would be consumed; otherwise the redis server would run out of memory.
Whether elasticsearch could eventually recover these shards on its own, and how long that would take, could be worried about later.
After going through the API documentation, the command needed is reroute.
Allocating one primary shard through Kibana:

POST /_cluster/reroute
{
  "commands": [
    {
      "allocate_stale_primary": {
        "index": "applog-prod-2017.08.16",
        "shard": 2,
        "node": "node7",
        "accept_data_loss": true
      }
    }
  ]
}
[Figure: allocating the primary shard]

The command format is documented at https://kibana.logstash.es/content/elasticsearch/principle/shard-allocate.html
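After each reroute the progress can be checked before moving on to the next shard (a sketch):

# how many shards are still unassigned
curl -s -XGET 'http://192.168.21.23:9203/_cat/shards' | fgrep UNASSIGNED | wc -l

# overall status should move from red towards yellow/green as primaries come back
curl -XGET 'http://192.168.21.23:9203/_cluster/health?pretty'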

Note that when rerouting a primary shard, the target node must already have that shard's index files on disk; otherwise the recovery fails with an index-not-found error:

[2017-08-23T19:40:24,000][WARN ][o.e.i.c.IndicesClusterStateService] [node1-1] [[applog-prod-2017.07.26][1]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [applog-prod-2017.07.26][1]: Recovery failed on {node1-1}{JIKzocuZRtec_XkrM1eXDg}{JN_7zoJKRBCSNTCgoDzpuQ}{192.168.21.23}{192.168.21.23:9300}{rack=r1, ml.enabled=true}
        at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1491) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.5.1.jar:5.5.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed to fetch index version after copying it over
        at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:344) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:90) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:88) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1239) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1487) ~[elasticsearch-5.5.1.jar:5.5.1]
        ... 4 more
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: shard allocated for local recovery (post api), should exist, but doesn't, current files: []
        at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:329) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:90) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:88) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1239) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1487) ~[elasticsearch-5.5.1.jar:5.5.1]
        ... 4 more
Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file found in store(mmapfs(/opt/elasticsearch/elasticsearch-node1-1/data/nodes/0/indices/dYkqpjsZRw2apPsylTHziQ/1/index)): files: []
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:687) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:644) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
        at org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:450) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
        at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:129) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.store.Store.readSegmentsInfo(Store.java:199) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.store.Store.readLastCommittedSegmentsInfo(Store.java:184) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:319) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:90) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:88) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1239) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1487) ~[elasticsearch-5.5.1.jar:5.5.1]
        ... 4 more

So before allocating a primary shard, always confirm that the target node actually has that shard's files on disk.
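One way to check which nodes hold an on-disk copy of a shard before running allocate_stale_primary is the shard stores API (a sketch, using the index from the example above):

# per shard, list the nodes that have a copy of the data on disk
curl -XGET 'http://192.168.21.23:9203/applog-prod-2017.08.16/_shard_stores?status=all&pretty'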

5. Further thoughts

Although the cluster recovered and data can be written again, a few things still need improvement:
1. Redis is a weak point as the buffering layer. Now that there are enough servers, it should be replaced with Kafka, which keeps the data on disk and avoids the risk of redis running out of memory.
2. When expanding elasticsearch, avoid adding many nodes at the same time. If a larger batch of nodes has to be added at once, check in advance whether the open-file count can hit the configured limits (see the sketch below).
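Open file descriptor usage per node can also be watched from elasticsearch itself, which makes it possible to catch the problem before the OS limit is hit (a sketch):

# per-node process stats include open_file_descriptors and max_file_descriptors
curl -XGET 'http://192.168.21.23:9201/_nodes/stats/process?pretty'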
