Environment
Flink 1.8.1
Symptom
When submitting a WordCount job to the Yarn cluster, the following error appeared and the job failed to run.
KillApplication over rm1 after 29 failover attempts. Trying to failover immediately.
java.io.IOException: The client is stopped
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
at org.apache.hadoop.ipc.Client.call(Client.java:1345)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy8.forceKillApplication(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.forceKillApplication(ApplicationClientProtocolPBClientImpl.java:213)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
at com.sun.proxy.$Proxy9.forceKillApplication(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.killApplication(YarnClientImpl.java:439)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.killApplication(YarnClientImpl.java:419)
at org.apache.flink.yarn.AbstractYarnClusterDescriptor.failSessionDuringDeployment(AbstractYarnClusterDescriptor.java:1204)
at org.apache.flink.yarn.AbstractYarnClusterDescriptor.access$200(AbstractYarnClusterDescriptor.java:111)
at org.apache.flink.yarn.AbstractYarnClusterDescriptor$DeploymentFailureHook.run(AbstractYarnClusterDescriptor.java:1500)
Root cause analysis
At first glance there is no obvious clue: the shell output contains nothing that directly reveals the cause. However, opening the Yarn UI shows the failed Flink application, which is where the analysis can start.
Checking the jobmanager.log of the failed application in the Yarn UI, the error is:
Caused by: org.apache.flink.configuration.IllegalConfigurationException: akka.watch.heartbeat.pause [10 s] must greater then akka.watch.heartbeat.interval [60 s]
at org.apache.flink.runtime.akka.AkkaUtils$.validateHeartbeat(AkkaUtils.scala:371)
at org.apache.flink.runtime.akka.AkkaUtils$.getRemoteAkkaConfig(AkkaUtils.scala:425)
at org.apache.flink.runtime.akka.AkkaUtils$.getAkkaConfig(AkkaUtils.scala:218)
at org.apache.flink.runtime.akka.AkkaUtils.getAkkaConfig(AkkaUtils.scala)
at org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:247)
... 15 more
This shows that in flink-conf.yaml, akka.watch.heartbeat.pause is configured shorter than akka.watch.heartbeat.interval. Such a configuration is invalid: with a pause smaller than the interval, heartbeat detection is guaranteed to time out. Checking flink-conf.yaml reveals the following:
# Maximum acceptable heartbeat pause
akka.watch.heartbeat.pause: 10s
# Heartbeat interval
akka.watch.heartbeat.interval: 60s
Fix: restore the two settings to their default values, or remove both of them from flink-conf.yaml.
akka.watch.heartbeat.pause: 60s
akka.watch.heartbeat.interval: 10s
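The constraint Flink enforces (pause must be greater than interval) can be sanity-checked before submitting a job. Below is a minimal shell sketch, assuming the values use the plain `<N>s` form shown above; the file path and the `get_seconds` helper are illustrative, not part of Flink.

```shell
# Reproduce the misconfigured flink-conf.yaml from this article.
# FLINK_CONF is an assumed local path for illustration.
FLINK_CONF=flink-conf.yaml
cat > "$FLINK_CONF" <<'EOF'
akka.watch.heartbeat.pause: 10s
akka.watch.heartbeat.interval: 60s
EOF

# Extract the numeric seconds from a "key: <N>s" line.
get_seconds() {
  grep "^$1:" "$FLINK_CONF" | awk '{print $2}' | tr -d 's'
}

pause=$(get_seconds akka.watch.heartbeat.pause)
interval=$(get_seconds akka.watch.heartbeat.interval)

# Mirror the check in AkkaUtils.validateHeartbeat: pause must exceed interval.
if [ "$pause" -le "$interval" ]; then
  echo "INVALID: pause (${pause}s) must be greater than interval (${interval}s)"
else
  echo "OK: pause (${pause}s) > interval (${interval}s)"
fi
```

Run against the values above, this reports the same inconsistency that made the JobManager abort at startup.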
Note
Newer versions such as Flink 1.17.1 no longer have these two configuration options, so this problem does not occur there.
Takeaway
When a Flink job submitted from the shell fails, the shell output sometimes offers no useful information. In that case, check the jobmanager.log of the Flink job through the Yarn UI; the useful details are usually found there.
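As an alternative to clicking through the Yarn UI, the aggregated container logs (including jobmanager.log) can be pulled from the shell with Hadoop's `yarn logs` command. `APP_ID` below is a placeholder, not the id from this incident; substitute the real application id shown by the UI or by `yarn application -list`.

```shell
# Placeholder application id; replace with the failed application's real id.
APP_ID=application_0000000000000_0000

# Fetch the aggregated logs and show context around the configuration error.
# The fallback line keeps the sketch usable when no cluster is reachable.
yarn logs -applicationId "$APP_ID" 2>/dev/null \
  | grep -B 2 -A 6 IllegalConfigurationException \
  || echo "no logs retrieved for $APP_ID"
```

Note that `yarn logs` only works after the application has finished and log aggregation is enabled on the cluster.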
References
https://nightlies.apache.org/flink/flink-docs-release-1.8/ops/config.html