  1. [2019-01-25T10:43:19,135][INFO ][o.e.n.Node ] [] initializing ...
  2. [2019-01-25T10:43:19,235][INFO ][o.e.e.NodeEnvironment ] [wAVZzky] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [1.5tb], net total_space [1.7tb], spins? [possibly], types [ext4]
  3. [2019-01-25T10:43:19,235][INFO ][o.e.e.NodeEnvironment ] [wAVZzky] heap size [1.9gb], compressed ordinary object pointers [true]
  4. [2019-01-25T10:43:19,248][INFO ][o.e.n.Node ] node name [wAVZzky] derived from node ID [wAVZzkyrSfyBmuGRw0Bltw]; set [node.name] to override
  5. [2019-01-25T10:43:19,248][INFO ][o.e.n.Node ] version[5.6.9], pid[8909], build[877a590/2018-04-12T16:25:14.838Z], OS[Linux/4.9.0-8-amd64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
  6. [2019-01-25T10:43:19,249][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/christian/elasticsearch-5.6.9-3]
  7. [2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [aggs-matrix-stats]
  8. [2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [ingest-common]
  9. [2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [lang-expression]
  10. [2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [lang-groovy]
  11. [2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [lang-mustache]
  12. [2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [lang-painless]
  13. [2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [parent-join]
  14. [2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [percolator]
  15. [2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [reindex]
  16. [2019-01-25T10:43:19,742][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [transport-netty3]
  17. [2019-01-25T10:43:19,742][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [transport-netty4]
  18. [2019-01-25T10:43:19,742][INFO ][o.e.p.PluginsService ] [wAVZzky] no plugins loaded
  19. [2019-01-25T10:43:20,588][INFO ][o.e.d.DiscoveryModule ] [wAVZzky] using discovery type [zen]
  20. [2019-01-25T10:43:21,096][INFO ][o.e.n.Node ] initialized
  21. [2019-01-25T10:43:21,096][INFO ][o.e.n.Node ] [wAVZzky] starting ...
  22. [2019-01-25T10:43:21,181][INFO ][o.e.t.TransportService ] [wAVZzky] publish_address {10.231.83.146:9302}, bound_addresses {10.231.83.146:9302}
  23. [2019-01-25T10:43:21,187][INFO ][o.e.b.BootstrapChecks ] [wAVZzky] bound or publishing to a non-loopback address, enforcing bootstrap checks
  24. [2019-01-25T10:43:24,527][INFO ][o.e.c.s.ClusterService ] [wAVZzky] detected_master {yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}, added {{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300},{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301},}, reason: zen-disco-receive(from master [master {yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300} committed version [188]])
  25. [2019-01-25T10:43:25,154][INFO ][o.e.h.n.Netty4HttpServerTransport] [wAVZzky] publish_address {10.231.83.146:9202}, bound_addresses {10.231.83.146:9202}
  26. [2019-01-25T10:43:25,154][INFO ][o.e.n.Node ] [wAVZzky] started
  27. [2019-01-25T10:44:30,286][INFO ][o.e.d.z.ZenDiscovery ] [wAVZzky] master_left [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], reason [failed to ping, tried [3] times, each with maximum [4s] timeout]
  28. [2019-01-25T10:44:30,288][WARN ][o.e.d.z.ZenDiscovery ] [wAVZzky] master left (reason = failed to ping, tried [3] times, each with maximum [4s] timeout), current nodes: nodes:
  29. {yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}, master
  30. {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}
  31. {wAVZzky}{wAVZzkyrSfyBmuGRw0Bltw}{Qc8kODytQreZFj_24t5O7g}{10.231.83.146}{10.231.83.146:9302}, local
  32.  
  33. [2019-01-25T10:44:30,697][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  34. [2019-01-25T10:44:32,707][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  35. [2019-01-25T10:44:33,324][INFO ][o.e.c.s.ClusterService ] [wAVZzky] detected_master {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}, reason: zen-disco-receive(from master [master {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301} committed version [200]])
  36. [2019-01-25T10:44:34,041][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
  37. org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [131] timed out after [3750ms]
  38. at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
  39. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  40. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  41. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  42. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  43. [2019-01-25T10:44:35,042][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
  44. org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [132] timed out after [3750ms]
  45. at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
  46. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  47. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  48. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  49. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  50. [2019-01-25T10:44:36,041][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
  51. org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [136] timed out after [3750ms]
  52. at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
  53. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  54. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  55. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  56. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  57. [2019-01-25T10:44:46,328][INFO ][o.e.d.z.ZenDiscovery ] [wAVZzky] master_left [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], reason [failed to ping, tried [3] times, each with maximum [4s] timeout]
  58. [2019-01-25T10:44:46,329][WARN ][o.e.d.z.ZenDiscovery ] [wAVZzky] master left (reason = failed to ping, tried [3] times, each with maximum [4s] timeout), current nodes: nodes:
  59. {wAVZzky}{wAVZzkyrSfyBmuGRw0Bltw}{Qc8kODytQreZFj_24t5O7g}{10.231.83.146}{10.231.83.146:9302}, local
  60. {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}, master
  61. {yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}
  62.  
  63. [2019-01-25T10:44:46,777][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  64. [2019-01-25T10:44:48,781][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  65. [2019-01-25T10:44:50,795][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  66. [2019-01-25T10:44:52,804][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  67. [2019-01-25T10:44:54,815][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  68. [2019-01-25T10:44:56,824][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  69. [2019-01-25T10:44:58,834][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  70. [2019-01-25T10:45:00,844][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  71. [2019-01-25T10:45:02,854][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  72. [2019-01-25T10:45:03,339][WARN ][o.e.t.TransportService ] [wAVZzky] Received response for a request that has timed out, sent [29013ms] ago, timed out [25013ms] ago, action [internal:discovery/zen/fd/master_ping], node [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], id [141]
  73. [2019-01-25T10:45:03,340][WARN ][o.e.t.TransportService ] [wAVZzky] Received response for a request that has timed out, sent [25014ms] ago, timed out [21013ms] ago, action [internal:discovery/zen/fd/master_ping], node [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], id [144]
  74. [2019-01-25T10:45:03,340][WARN ][o.e.t.TransportService ] [wAVZzky] Received response for a request that has timed out, sent [21013ms] ago, timed out [17013ms] ago, action [internal:discovery/zen/fd/master_ping], node [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], id [147]
  75. [2019-01-25T10:45:04,863][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
  76. [2019-01-25T10:45:06,340][INFO ][o.e.c.s.ClusterService ] [wAVZzky] detected_master {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}, reason: zen-disco-receive(from master [master {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301} committed version [201]])
  77. [2019-01-25T10:45:07,079][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
  78. org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [150] timed out after [3750ms]
  79. at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
  80. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  81. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  82. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  83. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  84. [2019-01-25T10:45:08,080][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
  85. org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [153] timed out after [3750ms]
  86. at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
  87. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  88. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  89. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  90. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  91. [2019-01-25T10:45:09,079][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
  92. org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [156] timed out after [3750ms]
  93. at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
  94. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  95. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  96. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  97. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  98. ^C[2019-01-25T10:45:13,846][INFO ][o.e.n.Node ] [wAVZzky] stopping ...
  99. [2019-01-25T10:45:13,852][INFO ][o.e.d.z.ZenDiscovery ] [wAVZzky] failed to send join request to master [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], reason [IllegalStateException[Future got interrupted]; nested: InterruptedException; ]
  100. [2019-01-25T10:45:13,857][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  101. [2019-01-25T10:45:13,857][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  102. [2019-01-25T10:45:13,857][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  103. [2019-01-25T10:45:13,857][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  104. [2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  105. [2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  106. [2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
  107. org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  108. [2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
  109. org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  110. [2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
  111. org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  112. [2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
  113. org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  114. [2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
  115. org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  116. [2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
  117. org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  118. [2019-01-25T10:45:13,859][WARN ][r.suppressed ] path: /_cluster/health, params: {}
  119. org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  120. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  121. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  122. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  123. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  124. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  125. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  126. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  127. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  128. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  129. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  130. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  131. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  132. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  133. Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  134. [2019-01-25T10:45:13,859][WARN ][r.suppressed ] path: /_cluster/health, params: {}
  135. org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  136. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  137. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  138. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  139. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  140. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  141. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  142. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  143. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  144. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  145. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  146. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  147. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  148. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  149. Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  150. [2019-01-25T10:45:13,859][WARN ][r.suppressed ] path: /_cluster/health, params: {}
  151. org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  152. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  153. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  154. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  155. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  156. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  157. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  158. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  159. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  160. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  161. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  162. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  163. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  164. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  165. Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  166. [2019-01-25T10:45:13,860][WARN ][r.suppressed ] path: /_cluster/health, params: {}
  167. org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  168. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  169. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  170. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  171. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  172. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  173. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  174. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  175. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  176. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  177. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  178. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  179. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  180. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  181. Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  182. [2019-01-25T10:45:13,860][WARN ][r.suppressed ] path: /_cluster/health, params: {}
  183. org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  184. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  185. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  186. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  187. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  188. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  189. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  190. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  191. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  192. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  193. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  194. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  195. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  196. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  197. Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  198. [2019-01-25T10:45:13,860][WARN ][r.suppressed ] path: /_cluster/health, params: {}
  199. org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
  200. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  201. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  202. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  203. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  204. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  205. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  206. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  207. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  208. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  209. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  210. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  211. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  212. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  213. Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
  214. [2019-01-25T10:45:13,865][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
  215. java.util.concurrent.RejectedExecutionException: event executor terminated
  216. at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:821) ~[?:?]
  217. at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:327) ~[?:?]
  218. at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:320) ~[?:?]
  219. at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:746) ~[?:?]
  220. at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:760) ~[?:?]
  221. at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:428) ~[?:?]
  222. at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:113) ~[?:?]
  223. at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:87) ~[?:?]
  224. at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:1010) ~[?:?]
  225. at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:825) ~[?:?]
  226. at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[?:?]
  227. at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1027) ~[?:?]
  228. at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:301) ~[?:?]
  229. at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:147) ~[?:?]
  230. at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:447) ~[elasticsearch-5.6.9.jar:5.6.9]
  231. at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58) ~[elasticsearch-5.6.9.jar:5.6.9]
  232. at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.6.9.jar:5.6.9]
  233. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  234. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  235. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  236. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  237. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  238. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  239. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  240. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  241. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  242. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  243. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  244. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  245. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  246. [2019-01-25T10:45:13,865][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
  247. java.util.concurrent.RejectedExecutionException: event executor terminated
  248. at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:821) ~[?:?]
  249. at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:327) ~[?:?]
  250. at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:320) ~[?:?]
  251. at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:746) ~[?:?]
  252. at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:760) ~[?:?]
  253. at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:428) ~[?:?]
  254. at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:113) ~[?:?]
  255. at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:87) ~[?:?]
  256. at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:1010) ~[?:?]
  257. at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:825) ~[?:?]
  258. at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[?:?]
  259. at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1027) ~[?:?]
  260. at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:301) ~[?:?]
  261. at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:147) ~[?:?]
  262. at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:447) ~[elasticsearch-5.6.9.jar:5.6.9]
  263. at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58) ~[elasticsearch-5.6.9.jar:5.6.9]
  264. at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.6.9.jar:5.6.9]
  265. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  266. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  267. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  268. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  269. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  270. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  271. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  272. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  273. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  274. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  275. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  276. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  277. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  278. [2019-01-25T10:45:13,864][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
  279. java.util.concurrent.RejectedExecutionException: event executor terminated
  280. at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:821) ~[?:?]
  281. at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:327) ~[?:?]
  282. at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:320) ~[?:?]
  283. at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:746) ~[?:?]
  284. at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:760) ~[?:?]
  285. at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:428) ~[?:?]
  286. at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:113) ~[?:?]
  287. at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:87) ~[?:?]
  288. at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:1010) ~[?:?]
  289. at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:825) ~[?:?]
  290. at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[?:?]
  291. at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1027) ~[?:?]
  292. at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:301) ~[?:?]
  293. at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:147) ~[?:?]
  294. at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:447) ~[elasticsearch-5.6.9.jar:5.6.9]
  295. at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58) ~[elasticsearch-5.6.9.jar:5.6.9]
  296. at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.6.9.jar:5.6.9]
  297. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  298. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  299. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  300. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  301. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  302. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  303. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  304. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  305. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  306. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  307. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  308. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  309. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  310. [2019-01-25T10:45:13,863][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
  311. java.util.concurrent.RejectedExecutionException: event executor terminated
  312. at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:821) ~[?:?]
  313. at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:327) ~[?:?]
  314. at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:320) ~[?:?]
  315. at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:746) ~[?:?]
  316. at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:760) ~[?:?]
  317. at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:428) ~[?:?]
  318. at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:113) ~[?:?]
  319. at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:87) ~[?:?]
  320. at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:1010) ~[?:?]
  321. at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:825) ~[?:?]
  322. at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[?:?]
  323. at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1027) ~[?:?]
  324. at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:301) ~[?:?]
  325. at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:147) ~[?:?]
  326. at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:447) ~[elasticsearch-5.6.9.jar:5.6.9]
  327. at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58) ~[elasticsearch-5.6.9.jar:5.6.9]
  328. at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.6.9.jar:5.6.9]
  329. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  330. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  331. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  332. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  333. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  334. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  335. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  336. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  337. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  338. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  339. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  340. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  341. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  342. [2019-01-25T10:45:13,868][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
  343. java.util.concurrent.RejectedExecutionException: event executor terminated
  344. at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:821) ~[?:?]
  345. at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:327) ~[?:?]
  346. at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:320) ~[?:?]
  347. at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:746) ~[?:?]
  348. at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:760) ~[?:?]
  349. at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:428) ~[?:?]
  350. at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:113) ~[?:?]
  351. at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:87) ~[?:?]
  352. at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:1010) ~[?:?]
  353. at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:825) ~[?:?]
  354. at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[?:?]
  355. at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1027) ~[?:?]
  356. at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:301) ~[?:?]
  357. at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:147) ~[?:?]
  358. at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:447) ~[elasticsearch-5.6.9.jar:5.6.9]
  359. at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58) ~[elasticsearch-5.6.9.jar:5.6.9]
  360. at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.6.9.jar:5.6.9]
  361. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  362. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  363. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  364. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  365. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  366. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  367. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  368. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  369. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  370. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  371. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  372. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  373. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  374. [2019-01-25T10:45:13,864][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
  375. java.util.concurrent.RejectedExecutionException: event executor terminated
  376. at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:821) ~[?:?]
  377. at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:327) ~[?:?]
  378. at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:320) ~[?:?]
  379. at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:746) ~[?:?]
  380. at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:760) ~[?:?]
  381. at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:428) ~[?:?]
  382. at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:113) ~[?:?]
  383. at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:87) ~[?:?]
  384. at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:1010) ~[?:?]
  385. at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:825) ~[?:?]
  386. at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[?:?]
  387. at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1027) ~[?:?]
  388. at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:301) ~[?:?]
  389. at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:147) ~[?:?]
  390. at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:447) ~[elasticsearch-5.6.9.jar:5.6.9]
  391. at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58) ~[elasticsearch-5.6.9.jar:5.6.9]
  392. at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.6.9.jar:5.6.9]
  393. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
  394. at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
  395. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
  396. at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
  397. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
  398. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
  399. at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
  400. at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
  401. at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
  402. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
  403. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
  404. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
  405. at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
  406. [2019-01-25T10:45:13,886][INFO ][o.e.n.Node ] [wAVZzky] stopped
  407. [2019-01-25T10:45:13,887][INFO ][o.e.n.Node ] [wAVZzky] closing ...
  408. [2019-01-25T10:45:13,892][INFO ][o.e.n.Node ] [wAVZzky] closed