[2019-01-25T10:43:19,135][INFO ][o.e.n.Node ] [] initializing ...
[2019-01-25T10:43:19,235][INFO ][o.e.e.NodeEnvironment ] [wAVZzky] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [1.5tb], net total_space [1.7tb], spins? [possibly], types [ext4]
[2019-01-25T10:43:19,235][INFO ][o.e.e.NodeEnvironment ] [wAVZzky] heap size [1.9gb], compressed ordinary object pointers [true]
[2019-01-25T10:43:19,248][INFO ][o.e.n.Node ] node name [wAVZzky] derived from node ID [wAVZzkyrSfyBmuGRw0Bltw]; set [node.name] to override
[2019-01-25T10:43:19,248][INFO ][o.e.n.Node ] version[5.6.9], pid[8909], build[877a590/2018-04-12T16:25:14.838Z], OS[Linux/4.9.0-8-amd64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2019-01-25T10:43:19,249][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/christian/elasticsearch-5.6.9-3]
[2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [aggs-matrix-stats]
[2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [ingest-common]
[2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [lang-expression]
[2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [lang-groovy]
[2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [lang-mustache]
[2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [lang-painless]
[2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [parent-join]
[2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [percolator]
[2019-01-25T10:43:19,741][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [reindex]
[2019-01-25T10:43:19,742][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [transport-netty3]
[2019-01-25T10:43:19,742][INFO ][o.e.p.PluginsService ] [wAVZzky] loaded module [transport-netty4]
[2019-01-25T10:43:19,742][INFO ][o.e.p.PluginsService ] [wAVZzky] no plugins loaded
[2019-01-25T10:43:20,588][INFO ][o.e.d.DiscoveryModule ] [wAVZzky] using discovery type [zen]
[2019-01-25T10:43:21,096][INFO ][o.e.n.Node ] initialized
[2019-01-25T10:43:21,096][INFO ][o.e.n.Node ] [wAVZzky] starting ...
[2019-01-25T10:43:21,181][INFO ][o.e.t.TransportService ] [wAVZzky] publish_address {10.231.83.146:9302}, bound_addresses {10.231.83.146:9302}
[2019-01-25T10:43:21,187][INFO ][o.e.b.BootstrapChecks ] [wAVZzky] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-01-25T10:43:24,527][INFO ][o.e.c.s.ClusterService ] [wAVZzky] detected_master {yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}, added {{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300},{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301},}, reason: zen-disco-receive(from master [master {yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300} committed version [188]])
[2019-01-25T10:43:25,154][INFO ][o.e.h.n.Netty4HttpServerTransport] [wAVZzky] publish_address {10.231.83.146:9202}, bound_addresses {10.231.83.146:9202}
[2019-01-25T10:43:25,154][INFO ][o.e.n.Node ] [wAVZzky] started
[2019-01-25T10:44:30,286][INFO ][o.e.d.z.ZenDiscovery ] [wAVZzky] master_left [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], reason [failed to ping, tried [3] times, each with maximum [4s] timeout]
[2019-01-25T10:44:30,288][WARN ][o.e.d.z.ZenDiscovery ] [wAVZzky] master left (reason = failed to ping, tried [3] times, each with maximum [4s] timeout), current nodes: nodes:
{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}, master
{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}
{wAVZzky}{wAVZzkyrSfyBmuGRw0Bltw}{Qc8kODytQreZFj_24t5O7g}{10.231.83.146}{10.231.83.146:9302}, local
[2019-01-25T10:44:30,697][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:44:32,707][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:44:33,324][INFO ][o.e.c.s.ClusterService ] [wAVZzky] detected_master {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}, reason: zen-disco-receive(from master [master {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301} committed version [200]])
[2019-01-25T10:44:34,041][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [131] timed out after [3750ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2019-01-25T10:44:35,042][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [132] timed out after [3750ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2019-01-25T10:44:36,041][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [136] timed out after [3750ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2019-01-25T10:44:46,328][INFO ][o.e.d.z.ZenDiscovery ] [wAVZzky] master_left [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], reason [failed to ping, tried [3] times, each with maximum [4s] timeout]
[2019-01-25T10:44:46,329][WARN ][o.e.d.z.ZenDiscovery ] [wAVZzky] master left (reason = failed to ping, tried [3] times, each with maximum [4s] timeout), current nodes: nodes:
{wAVZzky}{wAVZzkyrSfyBmuGRw0Bltw}{Qc8kODytQreZFj_24t5O7g}{10.231.83.146}{10.231.83.146:9302}, local
{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}, master
{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}
[2019-01-25T10:44:46,777][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:44:48,781][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:44:50,795][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:44:52,804][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:44:54,815][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:44:56,824][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:44:58,834][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:45:00,844][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:45:02,854][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:45:03,339][WARN ][o.e.t.TransportService ] [wAVZzky] Received response for a request that has timed out, sent [29013ms] ago, timed out [25013ms] ago, action [internal:discovery/zen/fd/master_ping], node [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], id [141]
[2019-01-25T10:45:03,340][WARN ][o.e.t.TransportService ] [wAVZzky] Received response for a request that has timed out, sent [25014ms] ago, timed out [21013ms] ago, action [internal:discovery/zen/fd/master_ping], node [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], id [144]
[2019-01-25T10:45:03,340][WARN ][o.e.t.TransportService ] [wAVZzky] Received response for a request that has timed out, sent [21013ms] ago, timed out [17013ms] ago, action [internal:discovery/zen/fd/master_ping], node [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], id [147]
[2019-01-25T10:45:04,863][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] no known master node, scheduling a retry
[2019-01-25T10:45:06,340][INFO ][o.e.c.s.ClusterService ] [wAVZzky] detected_master {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}, reason: zen-disco-receive(from master [master {lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301} committed version [201]])
[2019-01-25T10:45:07,079][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [150] timed out after [3750ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2019-01-25T10:45:08,080][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [153] timed out after [3750ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2019-01-25T10:45:09,079][WARN ][o.e.d.z.UnicastZenPing ] [wAVZzky] failed to send ping to [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [yRj9xlW][10.231.83.238:9300][internal:discovery/zen/unicast] request_id [156] timed out after [3750ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
^C[2019-01-25T10:45:13,846][INFO ][o.e.n.Node ] [wAVZzky] stopping ...
[2019-01-25T10:45:13,852][INFO ][o.e.d.z.ZenDiscovery ] [wAVZzky] failed to send join request to master [{lJgJwYh}{lJgJwYhmQA-fucaOBHD5Vg}{lbN7QcyiSWCOBYNL4jNkdg}{10.231.83.146}{10.231.83.146:9301}], reason [IllegalStateException[Future got interrupted]; nested: InterruptedException; ]
[2019-01-25T10:45:13,857][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
[2019-01-25T10:45:13,857][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
[2019-01-25T10:45:13,857][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
[2019-01-25T10:45:13,857][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
[2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
[2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] connection exception while trying to forward request with action name [cluster:monitor/health] to master node [{yRj9xlW}{yRj9xlW8Q-WgQ0j6H4DbOQ}{SAn1BH7ES42OhsAknxVe0Q}{10.231.83.238}{10.231.83.238:9300}], scheduling a retry. Error: [org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
[2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,858][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [wAVZzky] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,859][WARN ][r.suppressed ] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,859][WARN ][r.suppressed ] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,859][WARN ][r.suppressed ] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,859][WARN ][r.suppressed ] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,860][WARN ][r.suppressed ] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,860][WARN ][r.suppressed ] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,860][WARN ][r.suppressed ] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: NodeDisconnectedException[[yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [yRj9xlW][10.231.83.238:9300][cluster:monitor/health] disconnected
[2019-01-25T10:45:13,865][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:821) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:327) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:320) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:746) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:760) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:428) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:113) ~[?:?]
at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:87) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:1010) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:825) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[?:?]
at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1027) ~[?:?]
at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:301) ~[?:?]
at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:147) ~[?:?]
at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:447) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) ~[elasticsearch-5.6.9.jar:5.6.9]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) ~[elasticsearch-5.6.9.jar:5.6.9]
- at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:139) ~[elasticsearch-5.6.9.jar:5.6.9]
- at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.6.9.jar:5.6.9]
- at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.retry(TransportMasterNodeAction.java:194) ~[elasticsearch-5.6.9.jar:5.6.9]
- at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.access$500(TransportMasterNodeAction.java:107) ~[elasticsearch-5.6.9.jar:5.6.9]
- at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:183) ~[elasticsearch-5.6.9.jar:5.6.9]
- at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1077) ~[elasticsearch-5.6.9.jar:5.6.9]
- at org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:903) ~[elasticsearch-5.6.9.jar:5.6.9]
- at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.9.jar:5.6.9]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
- at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
- [2019-01-25T10:45:13,865][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
- java.util.concurrent.RejectedExecutionException: event executor terminated (stack trace identical to the one above)
- [2019-01-25T10:45:13,864][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
- java.util.concurrent.RejectedExecutionException: event executor terminated (stack trace identical to the one above)
- [2019-01-25T10:45:13,863][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
- java.util.concurrent.RejectedExecutionException: event executor terminated (stack trace identical to the one above)
- [2019-01-25T10:45:13,868][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
- java.util.concurrent.RejectedExecutionException: event executor terminated (stack trace identical to the one above)
- [2019-01-25T10:45:13,864][ERROR][i.n.u.c.D.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
- java.util.concurrent.RejectedExecutionException: event executor terminated (stack trace identical to the one above)
- [2019-01-25T10:45:13,886][INFO ][o.e.n.Node ] [wAVZzky] stopped
- [2019-01-25T10:45:13,887][INFO ][o.e.n.Node ] [wAVZzky] closing ...
- [2019-01-25T10:45:13,892][INFO ][o.e.n.Node ] [wAVZzky] closed
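The repeated `RejectedExecutionException: event executor terminated` entries above are the standard JDK executor behavior: once an executor has been shut down (here, Netty's event loop closing while the node stops), any task submitted afterwards is rejected. A minimal, Netty-free sketch of that mechanism (illustrative only, not Elasticsearch code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class RejectedAfterShutdownDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Terminate the executor, analogous to the Netty event loop
        // shutting down during node stop.
        pool.shutdownNow();

        try {
            // Any submission after shutdown is rejected by the default
            // abort policy, mirroring "event executor terminated".
            pool.execute(() -> System.out.println("never runs"));
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e.getClass().getSimpleName());
        }
    }
}
```

In the log this is harmless noise: the node was already stopping, so in-flight responses to the timed-out `cluster:monitor/health` requests simply had nowhere to be written.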