- 2023-02-10T08:43:19,751 eduler][T#1] [W] org.ope.mon.jvm.JvmGcMonitorService - [UID=] - [gc][15145410] overhead, spent [956ms] collecting in the last [1s]
- 2023-02-10T08:51:12,229 worker][T#4] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5251905774}{true}{false}{false}{false}{cluster:monitor/state}}] took [7267ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:51:16,003 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [15305ms] ago, timed out [5186ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082954]
- 2023-02-10T08:51:16,406 worker][T#1] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5148154551}{true}{false}{false}{false}{cluster:monitor/state}}] took [11812ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:51:17,606 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [17580ms] ago, timed out [7458ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082950]
- 2023-02-10T08:51:24,286 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [21585ms] ago, timed out [6469ms] ago, action [cluster:monitor/nodes/stats[n]], node [{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082958]
- 2023-02-10T08:51:26,104 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [26083ms] ago, timed out [15961ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082951]
- 2023-02-10T08:51:26,105 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [14934ms] ago, timed out [4531ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082965]
- 2023-02-10T08:51:26,109 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [23418ms] ago, timed out [8302ms] ago, action [cluster:monitor/nodes/stats[n]], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082959]
- 2023-02-10T08:51:26,123 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [14934ms] ago, timed out [4942ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082966]
- 2023-02-10T08:51:26,681 worker][T#2] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5864811043}{true}{false}{false}{false}{cluster:monitor/state}}] took [9118ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:51:30,991 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [30361ms] ago, timed out [20242ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082955]
- 2023-02-10T08:51:30,992 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [19183ms] ago, timed out [9214ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082970]
- 2023-02-10T08:51:32,592 worker][T#1] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5890458992}{true}{false}{false}{false}{cluster:monitor/state}}] took [14354ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:51:32,963 gement][T#2] [W] .ope.clu.InternalClusterInfoService - [UID=] - Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
- 2023-02-10T08:51:39,678 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [22707ms] ago, timed out [12767ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082971]
- 2023-02-10T08:51:39,684 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [11724ms] ago, timed out [1671ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082982]
- 2023-02-10T08:51:40,526 worker][T#1] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [39877ms] ago, timed out [29967ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082953]
- 2023-02-10T08:51:40,528 worker][T#1] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [28699ms] ago, timed out [18730ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082967]
- 2023-02-10T08:51:40,528 worker][T#1] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [17702ms] ago, timed out [7745ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082976]
- 2023-02-10T08:51:43,307 worker][T#1] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [40557ms] ago, timed out [25441ms] ago, action [cluster:monitor/nodes/stats[n]], node [{qosdatapm01.example.com}{BP_6HOjxSpWiT2ryPiYxdw}{I1U9htU7Sh63Yg5TvVQ3Wg}{qosdatapm01.example.com}{172.20.18.35:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082956]
- 2023-02-10T08:51:46,625 worker][T#4] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5251940444}{true}{false}{false}{false}{cluster:monitor/state}}] took [11449ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:51:46,651 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [18162ms] ago, timed out [8088ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082983]
- 2023-02-10T08:51:46,682 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [18162ms] ago, timed out [8088ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082984]
- 2023-02-10T08:52:01,924 worker][T#2] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5148198821}{true}{false}{false}{false}{cluster:monitor/state}}] took [18667ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:52:01,932 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [39091ms] ago, timed out [29134ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082975]
- 2023-02-10T08:52:12,186 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [32941ms] ago, timed out [22773ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082991]
- 2023-02-10T08:52:12,727 worker][T#4] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5864850181}{true}{false}{false}{false}{cluster:monitor/state}}] took [25128ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:52:14,582 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [34894ms] ago, timed out [24690ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082993]
- 2023-02-10T08:52:14,588 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [35103ms] ago, timed out [24935ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082992]
- 2023-02-10T08:52:29,395 gement][T#4] [W] .ope.clu.InternalClusterInfoService - [UID=] - Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
- 2023-02-10T08:52:41,181 worker][T#2] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5864892525}{true}{false}{false}{false}{cluster:monitor/state}}] took [25723ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:52:42,695 worker][T#4] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5251973776}{true}{false}{false}{false}{cluster:monitor/state}}] took [25062ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:52:44,227 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [53323ms] ago, timed out [43380ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082995]
- 2023-02-10T08:52:44,256 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [53732ms] ago, timed out [43600ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082994]
- 2023-02-10T08:52:54,414 worker][T#3] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5148228707}{true}{false}{false}{false}{cluster:monitor/state}}] took [30785ms] which is above the warn threshold of [5000ms]
- 2023-02-10T08:52:54,435 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - took [1.3m], which is over [10s], to compute cluster state update for [node-left[{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true} reason: followers check retry count exceeded, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} reason: followers check retry count exceeded]]
- 2023-02-10T08:52:54,486 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [node-left[{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true} reason: followers check retry count exceeded, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} reason: followers check retry count exceeded]]: failed to commit cluster state version [19174]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
- at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1681) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.action.ActionRunnable.onFailure(ActionRunnable.java:101) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.OpenSearchExecutors$DirectExecutorService.execute(OpenSearchExecutors.java:296) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:118) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:80) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1592) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:138) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:189) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.access$500(Publication.java:55) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication$PublicationTarget.onFaultyNode(Publication.java:324) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.lambda$onFaultyNode$2(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
- at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
- at org.opensearch.cluster.coordination.Publication.onFaultyNode(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.start(Publication.java:83) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1303) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]
- Caused by: org.opensearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
- at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:187) ~[opensearch-1.2.4.jar:1.2.4]
- ... 19 more
- 2023-02-10T08:52:54,509 teTask][T#1] [E] org.ope.clu.coo.Coordinator - [UID=] - unexpected failure during [node-left]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
- at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1681) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.action.ActionRunnable.onFailure(ActionRunnable.java:101) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.OpenSearchExecutors$DirectExecutorService.execute(OpenSearchExecutors.java:296) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:118) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:80) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1592) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:138) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:189) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.access$500(Publication.java:55) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication$PublicationTarget.onFaultyNode(Publication.java:324) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.lambda$onFaultyNode$2(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
- at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
- at org.opensearch.cluster.coordination.Publication.onFaultyNode(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.start(Publication.java:83) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1303) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]
- Caused by: org.opensearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
- at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:187) ~[opensearch-1.2.4.jar:1.2.4]
- ... 19 more
- 2023-02-10T08:52:54,510 teTask][T#1] [E] org.ope.clu.coo.Coordinator - [UID=] - unexpected failure during [node-left]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
- at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1681) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.action.ActionRunnable.onFailure(ActionRunnable.java:101) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.OpenSearchExecutors$DirectExecutorService.execute(OpenSearchExecutors.java:296) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:118) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:80) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1592) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:138) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:189) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.access$500(Publication.java:55) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication$PublicationTarget.onFaultyNode(Publication.java:324) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.lambda$onFaultyNode$2(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
- at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
- at org.opensearch.cluster.coordination.Publication.onFaultyNode(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Publication.start(Publication.java:83) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1303) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]
- Caused by: org.opensearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
- at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:187) ~[opensearch-1.2.4.jar:1.2.4]
- ... 19 more
- 2023-02-10T08:52:54,802 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]: failed to commit cluster state version [19174]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 5 while handling publication
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]
- 2023-02-10T08:52:55,009 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 8 while handling publication
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]
- 2023-02-10T08:52:55,216 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 11 while handling publication
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]
- 2023-02-10T08:52:55,428 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 14 while handling publication
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]
- 2023-02-10T08:52:55,620 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 16 while handling publication
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]
- 2023-02-10T08:52:55,814 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 18 while handling publication
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]
- 2023-02-10T08:52:56,010 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
- org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 20 while handling publication
- at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
- at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
- at java.lang.Thread.run(Thread.java:887) [?:?]