
masternode02.log

Feb 10th, 2023
2023-02-10T08:43:19,751 eduler][T#1] [W] org.ope.mon.jvm.JvmGcMonitorService - [UID=] - [gc][15145410] overhead, spent [956ms] collecting in the last [1s]
2023-02-10T08:51:12,229 worker][T#4] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5251905774}{true}{false}{false}{false}{cluster:monitor/state}}] took [7267ms] which is above the warn threshold of [5000ms]
2023-02-10T08:51:16,003 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [15305ms] ago, timed out [5186ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082954]
2023-02-10T08:51:16,406 worker][T#1] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5148154551}{true}{false}{false}{false}{cluster:monitor/state}}] took [11812ms] which is above the warn threshold of [5000ms]
2023-02-10T08:51:17,606 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [17580ms] ago, timed out [7458ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082950]
2023-02-10T08:51:24,286 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [21585ms] ago, timed out [6469ms] ago, action [cluster:monitor/nodes/stats[n]], node [{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082958]
2023-02-10T08:51:26,104 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [26083ms] ago, timed out [15961ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082951]
2023-02-10T08:51:26,105 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [14934ms] ago, timed out [4531ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082965]
2023-02-10T08:51:26,109 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [23418ms] ago, timed out [8302ms] ago, action [cluster:monitor/nodes/stats[n]], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082959]
2023-02-10T08:51:26,123 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [14934ms] ago, timed out [4942ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082966]
2023-02-10T08:51:26,681 worker][T#2] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5864811043}{true}{false}{false}{false}{cluster:monitor/state}}] took [9118ms] which is above the warn threshold of [5000ms]
2023-02-10T08:51:30,991 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [30361ms] ago, timed out [20242ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082955]
2023-02-10T08:51:30,992 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [19183ms] ago, timed out [9214ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082970]
2023-02-10T08:51:32,592 worker][T#1] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5890458992}{true}{false}{false}{false}{cluster:monitor/state}}] took [14354ms] which is above the warn threshold of [5000ms]
2023-02-10T08:51:32,963 gement][T#2] [W] .ope.clu.InternalClusterInfoService - [UID=] - Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
2023-02-10T08:51:39,678 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [22707ms] ago, timed out [12767ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082971]
2023-02-10T08:51:39,684 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [11724ms] ago, timed out [1671ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082982]
2023-02-10T08:51:40,526 worker][T#1] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [39877ms] ago, timed out [29967ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082953]
2023-02-10T08:51:40,528 worker][T#1] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [28699ms] ago, timed out [18730ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082967]
2023-02-10T08:51:40,528 worker][T#1] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [17702ms] ago, timed out [7745ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082976]
2023-02-10T08:51:43,307 worker][T#1] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [40557ms] ago, timed out [25441ms] ago, action [cluster:monitor/nodes/stats[n]], node [{qosdatapm01.example.com}{BP_6HOjxSpWiT2ryPiYxdw}{I1U9htU7Sh63Yg5TvVQ3Wg}{qosdatapm01.example.com}{172.20.18.35:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082956]
2023-02-10T08:51:46,625 worker][T#4] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5251940444}{true}{false}{false}{false}{cluster:monitor/state}}] took [11449ms] which is above the warn threshold of [5000ms]
2023-02-10T08:51:46,651 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [18162ms] ago, timed out [8088ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082983]
2023-02-10T08:51:46,682 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [18162ms] ago, timed out [8088ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082984]
2023-02-10T08:52:01,924 worker][T#2] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5148198821}{true}{false}{false}{false}{cluster:monitor/state}}] took [18667ms] which is above the warn threshold of [5000ms]
2023-02-10T08:52:01,932 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [39091ms] ago, timed out [29134ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082975]
2023-02-10T08:52:12,186 worker][T#2] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [32941ms] ago, timed out [22773ms] ago, action [internal:coordination/fault_detection/follower_check], node [{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true}], id [101082991]
2023-02-10T08:52:12,727 worker][T#4] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5864850181}{true}{false}{false}{false}{cluster:monitor/state}}] took [25128ms] which is above the warn threshold of [5000ms]
2023-02-10T08:52:14,582 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [34894ms] ago, timed out [24690ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082993]
2023-02-10T08:52:14,588 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [35103ms] ago, timed out [24935ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082992]
2023-02-10T08:52:29,395 gement][T#4] [W] .ope.clu.InternalClusterInfoService - [UID=] - Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
2023-02-10T08:52:41,181 worker][T#2] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5864892525}{true}{false}{false}{false}{cluster:monitor/state}}] took [25723ms] which is above the warn threshold of [5000ms]
2023-02-10T08:52:42,695 worker][T#4] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5251973776}{true}{false}{false}{false}{cluster:monitor/state}}] took [25062ms] which is above the warn threshold of [5000ms]
2023-02-10T08:52:44,227 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [53323ms] ago, timed out [43380ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm04.example.com}{RfydkkDWQoqAdN3xx2coAw}{k-YyHuNEQvWWz5s27Nk3aw}{qosdatapm04.example.com}{172.20.18.38:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082995]
2023-02-10T08:52:44,256 worker][T#4] [W] org.ope.tra.TransportService - [UID=] - Received response for a request that has timed out, sent [53732ms] ago, timed out [43600ms] ago, action [internal:coordination/fault_detection/follower_check], node [{qosdatapm02.example.com}{-k1PLZcLTw2vhX18BQAwbw}{2N0Cu_tnQhC7xMKkQ-i9ww}{qosdatapm02.example.com}{172.20.18.36:9041}{d}{shard_indexing_pressure_enabled=true}], id [101082994]
2023-02-10T08:52:54,414 worker][T#3] [W] org.ope.tra.InboundHandler - [UID=] - handling inbound transport message [InboundMessage{Header{90}{1.2.4}{5148228707}{true}{false}{false}{false}{cluster:monitor/state}}] took [30785ms] which is above the warn threshold of [5000ms]
2023-02-10T08:52:54,435 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - took [1.3m], which is over [10s], to compute cluster state update for [node-left[{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true} reason: followers check retry count exceeded, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} reason: followers check retry count exceeded]]
2023-02-10T08:52:54,486 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [node-left[{qosdatapm03.example.com}{AfsN7i6SQK-12qQgX9j1ig}{FD6tmDOkT7ec_uFJhQgqbQ}{qosdatapm03.example.com}{172.20.18.37:9041}{d}{shard_indexing_pressure_enabled=true} reason: followers check retry count exceeded, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} reason: followers check retry count exceeded]]: failed to commit cluster state version [19174]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
    at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1681) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.action.ActionRunnable.onFailure(ActionRunnable.java:101) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.OpenSearchExecutors$DirectExecutorService.execute(OpenSearchExecutors.java:296) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:118) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:80) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1592) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:138) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:189) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.access$500(Publication.java:55) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication$PublicationTarget.onFaultyNode(Publication.java:324) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.lambda$onFaultyNode$2(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
    at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
    at org.opensearch.cluster.coordination.Publication.onFaultyNode(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.start(Publication.java:83) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1303) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]
Caused by: org.opensearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
    at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:187) ~[opensearch-1.2.4.jar:1.2.4]
    ... 19 more
2023-02-10T08:52:54,509 teTask][T#1] [E] org.ope.clu.coo.Coordinator - [UID=] - unexpected failure during [node-left]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
    at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1681) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.action.ActionRunnable.onFailure(ActionRunnable.java:101) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.OpenSearchExecutors$DirectExecutorService.execute(OpenSearchExecutors.java:296) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:118) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:80) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1592) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:138) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:189) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.access$500(Publication.java:55) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication$PublicationTarget.onFaultyNode(Publication.java:324) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.lambda$onFaultyNode$2(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
    at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
    at org.opensearch.cluster.coordination.Publication.onFaultyNode(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.start(Publication.java:83) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1303) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]
Caused by: org.opensearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
    at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:187) ~[opensearch-1.2.4.jar:1.2.4]
    ... 19 more

2023-02-10T08:52:54,510 teTask][T#1] [E] org.ope.clu.coo.Coordinator - [UID=] - unexpected failure during [node-left]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
    at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1681) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.action.ActionRunnable.onFailure(ActionRunnable.java:101) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.OpenSearchExecutors$DirectExecutorService.execute(OpenSearchExecutors.java:296) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:118) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:80) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1592) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:138) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:189) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.access$500(Publication.java:55) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication$PublicationTarget.onFaultyNode(Publication.java:324) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.lambda$onFaultyNode$2(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
    at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
    at org.opensearch.cluster.coordination.Publication.onFaultyNode(Publication.java:106) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Publication.start(Publication.java:83) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1303) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]
Caused by: org.opensearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
    at org.opensearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:187) ~[opensearch-1.2.4.jar:1.2.4]
    ... 19 more
2023-02-10T08:52:54,802 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]: failed to commit cluster state version [19174]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 5 while handling publication
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]
2023-02-10T08:52:55,009 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 8 while handling publication
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]

2023-02-10T08:52:55,216 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 11 while handling publication
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]
2023-02-10T08:52:55,428 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 14 while handling publication
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]

2023-02-10T08:52:55,620 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 16 while handling publication
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]
2023-02-10T08:52:55,814 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 18 while handling publication
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]
2023-02-10T08:52:56,010 teTask][T#1] [W] org.ope.clu.ser.MasterService - [UID=] - failing [elected-as-master ([2] nodes joined)[{masternode03.example.com}{aLlNr3otQIqjrGfcJgr1PA}{Vt7e2b1XQ_afLPLGeq7OPg}{masternode03.example.com}{xxx.xxx.xxx.252:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, {masternode02.example.com}{VeWJEcurTOOQ0qAOoHfppQ}{BcrkMr6aT9i9NWhZOu7dng}{masternode02.example.com}{xxx.xxx.xxx.251:9041}{m}{shard_indexing_pressure_enabled=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], node-join[{masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader, {masternode01.example.com}{TyestOnKSi22HxoORZfhLw}{cWB0_1xCTry1OIeSvtbLMQ}{masternode01.example.com}{xxx.xxx.xxx.250:9041}{m}{shard_indexing_pressure_enabled=true} join existing leader]]: failed to commit cluster state version [19174]
org.opensearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 20 while handling publication
    at org.opensearch.cluster.coordination.Coordinator.publish(Coordinator.java:1257) ~[opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.publish(MasterService.java:303) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:285) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService.access$000(MasterService.java:86) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:173) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:175) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:213) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:275) [opensearch-1.2.4.jar:1.2.4]
    at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:238) [opensearch-1.2.4.jar:1.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:887) [?:?]