corosync.log (pasted by a guest, Jan 26th, 2011)
# cat /var/log/cluster/corosync.log
Jan 26 11:50:11 corosync [MAIN ] Corosync Cluster Engine ('1.3.0'): started and ready to provide service.
Jan 26 11:50:11 corosync [MAIN ] Corosync built-in features: nss rdma
Jan 26 11:50:11 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Jan 26 11:50:11 corosync [TOTEM ] Initializing transport (UDP/IP Unicast).
Jan 26 11:50:11 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jan 26 11:50:11 corosync [TOTEM ] Initializing transport (UDP/IP Unicast).
Jan 26 11:50:11 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jan 26 11:50:11 corosync [TOTEM ] The network interface [10.0.2.11] is now up.
Jan 26 11:50:11 corosync [pcmk ] info: process_ais_conf: Reading configure
Jan 26 11:50:11 corosync [pcmk ] info: config_find_init: Local handle: 2697991128409440260 for logging
Jan 26 11:50:11 corosync [pcmk ] info: config_find_next: Processing additional logging options...
Jan 26 11:50:11 corosync [pcmk ] info: get_config_opt: Found 'off' for option: debug
Jan 26 11:50:11 corosync [pcmk ] info: get_config_opt: Found 'yes' for option: to_logfile
Set r/w permissions for uid=0, gid=0 on /var/log/cluster/corosync.log
Jan 26 11:50:11 corosync [pcmk ] info: get_config_opt: Found '/var/log/cluster/corosync.log' for option: logfile
Jan 26 11:50:11 corosync [pcmk ] info: get_config_opt: Found 'no' for option: to_syslog
Jan 26 11:50:11 corosync [pcmk ] info: process_ais_conf: User configured file based logging and explicitly disabled syslog.
Jan 26 11:50:11 corosync [pcmk ] info: config_find_init: Local handle: 7114519016932114437 for service
Jan 26 11:50:11 corosync [pcmk ] info: config_find_next: Processing additional service options...
Jan 26 11:50:11 corosync [pcmk ] info: get_config_opt: Defaulting to 'pcmk' for option: clustername
Jan 26 11:50:11 corosync [pcmk ] info: get_config_opt: Found 'no' for option: use_logd
Jan 26 11:50:11 corosync [pcmk ] info: get_config_opt: Found 'yes' for option: use_mgmtd
Jan 26 11:50:11 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Jan 26 11:50:11 corosync [pcmk ] Logging: Initialized pcmk_startup
Jan 26 11:50:11 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jan 26 11:50:11 corosync [pcmk ] info: pcmk_startup: Service: 9
Jan 26 11:50:11 corosync [pcmk ] info: pcmk_startup: Local hostname: cluster1
Jan 26 11:50:11 corosync [pcmk ] info: pcmk_update_nodeid: Local node id: 184680458
Jan 26 11:50:11 corosync [pcmk ] info: update_member: Creating entry for node 184680458 born on 0
Jan 26 11:50:11 corosync [pcmk ] info: update_member: 0x2aaaac002180 Node 184680458 now known as cluster1 (was: (null))
Jan 26 11:50:11 corosync [pcmk ] info: update_member: Node cluster1 now has 1 quorum votes (was 0)
Jan 26 11:50:11 corosync [pcmk ] info: update_member: Node 184680458/cluster1 is now: member
Jan 26 11:50:11 corosync [pcmk ] info: spawn_child: Forked child 15607 for process stonithd
Jan 26 11:50:11 corosync [pcmk ] info: spawn_child: Forked child 15608 for process cib
Jan 26 11:50:11 corosync [pcmk ] info: spawn_child: Forked child 15609 for process lrmd
Jan 26 11:50:11 cluster1 stonithd: [15607]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan 26 11:50:11 cluster1 stonithd: [15607]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan 26 11:50:11 cluster1 stonithd: [15607]: info: crm_cluster_connect: Connecting to OpenAIS
Jan 26 11:50:11 cluster1 stonithd: [15607]: info: init_ais_connection_once: Creating connection to our AIS plugin
Jan 26 11:50:11 cluster1 cib: [15608]: info: Invoked: /usr/lib64/heartbeat/cib
Jan 26 11:50:11 cluster1 cib: [15608]: info: G_main_add_TriggerHandler: Added signal manual handler
Jan 26 11:50:11 cluster1 cib: [15608]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan 26 11:50:11 corosync [pcmk ] info: spawn_child: Forked child 15610 for process attrd
Jan 26 11:50:11 cluster1 lrmd: [15097]: info: lrmd is shutting down
Jan 26 11:50:11 cluster1 lrmd: [15609]: info: Signal sent to pid=15097, waiting for process to exit
Jan 26 11:50:11 cluster1 cib: [15608]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jan 26 11:50:11 cluster1 cib: [15608]: WARN: retrieveCib: Cluster configuration not found: /var/lib/heartbeat/crm/cib.xml
Jan 26 11:50:11 cluster1 cib: [15608]: WARN: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
Jan 26 11:50:11 cluster1 cib: [15608]: WARN: readCibXmlFile: Continuing with an empty configuration.
Jan 26 11:50:11 corosync [pcmk ] info: spawn_child: Forked child 15611 for process pengine
Jan 26 11:50:11 corosync [pcmk ] info: spawn_child: Forked child 15612 for process crmd
Jan 26 11:50:11 cluster1 attrd: [15610]: info: Invoked: /usr/lib64/heartbeat/attrd
Jan 26 11:50:11 cluster1 attrd: [15610]: info: main: Starting up
Jan 26 11:50:11 cluster1 attrd: [15610]: info: crm_cluster_connect: Connecting to OpenAIS
Jan 26 11:50:11 cluster1 attrd: [15610]: info: init_ais_connection_once: Creating connection to our AIS plugin
Jan 26 11:50:11 corosync [pcmk ] info: spawn_child: Forked child 15613 for process mgmtd
Jan 26 11:50:11 corosync [SERV ] Service engine loaded: Pacemaker Cluster Manager 1.0.9
Jan 26 11:50:11 cluster1 pengine: [15611]: info: Invoked: /usr/lib64/heartbeat/pengine
Jan 26 11:50:11 corosync [SERV ] Service engine loaded: corosync extended virtual synchrony service
Jan 26 11:50:11 cluster1 pengine: [15611]: info: main: Starting pengine
Jan 26 11:50:11 corosync [SERV ] Service engine loaded: corosync configuration service
Jan 26 11:50:11 corosync [SERV ] Service engine loaded: corosync cluster closed process group service v1.01
Jan 26 11:50:11 corosync [SERV ] Service engine loaded: corosync cluster config database access v1.01
Jan 26 11:50:11 corosync [SERV ] Service engine loaded: corosync profile loading service
Jan 26 11:50:11 cluster1 crmd: [15612]: info: Invoked: /usr/lib64/heartbeat/crmd
Jan 26 11:50:11 cluster1 crmd: [15612]: info: main: CRM Hg Version: da7075976b5ff0bee71074385f8fd02f296ec8a3

Jan 26 11:50:11 corosync [SERV ] Service engine loaded: corosync cluster quorum service v0.1
Jan 26 11:50:11 corosync [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.
Jan 26 11:50:11 cluster1 crmd: [15612]: info: crmd_init: Starting crmd
Jan 26 11:50:11 corosync [TOTEM ] The network interface [192.168.146.11] is now up.
Jan 26 11:50:11 cluster1 crmd: [15612]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan 26 11:50:11 cluster1 cib: [15608]: info: startCib: CIB Initialization completed successfully
Jan 26 11:50:11 cluster1 cib: [15608]: info: crm_cluster_connect: Connecting to OpenAIS
Jan 26 11:50:11 cluster1 cib: [15608]: info: init_ais_connection_once: Creating connection to our AIS plugin
Jan 26 11:50:11 cluster1 stonithd: [15607]: info: init_ais_connection_once: AIS connection established
Jan 26 11:50:11 corosync [pcmk ] info: pcmk_ipc: Recorded connection 0x12108f00 for stonithd/15607
Jan 26 11:50:11 cluster1 attrd: [15610]: info: init_ais_connection_once: AIS connection established
Jan 26 11:50:11 cluster1 stonithd: [15607]: info: get_ais_nodeid: Server details: id=184680458 uname=cluster1 cname=pcmk
Jan 26 11:50:11 cluster1 stonithd: [15607]: info: crm_new_peer: Node cluster1 now has id: 184680458
Jan 26 11:50:11 cluster1 stonithd: [15607]: info: crm_new_peer: Node 184680458 is now known as cluster1
Jan 26 11:50:11 cluster1 stonithd: [15607]: notice: /usr/lib64/heartbeat/stonithd start up successfully.
Jan 26 11:50:11 cluster1 stonithd: [15607]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan 26 11:50:11 corosync [pcmk ] info: pcmk_ipc: Recorded connection 0x1210d260 for attrd/15610
Jan 26 11:50:11 cluster1 cib: [15608]: info: init_ais_connection_once: AIS connection established
Jan 26 11:50:11 cluster1 attrd: [15610]: info: get_ais_nodeid: Server details: id=184680458 uname=cluster1 cname=pcmk
Jan 26 11:50:11 cluster1 attrd: [15610]: info: crm_new_peer: Node cluster1 now has id: 184680458
Jan 26 11:50:11 cluster1 attrd: [15610]: info: crm_new_peer: Node 184680458 is now known as cluster1
Jan 26 11:50:11 cluster1 attrd: [15610]: info: main: Cluster connection active
Jan 26 11:50:11 cluster1 attrd: [15610]: info: main: Accepting attribute updates
Jan 26 11:50:11 cluster1 attrd: [15610]: info: main: Starting mainloop...
Jan 26 11:50:11 corosync [pcmk ] info: pcmk_ipc: Recorded connection 0x12111bf0 for cib/15608
Jan 26 11:50:11 corosync [pcmk ] info: update_member: Node cluster1 now has process list: 00000000000000000000000000053312 (340754)
Jan 26 11:50:11 corosync [pcmk ] info: pcmk_ipc: Sending membership update 0 to cib
Jan 26 11:50:11 cluster1 cib: [15608]: info: get_ais_nodeid: Server details: id=184680458 uname=cluster1 cname=pcmk
Jan 26 11:50:11 cluster1 cib: [15608]: info: crm_new_peer: Node cluster1 now has id: 184680458
Jan 26 11:50:11 cluster1 cib: [15608]: info: crm_new_peer: Node 184680458 is now known as cluster1
Jan 26 11:50:11 cluster1 cib: [15608]: info: cib_init: Starting cib mainloop
Jan 26 11:50:11 cluster1 cib: [15608]: info: ais_dispatch: Membership 0: quorum still lost
Jan 26 11:50:11 cluster1 cib: [15608]: info: crm_update_peer: Node cluster1: id=184680458 state=member (new) addr=(null) votes=1 (new) born=0 seen=0 proc=00000000000000000000000000053312 (new)
Jan 26 11:50:11 cluster1 cib: [15617]: info: write_cib_contents: Wrote version 0.0.0 of the CIB to disk (digest: b149d75e883440067418d44b3574c563)
Jan 26 11:50:11 cluster1 cib: [15617]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Q8MCKw (digest: /var/lib/heartbeat/crm/cib.3AFIvU)
Jan 26 11:50:12 corosync [pcmk ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=15613, rc=100)
Jan 26 11:50:12 corosync [pcmk ] notice: pcmk_wait_dispatch: Child process mgmtd no longer wishes to be respawned
Jan 26 11:50:12 corosync [pcmk ] info: update_member: Node cluster1 now has process list: 00000000000000000000000000013312 (78610)
Jan 26 11:50:12 cluster1 lrmd: [15609]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jan 26 11:50:12 cluster1 lrmd: [15609]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan 26 11:50:12 cluster1 lrmd: [15609]: info: enabling coredumps
Jan 26 11:50:12 cluster1 lrmd: [15609]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan 26 11:50:12 cluster1 lrmd: [15609]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan 26 11:50:12 cluster1 lrmd: [15609]: info: Started.
Jan 26 11:50:12 cluster1 crmd: [15612]: info: do_cib_control: CIB connection established
Jan 26 11:50:12 cluster1 crmd: [15612]: info: crm_cluster_connect: Connecting to OpenAIS
Jan 26 11:50:12 cluster1 crmd: [15612]: info: init_ais_connection_once: Creating connection to our AIS plugin
Jan 26 11:50:12 cluster1 crmd: [15612]: info: init_ais_connection_once: AIS connection established
Jan 26 11:50:12 corosync [pcmk ] info: pcmk_ipc: Recorded connection 0x121181e0 for crmd/15612
Jan 26 11:50:12 corosync [pcmk ] info: pcmk_ipc: Sending membership update 0 to crmd
Jan 26 11:50:12 cluster1 crmd: [15612]: info: get_ais_nodeid: Server details: id=184680458 uname=cluster1 cname=pcmk
Jan 26 11:50:12 cluster1 crmd: [15612]: info: crm_new_peer: Node cluster1 now has id: 184680458
Jan 26 11:50:12 cluster1 crmd: [15612]: info: crm_new_peer: Node 184680458 is now known as cluster1
Jan 26 11:50:12 cluster1 crmd: [15612]: info: do_ha_control: Connected to the cluster
Jan 26 11:50:12 cluster1 crmd: [15612]: info: do_started: Delaying start, CCM (0000000000100000) not connected
Jan 26 11:50:12 cluster1 crmd: [15612]: info: crmd_init: Starting crmd's mainloop
Jan 26 11:50:12 cluster1 crmd: [15612]: info: config_query_callback: Checking for expired actions every 900000ms
Jan 26 11:50:12 cluster1 crmd: [15612]: info: config_query_callback: Sending expected-votes=2 to corosync
Jan 26 11:50:12 cluster1 crmd: [15612]: info: ais_dispatch: Membership 0: quorum still lost
Jan 26 11:50:12 cluster1 crmd: [15612]: info: crm_update_peer: Node cluster1: id=184680458 state=member (new) addr=(null) votes=1 (new) born=0 seen=0 proc=00000000000000000000000000013312 (new)
Jan 26 11:50:12 cluster1 crmd: [15612]: info: do_started: The local CRM is operational
Jan 26 11:50:12 cluster1 crmd: [15612]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jan 26 11:50:13 cluster1 crmd: [15612]: info: ais_dispatch: Membership 0: quorum still lost
Jan 26 11:50:16 cluster1 attrd: [15610]: info: cib_connect: Connected to the CIB after 1 signon attempts
Jan 26 11:50:16 cluster1 attrd: [15610]: info: cib_connect: Sending full refresh
Jan 26 11:51:13 cluster1 crmd: [15612]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
Jan 26 11:51:13 cluster1 crmd: [15612]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jan 26 11:51:13 cluster1 crmd: [15612]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 26 11:53:13 cluster1 crmd: [15612]: ERROR: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped!
Jan 26 11:53:13 cluster1 crmd: [15612]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 26 11:53:13 cluster1 crmd: [15612]: info: do_te_control: Registering TE UUID: 8c264486-9909-4d22-9266-2cd6ffd13752
Jan 26 11:53:13 cluster1 crmd: [15612]: WARN: cib_client_add_notify_callback: Callback already present
Jan 26 11:53:13 cluster1 crmd: [15612]: info: set_graph_functions: Setting custom graph functions
Jan 26 11:53:13 cluster1 crmd: [15612]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Jan 26 11:53:13 cluster1 crmd: [15612]: info: do_dc_takeover: Taking over DC status for this partition
Jan 26 11:53:13 cluster1 cib: [15608]: info: cib_process_readwrite: We are now in R/W mode
Jan 26 11:53:13 cluster1 cib: [15608]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/6, version=0.0.0): ok (rc=0)
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="0" num_updates="0" />
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: + <cib crm_feature_set="3.0.1" admin_epoch="0" epoch="1" num_updates="1" />
Jan 26 11:53:13 cluster1 cib: [15608]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/7, version=0.1.1): ok (rc=0)
Jan 26 11:53:13 cluster1 cib: [15659]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-0.raw
Jan 26 11:53:13 cluster1 cib: [15659]: info: write_cib_contents: Wrote version 0.1.0 of the CIB to disk (digest: 9e46084072611ea63a9897a3f228af28)
Jan 26 11:53:13 cluster1 cib: [15659]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Yi0lzd (digest: /var/lib/heartbeat/crm/cib.Vn7a9h)
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="1" num_updates="1" />
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="2" num_updates="1" >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   <configuration >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     <crm_config >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" __crm_diff_marker__="added:top" >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3" />
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     </crm_config>
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   </configuration>
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: + </cib>
Jan 26 11:53:13 cluster1 cib: [15608]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/10, version=0.2.1): ok (rc=0)
Jan 26 11:53:13 cluster1 crmd: [15612]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jan 26 11:53:13 cluster1 crmd: [15612]: info: ais_dispatch: Membership 0: quorum still lost
Jan 26 11:53:13 cluster1 cib: [15660]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-1.raw
Jan 26 11:53:13 cluster1 cib: [15660]: info: write_cib_contents: Wrote version 0.2.0 of the CIB to disk (digest: dbb3a01fbbb3b52490037ee4582d3b0a)
Jan 26 11:53:13 cluster1 cib: [15660]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Hp3mBd (digest: /var/lib/heartbeat/crm/cib.zqdddi)
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="2" num_updates="1" />
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="3" num_updates="1" >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   <configuration >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     <crm_config >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais" __crm_diff_marker__="added:top" />
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     </crm_config>
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   </configuration>
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: + </cib>
Jan 26 11:53:13 cluster1 cib: [15608]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/13, version=0.3.1): ok (rc=0)
Jan 26 11:53:13 cluster1 crmd: [15612]: info: crm_ais_dispatch: Setting expected votes to 2
Jan 26 11:53:13 cluster1 crmd: [15612]: info: config_query_callback: Checking for expired actions every 900000ms
Jan 26 11:53:13 cluster1 crmd: [15612]: info: config_query_callback: Sending expected-votes=2 to corosync
Jan 26 11:53:13 cluster1 crmd: [15612]: info: ais_dispatch: Membership 0: quorum still lost
Jan 26 11:53:13 cluster1 cib: [15661]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-2.raw
Jan 26 11:53:13 cluster1 cib: [15661]: info: write_cib_contents: Wrote version 0.3.0 of the CIB to disk (digest: 169432f7f7051a4c4957a829e47e9c80)
Jan 26 11:53:13 cluster1 cib: [15661]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.yXcACd (digest: /var/lib/heartbeat/crm/cib.ZkFDfi)
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="3" num_updates="1" />
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="4" num_updates="1" >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   <configuration >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     <crm_config >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2" __crm_diff_marker__="added:top" />
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     </crm_config>
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   </configuration>
Jan 26 11:53:13 cluster1 cib: [15608]: info: log_data_element: cib:diff: + </cib>
Jan 26 11:53:13 cluster1 cib: [15608]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/17, version=0.4.1): ok (rc=0)
Jan 26 11:53:13 cluster1 crmd: [15612]: info: crm_ais_dispatch: Setting expected votes to 2
Jan 26 11:53:13 cluster1 crmd: [15612]: info: te_connect_stonith: Attempting connection to fencing daemon...
Jan 26 11:53:13 cluster1 cib: [15662]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-3.raw
Jan 26 11:53:13 cluster1 cib: [15662]: info: write_cib_contents: Wrote version 0.4.0 of the CIB to disk (digest: e9cc6693b20869dba8e0e1baa58c8aaf)
Jan 26 11:53:13 cluster1 cib: [15662]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Jf6NDd (digest: /var/lib/heartbeat/crm/cib.byr5hi)
Jan 26 11:53:13 cluster1 cib: [15608]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/19, version=0.4.1): ok (rc=0)
Jan 26 11:53:14 cluster1 crmd: [15612]: info: te_connect_stonith: Connected

crm_mon shows:

Stack: openais
Current DC: NONE
0 Nodes configured, 2 expected votes
0 Resources configured.

Added stonith-enabled="false" and no-quorum-policy="ignore".

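For reference, those two properties would typically be set with the crm shell. A minimal sketch, assuming the crm tool shipped with this Pacemaker 1.0 stack (the later log's crm_shadow entries suggest the change was actually staged in a shadow CIB and then committed):

```sh
# Sketch: set the two cluster properties directly on the live CIB.
# Disable fencing (no STONITH device is configured on this node yet):
crm configure property stonith-enabled="false"
# Let a single node run resources even without quorum (2 expected votes):
crm configure property no-quorum-policy="ignore"

# Inspect the resulting configuration:
crm configure show
```

With a shadow CIB the same properties would be set inside `crm cib use <name>` and pushed with a commit, which matches the `origin=local/crm_shadow/2` entry in the log below.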
Logs after the commit:

Jan 26 11:56:13 cluster1 crmd: [15612]: ERROR: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped!
Jan 26 11:56:13 cluster1 crmd: [15612]: info: crm_timer_popped: Welcomed: 1, Integrated: 0
Jan 26 11:56:13 cluster1 crmd: [15612]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 26 11:56:13 cluster1 crmd: [15612]: WARN: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
Jan 26 11:56:13 cluster1 crmd: [15612]: WARN: do_state_transition: 1 cluster nodes failed to respond to the join offer.
Jan 26 11:56:13 cluster1 crmd: [15612]: info: ghash_print_node: Welcome reply not received from: cluster1 1
Jan 26 11:56:13 cluster1 crmd: [15612]: WARN: do_log: FSA: Input I_ELECTION_DC from do_dc_join_finalize() received in state S_FINALIZE_JOIN
Jan 26 11:56:13 cluster1 crmd: [15612]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_dc_join_finalize ]
Jan 26 11:56:13 cluster1 crmd: [15612]: info: do_dc_join_offer_all: join-2: Waiting on 1 outstanding join acks
Jan 26 11:56:41 cluster1 cib: [15608]: info: cib_replace_notify: Local-only Replace: 0.6.1 from <null>
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: - <cib epoch="4" admin_epoch="0" num_updates="1" />
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: + <cib epoch="6" admin_epoch="0" num_updates="1" >
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   <configuration >
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     <crm_config >
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false" __crm_diff_marker__="added:top" />
Jan 26 11:56:41 cluster1 attrd: [15610]: info: do_cib_replaced: Sending full refresh
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore" __crm_diff_marker__="added:top" />
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     </crm_config>
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   </configuration>
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: + </cib>
Jan 26 11:56:41 cluster1 cib: [15608]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/crm_shadow/2, version=0.6.1): ok (rc=0)
Jan 26 11:56:41 cluster1 crmd: [15612]: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="6" num_updates="1" />
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="7" num_updates="1" >
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   <configuration >
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     <nodes >
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +       <node id="cluster1" uname="cluster1" type="normal" __crm_diff_marker__="added:top" />
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +     </nodes>
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: +   </configuration>
Jan 26 11:56:41 cluster1 cib: [15608]: info: log_data_element: cib:diff: + </cib>
Jan 26 11:56:41 cluster1 cib: [15608]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/20, version=0.7.1): ok (rc=0)
Jan 26 11:56:41 cluster1 cib: [15713]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-4.raw
Jan 26 11:56:41 cluster1 cib: [15713]: info: write_cib_contents: Wrote version 0.7.0 of the CIB to disk (digest: a6edd47aec366a94fef92509fcb352b3)
Jan 26 11:56:41 cluster1 cib: [15713]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Kp1rP0 (digest: /var/lib/heartbeat/crm/cib.5N9mFS)

crm_mon:

Stack: openais
Current DC: NONE
1 Nodes configured, 2 expected votes
0 Resources configured.
============

OFFLINE: [ cluster1 ]