May 7th, 2011
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: handle_shutdown_request: Creating shutdown request for deb-cluster02 (state=S_IDLE)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=deb-cluster02, magic=NA, cib=0.59.11) : Transient attribute: update
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_pe_invoke: Query 61: Requesting the current CIB: S_POLICY_ENGINE
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_pe_invoke_callback: Invoking the PE: query=61, ref=pe_calc-dc-1304774827-30, seq=72, quorate=1
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: determine_online_status: Node deb-cluster01 is online
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: determine_online_status: Node deb-cluster02 is shutting down
May 07 15:27:07 deb-cluster01 pengine: [1100]: notice: native_print: failover-ip (ocf::heartbeat:IPaddr2): Started deb-cluster02
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: native_color: Resource failover-ip cannot run anywhere
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: stage6: Scheduling Node deb-cluster02 for shutdown
May 07 15:27:07 deb-cluster01 pengine: [1100]: notice: LogActions: Stop resource failover-ip (deb-cluster02)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: unpack_graph: Unpacked transition 4: 3 actions in 3 synapses
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1304774827-30) derived from /var/lib/pengine/pe-input-40.bz2
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: te_rsc_command: Initiating action 6: stop failover-ip_stop_0 on deb-cluster02
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: process_pe_message: Transition 4: PEngine Input stored in: /var/lib/pengine/pe-input-40.bz2
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: match_graph_event: Action failover-ip_stop_0 (6) confirmed on deb-cluster02 (rc=0)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: te_crm_command: Executing crm-event (9): do_shutdown on deb-cluster02
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: run_graph: ====================================================
May 07 15:27:07 deb-cluster01 crmd: [1101]: notice: run_graph: Transition 4 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-40.bz2): Complete
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: te_graph_trigger: Transition 4 is now complete
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: notify_crmd: Transition 4 status: done - <null>
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: Starting PEngine Recheck Timer
May 07 15:27:07 deb-cluster01 cib: [1097]: info: cib_process_shutdown_req: Shutdown REQ from deb-cluster02
May 07 15:27:07 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_shutdown_req for section 'all' (origin=deb-cluster02/deb-cluster02/(null), version=0.59.12): ok (rc=0)
May 07 15:27:07 corosync [pcmk ] info: update_member: Node deb-cluster02 now has process list: 00000000000000000000000000000002 (2)
May 07 15:27:07 corosync [pcmk ] info: send_member_notification: Sending membership update 72 to 2 children
May 07 15:27:07 deb-cluster01 cib: [1097]: info: ais_dispatch: Membership 72: quorum retained
May 07 15:27:07 deb-cluster01 cib: [1097]: info: crm_update_peer: Node deb-cluster02: id=4227989514 state=member addr=r(0) ip(10.0.2.252) votes=1 born=72 seen=72 proc=00000000000000000000000000000002 (new)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: ais_dispatch: Membership 72: quorum retained
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: crm_update_peer: Node deb-cluster02: id=4227989514 state=member addr=r(0) ip(10.0.2.252) votes=1 born=72 seen=72 proc=00000000000000000000000000000002 (new)
May 07 15:27:07 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/62, version=0.59.12): ok (rc=0)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: crm_ais_dispatch: Setting expected votes to 2
May 07 15:27:07 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/65, version=0.59.13): ok (rc=0)
May 07 15:27:07 corosync [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 76: memb=1, new=0, lost=1
May 07 15:27:07 corosync [pcmk ] info: pcmk_peer_update: memb: deb-cluster01 4211212298
May 07 15:27:07 corosync [pcmk ] info: pcmk_peer_update: lost: deb-cluster02 4227989514
May 07 15:27:07 corosync [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 76: memb=1, new=0, lost=0
May 07 15:27:07 corosync [pcmk ] info: pcmk_peer_update: MEMB: deb-cluster01 4211212298
May 07 15:27:07 corosync [pcmk ] info: ais_mark_unseen_peer_dead: Node deb-cluster02 was not seen in the previous transition
May 07 15:27:07 corosync [pcmk ] info: update_member: Node 4227989514/deb-cluster02 is now: lost
May 07 15:27:07 corosync [pcmk ] info: send_member_notification: Sending membership update 76 to 2 children
May 07 15:27:07 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
May 07 15:27:07 corosync [MAIN ] Completed service synchronization, ready to provide service.
May 07 15:27:07 deb-cluster01 cib: [1097]: notice: ais_dispatch: Membership 76: quorum lost
May 07 15:27:07 deb-cluster01 cib: [1097]: info: crm_update_peer: Node deb-cluster02: id=4227989514 state=lost (new) addr=r(0) ip(10.0.2.252) votes=1 born=72 seen=72 proc=00000000000000000000000000000002
May 07 15:27:07 deb-cluster01 crmd: [1101]: notice: ais_dispatch: Membership 76: quorum lost
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: ais_status_callback: status: deb-cluster02 is now lost (was member)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: crm_update_peer: Node deb-cluster02: id=4227989514 state=lost (new) addr=r(0) ip(10.0.2.252) votes=1 born=72 seen=72 proc=00000000000000000000000000000002
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: erase_node_from_join: Removed node deb-cluster02 from join calculations: welcomed=0 itegrated=0 finalized=0 confirmed=1
May 07 15:27:07 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/66, version=0.59.13): ok (rc=0)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: crm_update_quorum: Updating quorum status to false (call=68)
May 07 15:27:07 deb-cluster01 cib: [1097]: info: log_data_element: cib:diff: - <cib have-quorum="1" admin_epoch="0" epoch="59" num_updates="14" />
May 07 15:27:07 deb-cluster01 cib: [1097]: info: log_data_element: cib:diff: + <cib have-quorum="0" admin_epoch="0" epoch="60" num_updates="1" />
May 07 15:27:07 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/68, version=0.60.1): ok (rc=0)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: crm_ais_dispatch: Setting expected votes to 2
May 07 15:27:07 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/70, version=0.60.1): ok (rc=0)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: need_abort: Aborting on change to have-quorum
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_pe_invoke: Query 71: Requesting the current CIB: S_POLICY_ENGINE
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_pe_invoke_callback: Invoking the PE: query=71, ref=pe_calc-dc-1304774827-35, seq=76, quorate=0
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: te_graph_trigger: Transition 4 is now complete
May 07 15:27:07 deb-cluster01 pengine: [1100]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: notify_crmd: Transition 4 status: done - <null>
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: determine_online_status: Node deb-cluster01 is online
May 07 15:27:07 deb-cluster01 pengine: [1100]: notice: native_print: failover-ip (ocf::heartbeat:IPaddr2): Stopped
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: native_color: Resource failover-ip cannot run anywhere
May 07 15:27:07 deb-cluster01 pengine: [1100]: notice: LogActions: Leave resource failover-ip (Stopped)
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: unpack_graph: Unpacked transition 5: 0 actions in 0 synapses
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1304774827-35) derived from /var/lib/pengine/pe-input-41.bz2
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: run_graph: ====================================================
May 07 15:27:07 deb-cluster01 crmd: [1101]: notice: run_graph: Transition 5 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-41.bz2): Complete
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: te_graph_trigger: Transition 5 is now complete
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: notify_crmd: Transition 5 status: done - <null>
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: Starting PEngine Recheck Timer
May 07 15:27:07 deb-cluster01 cib: [3554]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-53.raw
May 07 15:27:07 deb-cluster01 pengine: [1100]: info: process_pe_message: Transition 5: PEngine Input stored in: /var/lib/pengine/pe-input-41.bz2
May 07 15:27:07 deb-cluster01 cib: [3554]: info: write_cib_contents: Wrote version 0.60.0 of the CIB to disk (digest: 4a9b0188844f9b1ec538762a872d90ac)
May 07 15:27:07 deb-cluster01 cib: [3554]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.dFnl2E (digest: /var/lib/heartbeat/crm/cib.QypmPR)
May 07 15:27:20 deb-cluster01 cib: [1097]: info: cib_stats: Processed 156 operations (2051.00us average, 0% utilization) in the last 10min
May 07 15:28:11 corosync [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 80: memb=1, new=0, lost=0
May 07 15:28:11 corosync [pcmk ] info: pcmk_peer_update: memb: deb-cluster01 4211212298
May 07 15:28:11 corosync [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 80: memb=2, new=1, lost=0
May 07 15:28:11 corosync [pcmk ] info: update_member: Node 4227989514/deb-cluster02 is now: member
May 07 15:28:11 corosync [pcmk ] info: pcmk_peer_update: NEW: deb-cluster02 4227989514
May 07 15:28:11 corosync [pcmk ] info: pcmk_peer_update: MEMB: deb-cluster01 4211212298
May 07 15:28:11 corosync [pcmk ] info: pcmk_peer_update: MEMB: deb-cluster02 4227989514
May 07 15:28:11 corosync [pcmk ] info: send_member_notification: Sending membership update 80 to 2 children
May 07 15:28:11 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
May 07 15:28:11 deb-cluster01 cib: [1097]: notice: ais_dispatch: Membership 80: quorum acquired
May 07 15:28:11 deb-cluster01 cib: [1097]: info: crm_update_peer: Node deb-cluster02: id=4227989514 state=member (new) addr=r(0) ip(10.0.2.252) votes=1 born=72 seen=80 proc=00000000000000000000000000000002
May 07 15:28:11 deb-cluster01 crmd: [1101]: notice: ais_dispatch: Membership 80: quorum acquired
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: ais_status_callback: status: deb-cluster02 is now member (was lost)
May 07 15:28:11 corosync [pcmk ] info: update_member: 0x181de60 Node 4227989514 (deb-cluster02) born on: 80
May 07 15:28:11 deb-cluster01 cib: [1097]: info: ais_dispatch: Membership 80: quorum retained
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: crm_update_peer: Node deb-cluster02: id=4227989514 state=member (new) addr=r(0) ip(10.0.2.252) votes=1 born=72 seen=80 proc=00000000000000000000000000000002
May 07 15:28:11 corosync [pcmk ] info: update_member: Node deb-cluster02 now has process list: 00000000000000000000000000013312 (78610)
May 07 15:28:11 deb-cluster01 cib: [1097]: info: crm_update_peer: Node deb-cluster02: id=4227989514 state=member addr=r(0) ip(10.0.2.252) votes=1 born=80 seen=80 proc=00000000000000000000000000013312 (new)
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: crm_update_quorum: Updating quorum status to true (call=76)
May 07 15:28:11 corosync [pcmk ] info: send_member_notification: Sending membership update 80 to 2 children
May 07 15:28:11 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='deb-cluster02']/lrm (origin=local/crmd/72, version=0.60.2): ok (rc=0)
May 07 15:28:11 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='deb-cluster02']/transient_attributes (origin=local/crmd/73, version=0.60.3): ok (rc=0)
May 07 15:28:11 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/74, version=0.60.3): ok (rc=0)
May 07 15:28:11 deb-cluster01 cib: [1097]: info: log_data_element: cib:diff: - <cib have-quorum="0" admin_epoch="0" epoch="60" num_updates="4" />
May 07 15:28:11 deb-cluster01 cib: [1097]: info: log_data_element: cib:diff: + <cib have-quorum="1" admin_epoch="0" epoch="61" num_updates="1" />
May 07 15:28:11 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/76, version=0.61.1): ok (rc=0)
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: crm_ais_dispatch: Setting expected votes to 2
May 07 15:28:11 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/78, version=0.61.1): ok (rc=0)
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=failover-ip_monitor_0, magic=0:7;5:2:7:71249a0c-e096-48b0-b7d8-b792b0dde5fb, cib=0.60.2) : Resource op removal
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: erase_xpath_callback: Deletion of "//node_state[@uname='deb-cluster02']/lrm": ok (rc=0)
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=1, tag=transient_attributes, id=deb-cluster02, magic=NA, cib=0.60.3) : Transient attribute: removal
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: erase_xpath_callback: Deletion of "//node_state[@uname='deb-cluster02']/transient_attributes": ok (rc=0)
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: need_abort: Aborting on change to have-quorum
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: ais_dispatch: Membership 80: quorum retained
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: crm_update_peer: Node deb-cluster02: id=4227989514 state=member addr=r(0) ip(10.0.2.252) votes=1 born=80 seen=80 proc=00000000000000000000000000013312 (new)
May 07 15:28:11 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/79, version=0.61.1): ok (rc=0)
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: crm_ais_dispatch: Setting expected votes to 2
May 07 15:28:11 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/82, version=0.61.2): ok (rc=0)
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: do_state_transition: Membership changed: 72 -> 80 - join restart
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: do_pe_invoke: Query 83: Requesting the current CIB: S_POLICY_ENGINE
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=do_state_transition ]
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: update_dc: Unset DC deb-cluster01
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: join_make_offer: Making join offers based on membership 80
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: do_dc_join_offer_all: join-4: Waiting on 2 outstanding join acks
May 07 15:28:11 deb-cluster01 crmd: [1101]: info: update_dc: Set DC to deb-cluster01 (3.0.1)
May 07 15:28:11 corosync [MAIN ] Completed service synchronization, ready to provide service.
May 07 15:28:11 deb-cluster01 cib: [3605]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-54.raw
May 07 15:28:11 deb-cluster01 cib: [3605]: info: write_cib_contents: Wrote version 0.61.0 of the CIB to disk (digest: a136dbc16249c30c18bd81749dc0b590)
May 07 15:28:11 deb-cluster01 cib: [3605]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.9wmKyA (digest: /var/lib/heartbeat/crm/cib.0QhaSI)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: update_dc: Unset DC deb-cluster01
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_dc_join_offer_all: A new node joined the cluster
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_dc_join_offer_all: join-5: Waiting on 2 outstanding join acks
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: update_dc: Set DC to deb-cluster01 (3.0.1)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_dc_join_finalize: join-5: Syncing the CIB from deb-cluster01 to the rest of the cluster
May 07 15:28:13 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/86, version=0.61.2): ok (rc=0)
May 07 15:28:13 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/87, version=0.61.2): ok (rc=0)
May 07 15:28:13 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/88, version=0.61.2): ok (rc=0)
May 07 15:28:13 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='deb-cluster02']/transient_attributes (origin=deb-cluster02/crmd/6, version=0.61.2): ok (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_dc_join_ack: join-5: Updating node state to member for deb-cluster02
May 07 15:28:13 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='deb-cluster02']/lrm (origin=local/crmd/89, version=0.61.2): ok (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: erase_xpath_callback: Deletion of "//node_state[@uname='deb-cluster02']/lrm": ok (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_dc_join_ack: join-5: Updating node state to member for deb-cluster01
May 07 15:28:13 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='deb-cluster01']/lrm (origin=local/crmd/91, version=0.61.4): ok (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: erase_xpath_callback: Deletion of "//node_state[@uname='deb-cluster01']/lrm": ok (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
May 07 15:28:13 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/93, version=0.61.5): ok (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: crm_update_quorum: Updating quorum status to true (call=95)
May 07 15:28:13 deb-cluster01 attrd: [1099]: info: attrd_local_callback: Sending full refresh (origin=crmd)
May 07 15:28:13 deb-cluster01 cib: [1097]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/95, version=0.61.5): ok (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
May 07 15:28:13 deb-cluster01 attrd: [1099]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_pe_invoke: Query 96: Requesting the current CIB: S_POLICY_ENGINE
May 07 15:28:13 deb-cluster01 attrd: [1099]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_pe_invoke_callback: Invoking the PE: query=96, ref=pe_calc-dc-1304774893-47, seq=80, quorate=1
May 07 15:28:13 deb-cluster01 pengine: [1100]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
May 07 15:28:13 deb-cluster01 attrd: [1099]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
May 07 15:28:13 deb-cluster01 pengine: [1100]: info: determine_online_status: Node deb-cluster01 is online
May 07 15:28:13 deb-cluster01 pengine: [1100]: info: determine_online_status: Node deb-cluster02 is online
May 07 15:28:13 deb-cluster01 pengine: [1100]: notice: native_print: failover-ip (ocf::heartbeat:IPaddr2): Stopped
May 07 15:28:13 deb-cluster01 pengine: [1100]: notice: RecurringOp: Start recurring monitor (1s) for failover-ip on deb-cluster02
May 07 15:28:13 deb-cluster01 pengine: [1100]: notice: LogActions: Start failover-ip (deb-cluster02)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: unpack_graph: Unpacked transition 6: 5 actions in 5 synapses
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1304774893-47) derived from /var/lib/pengine/pe-input-42.bz2
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: te_rsc_command: Initiating action 5: monitor failover-ip_monitor_0 on deb-cluster02
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: match_graph_event: Action failover-ip_monitor_0 (5) confirmed on deb-cluster02 (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: te_rsc_command: Initiating action 4: probe_complete probe_complete on deb-cluster02 - no waiting
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: te_rsc_command: Initiating action 6: start failover-ip_start_0 on deb-cluster02
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: match_graph_event: Action failover-ip_start_0 (6) confirmed on deb-cluster02 (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: te_rsc_command: Initiating action 7: monitor failover-ip_monitor_1000 on deb-cluster02
May 07 15:28:13 deb-cluster01 pengine: [1100]: info: process_pe_message: Transition 6: PEngine Input stored in: /var/lib/pengine/pe-input-42.bz2
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: match_graph_event: Action failover-ip_monitor_1000 (7) confirmed on deb-cluster02 (rc=0)
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: run_graph: ====================================================
May 07 15:28:13 deb-cluster01 crmd: [1101]: notice: run_graph: Transition 6 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-42.bz2): Complete
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: te_graph_trigger: Transition 6 is now complete
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: notify_crmd: Transition 6 status: done - <null>
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
May 07 15:28:13 deb-cluster01 crmd: [1101]: info: do_state_transition: Starting PEngine Recheck Timer
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=deb-cluster02, magic=NA, cib=0.61.9) : Transient attribute: update
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: do_pe_invoke: Query 97: Requesting the current CIB: S_POLICY_ENGINE
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: do_pe_invoke_callback: Invoking the PE: query=97, ref=pe_calc-dc-1304774896-52, seq=80, quorate=1
May 07 15:28:16 deb-cluster01 pengine: [1100]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
May 07 15:28:16 deb-cluster01 pengine: [1100]: info: determine_online_status: Node deb-cluster01 is online
May 07 15:28:16 deb-cluster01 pengine: [1100]: info: determine_online_status: Node deb-cluster02 is online
May 07 15:28:16 deb-cluster01 pengine: [1100]: notice: native_print: failover-ip (ocf::heartbeat:IPaddr2): Started deb-cluster02
May 07 15:28:16 deb-cluster01 pengine: [1100]: notice: LogActions: Leave resource failover-ip (Started deb-cluster02)
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: unpack_graph: Unpacked transition 7: 0 actions in 0 synapses
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: do_te_invoke: Processing graph 7 (ref=pe_calc-dc-1304774896-52) derived from /var/lib/pengine/pe-input-43.bz2
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: run_graph: ====================================================
May 07 15:28:16 deb-cluster01 crmd: [1101]: notice: run_graph: Transition 7 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-43.bz2): Complete
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: te_graph_trigger: Transition 7 is now complete
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: notify_crmd: Transition 7 status: done - <null>
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
May 07 15:28:16 deb-cluster01 crmd: [1101]: info: do_state_transition: Starting PEngine Recheck Timer
May 07 15:28:16 deb-cluster01 pengine: [1100]: info: process_pe_message: Transition 7: PEngine Input stored in: /var/lib/pengine/pe-input-43.bz2
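The easiest way to follow a log like this is to trace the crmd state machine: each `do_state_transition` line records a move between FSA states (S_IDLE, S_POLICY_ENGINE, S_TRANSITION_ENGINE, ...). A minimal, hypothetical parsing sketch (not part of Pacemaker; the sample lines are copied from the log above):

```python
import re

# Matches crmd FSA lines such as:
#   ... do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL ... ]
TRANSITION_RE = re.compile(
    r"do_state_transition: State transition (\S+) -> (\S+) \[ input=(\S+) cause=(\S+)"
)

def extract_transitions(lines):
    """Return (from_state, to_state, input, cause) for each FSA transition line."""
    out = []
    for line in lines:
        m = TRANSITION_RE.search(line)
        if m:
            out.append(m.groups())
    return out

sample = [
    "May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]",
    "May 07 15:27:07 deb-cluster01 pengine: [1100]: info: determine_online_status: Node deb-cluster01 is online",
    "May 07 15:27:07 deb-cluster01 crmd: [1101]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]",
]

for frm, to, inp, cause in extract_transitions(sample):
    print(f"{frm} -> {to} ({inp}, {cause})")
```

Run against the full log, this shows the shutdown cycle (transition 4), the quorum-loss recompute (transition 5), the join restart, and the resource restart (transitions 6 and 7) at a glance.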