amartin

DRBD Won't Promote - DC Log

Apr 10th, 2012
Apr 9 20:21:30 node2 heartbeat: [30097]: info: Heartbeat restart on node node1
Apr 9 20:21:30 node2 heartbeat: [30097]: info: Link node1:br0 up.
Apr 9 20:21:30 node2 heartbeat: [30097]: info: Status update for node node1: status init
Apr 9 20:21:30 node2 heartbeat: [30097]: info: Link node1:br1 up.
Apr 9 20:21:30 node2 crmd: [30192]: notice: crmd_ha_status_callback: Status update: Node node1 now has status [init]
Apr 9 20:21:30 node2 heartbeat: [30097]: info: Status update for node node1: status up
Apr 9 20:21:30 node2 crmd: [30192]: info: crm_update_peer_proc: node1.ais is now online
Apr 9 20:21:30 node2 crmd: [30192]: notice: crmd_ha_status_callback: Status update: Node node1 now has status [up]
Apr 9 20:21:31 node2 heartbeat: [30097]: debug: get_delnodelist: delnodelist=
Apr 9 20:21:31 node2 heartbeat: [30097]: info: Status update for node node1: status active
Apr 9 20:21:31 node2 crmd: [30192]: notice: crmd_ha_status_callback: Status update: Node node1 now has status [active]
Apr 9 20:21:31 node2 cib: [30188]: info: cib_client_status_callback: Status update: Client node1/cib now has status [join]
Apr 9 20:21:32 node2 heartbeat: [30097]: WARN: 1 lost packet(s) for [node1] [11:13]
Apr 9 20:21:32 node2 heartbeat: [30097]: info: No pkts missing from node1!
Apr 9 20:21:32 node2 crmd: [30192]: notice: crmd_client_status_callback: Status update: Client node1/crmd now has status [online] (DC=true)
Apr 9 20:21:32 node2 crmd: [30192]: info: crm_update_peer_proc: node1.crmd is now online
Apr 9 20:21:32 node2 crmd: [30192]: notice: crmd_peer_update: Status update: Client node1/crmd now has status [online] (DC=true)
Apr 9 20:21:32 node2 crmd: [30192]: info: update_dc: Unset DC node2
Apr 9 20:21:32 node2 crmd: [30192]: info: do_dc_join_offer_all: join-36: Waiting on 2 outstanding join acks
Apr 9 20:21:33 node2 crmd: [30192]: info: update_dc: Set DC to node2 (3.0.5)
Apr 9 20:21:34 node2 heartbeat: [30097]: WARN: 1 lost packet(s) for [node1] [16:18]
Apr 9 20:21:34 node2 heartbeat: [30097]: info: No pkts missing from node1!
Apr 9 20:21:34 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.116.28): ok (rc=0)
Apr 9 20:21:35 node2 ccm: [30187]: debug: quorum plugin: majority
Apr 9 20:21:35 node2 ccm: [30187]: debug: cluster:linux-ha, member_count=3, member_quorum_votes=300
Apr 9 20:21:35 node2 ccm: [30187]: debug: total_node_count=3, total_quorum_votes=300
Apr 9 20:21:35 node2 cib: [30188]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
Apr 9 20:21:35 node2 crmd: [30192]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
Apr 9 20:21:35 node2 ccm: [30187]: debug: quorum plugin: majority
Apr 9 20:21:35 node2 cib: [30188]: info: mem_handle_event: no mbr_track info
Apr 9 20:21:35 node2 crmd: [30192]: info: mem_handle_event: no mbr_track info
Apr 9 20:21:35 node2 ccm: [30187]: debug: cluster:linux-ha, member_count=3, member_quorum_votes=300
Apr 9 20:21:35 node2 cib: [30188]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Apr 9 20:21:35 node2 crmd: [30192]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Apr 9 20:21:35 node2 ccm: [30187]: debug: total_node_count=3, total_quorum_votes=300
Apr 9 20:21:35 node2 cib: [30188]: info: mem_handle_event: instance=19, nodes=3, new=1, lost=0, n_idx=0, new_idx=3, old_idx=6
Apr 9 20:21:35 node2 crmd: [30192]: info: mem_handle_event: instance=19, nodes=3, new=1, lost=0, n_idx=0, new_idx=3, old_idx=6
Apr 9 20:21:35 node2 cib: [30188]: info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=19)
Apr 9 20:21:35 node2 crmd: [30192]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=19)
Apr 9 20:21:35 node2 cib: [30188]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=19 seen=19 proc=00000000000000000000000000000302
Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: NEW MEMBERSHIP: trans=19, nodes=3, new=1, lost=0 n_idx=0, new_idx=3, old_idx=6
Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: #011CURRENT: node2 [nodeid=2, born=1]
Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: #011CURRENT: quorumnode [nodeid=0, born=17]
Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: #011CURRENT: node1 [nodeid=1, born=19]
Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: #011NEW: node1 [nodeid=1, born=19]
Apr 9 20:21:35 node2 crmd: [30192]: info: ais_status_callback: status: node1 is now member (was lost)
Apr 9 20:21:35 node2 crmd: [30192]: WARN: match_down_event: No match for shutdown action on node1
Apr 9 20:21:35 node2 crmd: [30192]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=19 seen=19 proc=00000000000000000000000000000203
Apr 9 20:21:35 node2 crmd: [30192]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 9 20:21:35 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/1324, version=5.116.29): ok (rc=0)
Apr 9 20:21:35 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/transient_attributes (origin=local/crmd/1325, version=5.116.30): ok (rc=0)
Apr 9 20:21:36 node2 crmd: [30192]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Apr 9 20:21:36 node2 crmd: [30192]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/transient_attributes": ok (rc=0)
Apr 9 20:21:36 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/1326, version=5.116.31): ok (rc=0)
Apr 9 20:21:36 node2 crmd: [30192]: info: update_dc: Unset DC node2
Apr 9 20:21:36 node2 crmd: [30192]: info: do_dc_join_offer_all: A new node joined the cluster
Apr 9 20:21:36 node2 crmd: [30192]: info: join_make_offer: Making join offers based on membership 19
Apr 9 20:21:36 node2 crmd: [30192]: info: do_dc_join_offer_all: join-37: Waiting on 3 outstanding join acks
Apr 9 20:21:36 node2 crmd: [30192]: info: update_dc: Set DC to node2 (3.0.5)
Apr 9 20:22:52 node2 crmd: [30192]: ERROR: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
Apr 9 20:22:52 node2 crmd: [30192]: info: crm_timer_popped: Welcomed: 1, Integrated: 2
Apr 9 20:22:52 node2 crmd: [30192]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
Apr 9 20:22:52 node2 crmd: [30192]: WARN: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
Apr 9 20:22:52 node2 crmd: [30192]: WARN: do_state_transition: 1 cluster nodes failed to respond to the join offer.
Apr 9 20:22:52 node2 crmd: [30192]: info: ghash_print_node: Welcome reply not received from: quorumnode 37
Apr 9 20:22:52 node2 crmd: [30192]: info: do_dc_join_finalize: join-37: Syncing the CIB from node2 to the rest of the cluster
Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/1329, version=5.116.32): ok (rc=0)
Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/1330, version=5.116.33): ok (rc=0)
Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/1331, version=5.116.34): ok (rc=0)
Apr 9 20:22:52 node2 lrmd: [30189]: info: stonith_api_device_metadata: looking up external/webpowerswitch/heartbeat metadata
Apr 9 20:22:52 node2 crmd: [30192]: info: do_dc_join_ack: join-37: Updating node state to member for node1
Apr 9 20:22:52 node2 crmd: [30192]: info: do_dc_join_ack: join-37: Updating node state to member for node2
Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/transient_attributes (origin=node1/crmd/6, version=5.116.35): ok (rc=0)
Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/1332, version=5.116.36): ok (rc=0)
Apr 9 20:22:52 node2 crmd: [30192]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Apr 9 20:22:52 node2 crmd: [30192]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 9 20:22:52 node2 crmd: [30192]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/1334, version=5.116.38): ok (rc=0)
Apr 9 20:22:53 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
Apr 9 20:22:53 node2 crmd: [30192]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Apr 9 20:22:53 node2 crmd: [30192]: info: crm_update_quorum: Updating quorum status to true (call=1338)
Apr 9 20:22:53 node2 crmd: [30192]: info: abort_transition_graph: do_te_invoke:167 - Triggered transition abort (complete=1) : Peer Cancelled
Apr 9 20:22:53 node2 crmd: [30192]: info: do_pe_invoke: Query 1339: Requesting the current CIB: S_POLICY_ENGINE
Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-p_drbd_vmstore:0 (1334001948)
Apr 9 20:22:53 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_mount1:1_last_failure_0, magic=0:8;9:216:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.116.38) : Resource op removal
Apr 9 20:22:53 node2 crmd: [30192]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
Apr 9 20:22:53 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/1336, version=5.116.40): ok (rc=0)
Apr 9 20:22:53 node2 crmd: [30192]: info: te_update_diff: Detected LRM refresh - 11 resources updated: Skipping all resource events
Apr 9 20:22:53 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:251 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.116.39) : LRM Refresh
Apr 9 20:22:53 node2 crmd: [30192]: info: do_pe_invoke: Query 1340: Requesting the current CIB: S_POLICY_ENGINE
Apr 9 20:22:53 node2 crmd: [30192]: info: do_pe_invoke: Query 1341: Requesting the current CIB: S_POLICY_ENGINE
Apr 9 20:22:53 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/1338, version=5.116.42): ok (rc=0)
Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount2:0 (10000)
Apr 9 20:22:53 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1341, ref=pe_calc-dc-1334020973-2367, seq=19, quorate=1
Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: p_ping (2000)
Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount1:0 (10000)
Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active on node2
Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:1 (10000)
Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_libvirt-bin#011(Started node2)
Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount1:1 (10000)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_fs_vmstore#011(Started node2)
Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_vm#011(Started node2)
Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 9 20:22:53 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 9 20:22:53 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 233: 53 actions in 53 synapses
Apr 9 20:22:53 node2 crmd: [30192]: info: do_te_invoke: Processing graph 233 (ref=pe_calc-dc-1334020973-2367) derived from /var/lib/pengine/pe-input-945.bz2
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 13: monitor p_sysadmin_notify:1_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 38 fired and confirmed
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 14: monitor p_ping:1_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 15: monitor p_drbd_vmstore:0_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 58 fired and confirmed
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 16: monitor stonithnode1_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 17: monitor stonithnode2_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 18: monitor p_drbd_mount1:0_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 91 fired and confirmed
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 19: monitor p_drbd_mount2:1_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 121 fired and confirmed
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 20: monitor p_libvirt-bin_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 21: monitor p_fs_vmstore_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 22: monitor p_vm_monitor_0 on node1
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 164: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
Apr 9 20:22:53 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=164:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
Apr 9 20:22:53 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[778] (pid 303)
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 175: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
Apr 9 20:22:53 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=175:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
Apr 9 20:22:53 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[779] (pid 304)
Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 185: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
Apr 9 20:22:53 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=185:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
Apr 9 20:22:53 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[780] (pid 305)
Apr 9 20:22:53 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
Apr 9 20:22:53 node2 lrmd: [30189]: info: operation notify[778] on p_drbd_vmstore:1 for client 30192: pid 303 exited with return code 0
Apr 9 20:22:53 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 164:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020973-2381
Apr 9 20:22:53 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020973-2381 from node2
Apr 9 20:22:53 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (164) confirmed on node2 (rc=0)
Apr 9 20:22:53 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=778, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 59 fired and confirmed
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 56 fired and confirmed
Apr 9 20:22:53 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
Apr 9 20:22:53 node2 lrmd: [30189]: info: operation notify[779] on p_drbd_mount1:1 for client 30192: pid 304 exited with return code 0
Apr 9 20:22:53 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 175:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020973-2382
Apr 9 20:22:53 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020973-2382 from node2
Apr 9 20:22:53 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (175) confirmed on node2 (rc=0)
Apr 9 20:22:53 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=779, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 92 fired and confirmed
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 89 fired and confirmed
Apr 9 20:22:53 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
Apr 9 20:22:53 node2 lrmd: [30189]: info: operation notify[780] on p_drbd_mount2:0 for client 30192: pid 305 exited with return code 0
Apr 9 20:22:53 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 185:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020973-2383
Apr 9 20:22:53 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020973-2383 from node2
Apr 9 20:22:53 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (185) confirmed on node2 (rc=0)
Apr 9 20:22:53 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=780, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 122 fired and confirmed
Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 119 fired and confirmed
Apr 9 20:22:53 node2 pengine: [30361]: notice: process_pe_message: Transition 233: PEngine Input stored in: /var/lib/pengine/pe-input-945.bz2
Apr 9 20:22:54 node2 crmd: [30192]: info: match_graph_event: Action stonithnode2_monitor_0 (17) confirmed on node1 (rc=0)
Apr 9 20:22:54 node2 crmd: [30192]: info: match_graph_event: Action stonithnode1_monitor_0 (16) confirmed on node1 (rc=0)
Apr 9 20:22:54 node2 crmd: [30192]: info: match_graph_event: Action p_sysadmin_notify:1_monitor_0 (13) confirmed on node1 (rc=0)
Apr 9 20:22:54 node2 crmd: [30192]: info: match_graph_event: Action p_ping:1_monitor_0 (14) confirmed on node1 (rc=0)
Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:0_monitor_0 (15) confirmed on node1 (rc=0)
Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:0_monitor_0 (18) confirmed on node1 (rc=0)
Apr 9 20:22:55 node2 crmd: [30192]: WARN: status_from_rc: Action 20 (p_libvirt-bin_monitor_0) on node1 failed (target: 7 vs. rc: 0): Error
Apr 9 20:22:55 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_libvirt-bin_last_failure_0, magic=0:0;20:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.116.57) : Event failed
Apr 9 20:22:55 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Apr 9 20:22:55 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_libvirt-bin_monitor_0 (20) confirmed on node1 (rc=4)
Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_fs_vmstore_monitor_0 (21) confirmed on node1 (rc=0)
Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_vm_monitor_0 (22) confirmed on node1 (rc=0)
Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:1_monitor_0 (19) confirmed on node1 (rc=0)
Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 12: probe_complete probe_complete on node1 - no waiting
Apr 9 20:22:55 node2 crmd: [30192]: info: run_graph: ====================================================
Apr 9 20:22:55 node2 crmd: [30192]: notice: run_graph: Transition 233 (Complete=25, Pending=0, Fired=0, Skipped=11, Incomplete=17, Source=/var/lib/pengine/pe-input-945.bz2): Stopped
Apr 9 20:22:55 node2 crmd: [30192]: info: te_graph_trigger: Transition 233 is now complete
Apr 9 20:22:55 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_mount2d ]
Apr 9 20:22:55 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
Apr 9 20:22:55 node2 crmd: [30192]: info: do_pe_invoke: Query 1342: Requesting the current CIB: S_POLICY_ENGINE
Apr 9 20:22:55 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1342, ref=pe_calc-dc-1334020975-2385, seq=19, quorate=1
Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active on node2
Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
Apr 9 20:22:55 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
Apr 9 20:22:55 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
Apr 9 20:22:55 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 9 20:22:55 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 234: 58 actions in 58 synapses
Apr 9 20:22:55 node2 crmd: [30192]: info: do_te_invoke: Processing graph 234 (ref=pe_calc-dc-1334020975-2385) derived from /var/lib/pengine/pe-error-8.bz2
Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 28 fired and confirmed
  266. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
  267. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
  268. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
  269. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 111 fired and confirmed
  270. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 142 fired and confirmed
  271. Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 12: probe_complete probe_complete on node1 - no waiting
  272. Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 155: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
  273. Apr 9 20:22:55 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=155:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
  274. Apr 9 20:22:55 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[781] (pid 528)
  275. Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 166: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
  276. Apr 9 20:22:55 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=166:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
  277. Apr 9 20:22:55 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[782] (pid 529)
  278. Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 176: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
  279. Apr 9 20:22:55 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=176:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
  280. Apr 9 20:22:55 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[783] (pid 530)
  281. Apr 9 20:22:55 node2 lrmd: [30189]: info: operation notify[781] on p_drbd_vmstore:1 for client 30192: pid 528 exited with return code 0
  282. Apr 9 20:22:55 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
  283. Apr 9 20:22:55 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
  284. Apr 9 20:22:55 node2 lrmd: [30189]: info: operation notify[782] on p_drbd_mount1:1 for client 30192: pid 529 exited with return code 0
  285. Apr 9 20:22:55 node2 lrmd: [30189]: info: operation notify[783] on p_drbd_mount2:0 for client 30192: pid 530 exited with return code 0
  286. Apr 9 20:22:55 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
  287. Apr 9 20:22:55 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 155:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020975-2390
  288. Apr 9 20:22:55 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020975-2390 from node2
  289. Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (155) confirmed on node2 (rc=0)
  290. Apr 9 20:22:55 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=781, rc=0, cib-update=0, confirmed=true) ok
  291. Apr 9 20:22:55 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 166:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020975-2391
  292. Apr 9 20:22:55 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020975-2391 from node2
  293. Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (166) confirmed on node2 (rc=0)
  294. Apr 9 20:22:55 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=782, rc=0, cib-update=0, confirmed=true) ok
  295. Apr 9 20:22:55 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 176:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020975-2392
  296. Apr 9 20:22:55 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020975-2392 from node2
  297. Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (176) confirmed on node2 (rc=0)
  298. Apr 9 20:22:55 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=783, rc=0, cib-update=0, confirmed=true) ok
  299. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 49 fired and confirmed
  300. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
  301. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 82 fired and confirmed
  302. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 79 fired and confirmed
  303. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
  304. Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 109 fired and confirmed
  305. Apr 9 20:22:55 node2 crmd: [30192]: notice: run_graph: ====================================================
  306. Apr 9 20:22:55 node2 crmd: [30192]: WARN: run_graph: Transition 234 (Complete=16, Pending=0, Fired=0, Skipped=0, Incomplete=42, Source=/var/lib/pengine/pe-error-8.bz2): Terminated
  307. Apr 9 20:22:55 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
  308. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Graph 234 (58 actions in 58 synapses): batch-limit=30 jobs, network-delay=60000ms
  309. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
  310. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 27]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
  311. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 26]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
  312. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
  313. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 26]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
  314. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  315. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 28]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
  316. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
  317. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 29]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
  318. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 26]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
  319. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 28]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
  320. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
  321. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
  322. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 35]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
  323. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 34]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
  324. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
  325. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 34]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
  326. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  327. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 36]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
  328. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
  329. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 37]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
  330. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 34]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
  331. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 36]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
  332. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
  333. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 8 is pending (priority: 1000000)
  334. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 1000000)
  335. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
  336. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 9 is pending (priority: 0)
  337. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 41]: Pending (id: p_drbd_vmstore:0_monitor_20000, loc: node1, priority: 0)
  338. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 40]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
  339. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 51]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  340. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
  341. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 40]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
  342. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  343. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 46]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
  344. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 11 is pending (priority: 1000000)
  345. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 1000000)
  346. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
  347. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 12 was confirmed (priority: 0)
  348. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 13 is pending (priority: 1000000)
  349. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 51]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
  350. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
  351. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 0)
  352. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 0)
  353. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 1000000)
  354. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 50]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 1000000)
  355. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 47]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 0)
  356. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Completed (id: ms_drbd_vmstore_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
  357. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 15 was confirmed (priority: 0)
  358. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 16 was confirmed (priority: 0)
  359. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 17 is pending (priority: 1000000)
  360. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 47]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 1000000)
  361. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 40]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
  362. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 46]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
  363. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 18 was confirmed (priority: 0)
  364. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 19 is pending (priority: 0)
  365. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
  366. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  367. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 20 is pending (priority: 1000000)
  368. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
  369. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
  370. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 0)
  371. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 74]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
  372. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  373. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  374. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 22 is pending (priority: 0)
  375. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  376. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  377. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
  378. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 1000000)
  379. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
  380. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
  381. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 24 was confirmed (priority: 0)
  382. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 25 is pending (priority: 1000000)
  383. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
  384. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
  385. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
  386. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
  387. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 26 is pending (priority: 1000000)
  388. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
  389. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 80]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
  390. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
  391. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 27 was confirmed (priority: 0)
  392. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 28 was confirmed (priority: 0)
  393. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 29 is pending (priority: 1000000)
  394. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 80]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
  395. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  396. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
  397. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 30 was confirmed (priority: 0)
  398. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 31 is pending (priority: 1000000)
  399. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
  400. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  401. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 32 was confirmed (priority: 0)
  402. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 33 is pending (priority: 1000000)
  403. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
  404. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  405. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 34 is pending (priority: 0)
  406. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
  407. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  408. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  409. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 35 is pending (priority: 0)
  410. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  411. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  412. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
  413. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 36 is pending (priority: 1000000)
  414. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
  415. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  416. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
  417. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
  418. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 1000000)
  419. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
  420. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
  421. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 112]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
  422. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 38 was confirmed (priority: 0)
  423. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 39 was confirmed (priority: 0)
  424. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 1000000)
  425. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
  426. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  427. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
  428. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 41 was confirmed (priority: 0)
  429. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 42 is pending (priority: 0)
  430. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
  431. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  432. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  433. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  434. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  435. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  436. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 43 was confirmed (priority: 0)
  437. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 44 is pending (priority: 0)
  438. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
  439. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  440. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  441. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  442. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  443. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 45 is pending (priority: 0)
  444. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  445. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
  446. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 46 is pending (priority: 0)
  447. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  448. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  449. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  450. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  451. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  452. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 47 is pending (priority: 0)
  453. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  454. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  455. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  456. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  457. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
  458. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  459. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  460. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  461. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  462. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
  463. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
  464. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  465. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 50 is pending (priority: 0)
  466. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  467. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  468. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  469. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  470. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  471. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 51 is pending (priority: 0)
  472. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  473. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  474. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  475. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  476. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 52 is pending (priority: 0)
  477. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
  478. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  479. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 53 is pending (priority: 0)
  480. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  481. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  482. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  483. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  484. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  485. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 54 is pending (priority: 0)
  486. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  487. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
  488. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  489. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 55 is pending (priority: 0)
  490. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
  491. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  492. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 56 was confirmed (priority: 1000000)
  493. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 57 is pending (priority: 0)
  494. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 9]: Pending (id: all_stopped, type: pseduo, priority: 0)
  495. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  496. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  497. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  498. Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  499. Apr 9 20:22:55 node2 crmd: [30192]: info: te_graph_trigger: Transition 234 is now complete
  500. Apr 9 20:22:55 node2 crmd: [30192]: info: notify_mount2d: Transition 234 status: done - <null>
  501. Apr 9 20:22:55 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_mount2d ]
  502. Apr 9 20:22:55 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
  503. Apr 9 20:22:55 node2 pengine: [30361]: ERROR: process_pe_message: Transition 234: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-8.bz2
  504. Apr 9 20:23:05 node2 lrmd: [30189]: WARN: G_SIG_dispatch: Dispatch function for SIGCHLD was delayed 1000 ms (> 100 ms) before being called (GSource: 0xc9cf70)
  505. Apr 9 20:23:05 node2 lrmd: [30189]: info: G_SIG_dispatch: started at 4348256773 should have started at 4348256673
  506. Apr 9 20:23:06 node2 cib: [30188]: info: cib_stats: Processed 294 operations (850.00us average, 0% utilization) in the last 10min
  507. Apr 9 20:30:08 node2 crmd: [30192]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
  508. Apr 9 20:30:08 node2 crmd: [30192]: info: notify_deleted: Notifying 3951_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
  509. Apr 9 20:30:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
  510. Apr 9 20:30:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021408-2393
  511. Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/17, version=5.116.64): ok (rc=0)
  512. Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=local/crmd/1343, version=5.116.65): ok (rc=0)
  513. Apr 9 20:30:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
  514. Apr 9 20:30:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021408-2394
  515. Apr 9 20:30:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_0, magic=0:7;15:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.116.65) : Resource op removal
  516. Apr 9 20:30:08 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  517. Apr 9 20:30:08 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
  518. Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1346: Requesting the current CIB: S_POLICY_ENGINE
  519. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <cib admin_epoch="5" epoch="116" num_updates="65" >
  520. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <configuration >
  521. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <crm_config >
  522. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
  523. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <nvpair value="1334017005" id="cib-bootstrap-options-last-lrm-refresh" />
  524. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - </cluster_property_set>
  525. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - </crm_config>
  526. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - </configuration>
  527. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - </cib>
  528. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <cib epoch="117" num_updates="1" admin_epoch="5" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" update-origin="node2" update-client="cibadmin" cib-last-written="Mon Apr 9 19:19:59 2012" have-quorum="1" dc-uuid="645e09b4-aee5-4cec-a241-8bd4e03a78c3" >
  529. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <configuration >
  530. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <crm_config >
  531. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
  532. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1334021408" />
  533. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + </cluster_property_set>
  534. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + </crm_config>
  535. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + </configuration>
  536. Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + </cib>
  537. Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1345, version=5.117.1): ok (rc=0)
  538. Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1347, version=5.117.1): ok (rc=0)
  539. Apr 9 20:30:08 node2 crmd: [30192]: info: delete_resource: Removing resource p_drbd_vmstore:1 for 3951_mount2_resource (internal) on node1
  540. Apr 9 20:30:08 node2 crmd: [30192]: info: lrm_remove_deleted_op: Removing op p_drbd_vmstore:1_monitor_10000:684 for deleted resource p_drbd_vmstore:1
  541. Apr 9 20:30:08 node2 crmd: [30192]: info: notify_deleted: Notifying 3951_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
  542. Apr 9 20:30:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
  543. Apr 9 20:30:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021408-2395
  544. Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1348, version=5.117.2): ok (rc=0)
  545. Apr 9 20:30:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.117.1) : Non-status change
  546. Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1350, version=5.117.3): ok (rc=0)
  547. Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1346, ref=pe_calc-dc-1334021408-2396, seq=19, quorate=1
  548. Apr 9 20:30:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_0, magic=0:0;34:195:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.117.2) : Resource op removal
  549. Apr 9 20:30:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_0, magic=0:0;34:195:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.117.2) : Resource op removal
  550. Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1351: Requesting the current CIB: S_POLICY_ENGINE
  551. Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1352: Requesting the current CIB: S_POLICY_ENGINE
  552. Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1353: Requesting the current CIB: S_POLICY_ENGINE
  553. Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  554. Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
  555. Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active on node2
  556. Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
  557. Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
  558. Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
  559. Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
  560. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
  561. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
  562. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
  563. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
  564. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
  565. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
  566. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
  567. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
  568. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
  569. Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
  570. Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
  571. Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
  572. Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  573. Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  574. Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  575. Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  576. Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  577. Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  578. Apr 9 20:30:08 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
  579. Apr 9 20:30:08 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
  580. Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
  581. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
  582. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
  583. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
  584. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
  585. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
  586. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
  587. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
  588. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
  589. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
  590. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
  591. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
  592. Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
  593. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
  594. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
  595. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
  596. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
  597. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
  598. Apr 9 20:30:09 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1353, ref=pe_calc-dc-1334021409-2397, seq=19, quorate=1
  599. Apr 9 20:30:09 node2 crmd: [30192]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
  600. Apr 9 20:30:09 node2 crmd: [30192]: info: config_query_callback: Checking for expired actions every 900000ms
  601. Apr 9 20:30:09 node2 crmd: [30192]: info: handle_response: pe_calc calculation pe_calc-dc-1334021408-2396 is obsolete
  602. Apr 9 20:30:09 node2 pengine: [30361]: ERROR: process_pe_message: Transition 235: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-9.bz2
  603. Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  604. Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
  605. Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
  606. Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
  607. Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
  608. Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
  609. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
  610. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
  611. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
  612. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
  613. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
  614. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
  615. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
  616. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
  617. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
  618. Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
  619. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
  620. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
  621. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  622. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
  623. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  624. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
  625. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  626. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
  627. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  628. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
  629. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
  630. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  631. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
  632. Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  633. Apr 9 20:30:09 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
  634. Apr 9 20:30:09 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
  635. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
  636. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
  637. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
  638. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
  639. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
  640. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
  641. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
  642. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:1#011(node2)
  643. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
  644. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
  645. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
  646. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount1:1#011(Master -> Slave node2)
  647. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount2:0#011(Master -> Slave node2)
  648. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
  649. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Stop p_libvirt-bin#011(node1)
  650. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Stop p_libvirt-bin#011(node2)
  651. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Stop p_fs_vmstore#011(node2)
  652. Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Stop p_vm#011(node2)
  653. Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  654. Apr 9 20:30:09 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 236: 74 actions in 74 synapses
  655. Apr 9 20:30:09 node2 crmd: [30192]: info: do_te_invoke: Processing graph 236 (ref=pe_calc-dc-1334021409-2397) derived from /var/lib/pengine/pe-error-10.bz2
  656. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 28 fired and confirmed
  657. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
  658. Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 11: monitor p_drbd_vmstore:1_monitor_0 on node2 (local)
  659. Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=11:236:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_monitor_0 )
  660. Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 probe[784] (pid 20250)
  661. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
  662. Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 1: cancel p_drbd_mount1:1_monitor_10000 on node2 (local)
  663. Apr 9 20:30:09 node2 lrmd: [30189]: info: cancel_op: operation monitor[765] on p_drbd_mount1:1 for client 30192, its parameters: CRM_meta_clone=[1] drbd_resource=[tools] CRM_meta_master_node_max=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_master_max=[1] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.5] CRM_meta_name=[monitor] CRM_meta_role=[Master] CRM_meta_interval=[10000] CRM_meta_timeout=[20000] cancelled
  664. Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_monitor_10000 from 1:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2400
  665. Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2400 from node2
  666. Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_monitor_10000 (1) confirmed on node2 (rc=0)
  667. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 97 fired and confirmed
  668. Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 7: cancel p_drbd_mount2:0_monitor_10000 on node2 (local)
  669. Apr 9 20:30:09 node2 lrmd: [30189]: info: cancel_op: operation monitor[692] on p_drbd_mount2:0 for client 30192, its parameters: CRM_meta_clone=[0] drbd_resource=[crm] CRM_meta_master_node_max=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_master_max=[1] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.5] CRM_meta_name=[monitor] CRM_meta_role=[Master] CRM_meta_interval=[10000] CRM_meta_timeout=[20000] cancelled
  670. Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_monitor_10000 from 7:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2402
  671. Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2402 from node2
  672. Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_monitor_10000 (7) confirmed on node2 (rc=0)
  673. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 127 fired and confirmed
  674. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 137 fired and confirmed
  675. Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_monitor_10000 (call=765, status=1, cib-update=0, confirmed=true) Cancelled
  676. Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_monitor_10000 (call=692, status=1, cib-update=0, confirmed=true) Cancelled
  677. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
  678. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
  679. Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 163: notify p_drbd_mount1:1_pre_notify_demote_0 on node2 (local)
  680. Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=163:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
  681. Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[785] (pid 20253)
  682. Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 170: notify p_drbd_mount2:0_pre_notify_demote_0 on node2 (local)
  683. Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=170:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
  684. Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[786] (pid 20254)
  685. Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[786] on p_drbd_mount2:0 for client 30192: pid 20254 exited with return code 0
  686. Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 170:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2405
  687. Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2405 from node2
  688. Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (170) confirmed on node2 (rc=0)
  689. Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=786, rc=0, cib-update=0, confirmed=true) ok
  690. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 128 fired and confirmed
  691. Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[785] on p_drbd_mount1:1 for client 30192: pid 20253 exited with return code 0
  692. Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 163:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2406
  693. Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2406 from node2
  694. Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (163) confirmed on node2 (rc=0)
  695. Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=785, rc=0, cib-update=0, confirmed=true) ok
  696. Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 98 fired and confirmed
  697. Apr 9 20:30:09 node2 lrmd: [30189]: info: operation monitor[784] on p_drbd_vmstore:1 for client 30192: pid 20250 exited with return code 8
  698. Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_0 (call=784, rc=8, cib-update=1357, confirmed=true) master
  699. Apr 9 20:30:09 node2 crmd: [30192]: WARN: status_from_rc: Action 11 (p_drbd_vmstore:1_monitor_0) on node2 failed (target: 7 vs. rc: 8): Error
  700. Apr 9 20:30:09 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;11:236:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.117.6) : Event failed
  701. Apr 9 20:30:09 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1
  702. Apr 9 20:30:09 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
  703. Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_monitor_0 (11) confirmed on node2 (rc=4)
  704. Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 10: probe_complete probe_complete on node2 (local) - no waiting
  705. Apr 9 20:30:09 node2 crmd: [30192]: info: run_graph: ====================================================
  706. Apr 9 20:30:09 node2 crmd: [30192]: notice: run_graph: Transition 236 (Complete=16, Pending=0, Fired=0, Skipped=33, Incomplete=25, Source=/var/lib/pengine/pe-error-10.bz2): Stopped
  707. Apr 9 20:30:09 node2 crmd: [30192]: info: te_graph_trigger: Transition 236 is now complete
  708. Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_mount2d ]
  709. Apr 9 20:30:09 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
  710. Apr 9 20:30:09 node2 crmd: [30192]: info: do_pe_invoke: Query 1358: Requesting the current CIB: S_POLICY_ENGINE
  711. Apr 9 20:30:09 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1358, ref=pe_calc-dc-1334021409-2408, seq=19, quorate=1
  712. Apr 9 20:30:09 node2 pengine: [30361]: ERROR: process_pe_message: Transition 236: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-10.bz2
  713. Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  714. Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
Apr 9 20:30:09 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
Apr 9 20:30:09 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 9 20:30:09 node2 crmd: [30192]: WARN: destroy_action: Cancelling timer for action 1 (src=2331)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: destroy_action: Cancelling timer for action 7 (src=2332)
Apr 9 20:30:09 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 237: 60 actions in 60 synapses
Apr 9 20:30:09 node2 crmd: [30192]: info: do_te_invoke: Processing graph 237 (ref=pe_calc-dc-1334021409-2408) derived from /var/lib/pengine/pe-error-11.bz2
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 33 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 80 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 111 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 142 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 155: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=155:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[787] (pid 20381)
Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 166: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=166:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[788] (pid 20382)
Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 176: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=176:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[789] (pid 20383)
Apr 9 20:30:09 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[787] on p_drbd_vmstore:1 for client 30192: pid 20381 exited with return code 0
Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 155:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2412
Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2412 from node2
Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (155) confirmed on node2 (rc=0)
Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=787, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
Apr 9 20:30:09 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[788] on p_drbd_mount1:1 for client 30192: pid 20382 exited with return code 0
Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 166:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2413
Apr 9 20:30:09 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2413 from node2
Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[789] on p_drbd_mount2:0 for client 30192: pid 20383 exited with return code 0
Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (166) confirmed on node2 (rc=0)
Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=788, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 78 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 176:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2414
Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2414 from node2
Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (176) confirmed on node2 (rc=0)
Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=789, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 109 fired and confirmed
Apr 9 20:30:09 node2 crmd: [30192]: notice: run_graph: ====================================================
Apr 9 20:30:09 node2 crmd: [30192]: WARN: run_graph: Transition 237 (Complete=15, Pending=0, Fired=0, Skipped=0, Incomplete=45, Source=/var/lib/pengine/pe-error-11.bz2): Terminated
Apr 9 20:30:09 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Graph 237 (60 actions in 60 synapses): batch-limit=30 jobs, network-delay=60000ms
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 24]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 26]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 32]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 34]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 8 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 9 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 38]: Pending (id: p_drbd_vmstore:0_monitor_20000, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 44]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 11 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 12 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 13 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 43]: Pending (id: p_drbd_vmstore:1_monitor_10000, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 15 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 45]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 47]: Completed (id: ms_drbd_vmstore_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 16 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 17 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 18 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 45]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 44]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 19 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 20 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 70]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 22 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 78]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 24 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 25 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 26 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 77]: Pending (id: p_drbd_mount1:1_monitor_10000, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 27 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 28 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 81]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 29 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 30 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 31 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 79]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 78]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 32 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 33 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 34 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 35 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 106]: Pending (id: p_drbd_mount2:0_monitor_10000, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 36 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 38 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 39 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 112]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 41 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 42 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 43 is pending (priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 44 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 45 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 46 was confirmed (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 47 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 50 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 51 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 52 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 1]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 53 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 54 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 55 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 56 is pending (priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  995. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  996. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  997. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 57 is pending (priority: 0)
  998. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  999. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1000. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1001. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 58 is pending (priority: 0)
  1002. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
  1003. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  1004. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 59 is pending (priority: 0)
  1005. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: all_stopped, type: pseduo, priority: 0)
  1006. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  1007. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  1008. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1009. Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1010. Apr 9 20:30:09 node2 crmd: [30192]: info: te_graph_trigger: Transition 237 is now complete
  1011. Apr 9 20:30:09 node2 crmd: [30192]: info: notify_mount2d: Transition 237 status: done - <null>
  1012. Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_mount2d ]
  1013. Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
  1014. Apr 9 20:30:09 node2 pengine: [30361]: ERROR: process_pe_message: Transition 237: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-11.bz2
Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.117.6): ok (rc=0)
Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.117.6): ok (rc=0)
Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/18, version=5.117.7): ok (rc=0)
Apr 9 20:30:10 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_0, magic=0:7;15:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.117.7) : Resource op removal
Apr 9 20:30:10 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 9 20:30:10 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
Apr 9 20:30:10 node2 crmd: [30192]: info: do_pe_invoke: Query 1359: Requesting the current CIB: S_POLICY_ENGINE
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <cib admin_epoch="5" epoch="117" num_updates="7" >
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <configuration >
Apr 9 20:30:10 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.118.1) : Non-status change
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <crm_config >
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
Apr 9 20:30:10 node2 crmd: [30192]: info: do_pe_invoke: Query 1360: Requesting the current CIB: S_POLICY_ENGINE
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <nvpair value="1334021408" id="cib-bootstrap-options-last-lrm-refresh" />
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - </cluster_property_set>
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - </crm_config>
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - </configuration>
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - </cib>
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <cib epoch="118" num_updates="1" admin_epoch="5" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" update-origin="node2" update-client="crmd" cib-last-written="Mon Apr 9 20:30:08 2012" have-quorum="1" dc-uuid="645e09b4-aee5-4cec-a241-8bd4e03a78c3" >
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <configuration >
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <crm_config >
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1334021409" />
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + </cluster_property_set>
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + </crm_config>
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + </configuration>
Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + </cib>
Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node1/crmd/20, version=5.118.1): ok (rc=0)
Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=node1/crmd/21, version=5.118.2): ok (rc=0)
Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node1/crmd/23, version=5.118.3): ok (rc=0)
Apr 9 20:30:10 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1360, ref=pe_calc-dc-1334021410-2415, seq=19, quorate=1
Apr 9 20:30:10 node2 crmd: [30192]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:30:10 node2 crmd: [30192]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
Apr 9 20:30:10 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
Apr 9 20:30:10 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
Apr 9 20:30:10 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 9 20:30:10 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 238: 62 actions in 62 synapses
Apr 9 20:30:10 node2 crmd: [30192]: info: do_te_invoke: Processing graph 238 (ref=pe_calc-dc-1334021410-2415) derived from /var/lib/pengine/pe-error-12.bz2
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 26 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_rsc_command: Initiating action 10: monitor p_drbd_vmstore:0_monitor_0 on node1
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 143 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_rsc_command: Initiating action 156: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
Apr 9 20:30:10 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=156:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
Apr 9 20:30:10 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[790] (pid 20485)
Apr 9 20:30:10 node2 crmd: [30192]: info: te_rsc_command: Initiating action 167: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
Apr 9 20:30:10 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=167:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
Apr 9 20:30:10 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[791] (pid 20486)
Apr 9 20:30:10 node2 crmd: [30192]: info: te_rsc_command: Initiating action 177: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
Apr 9 20:30:10 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=177:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
Apr 9 20:30:10 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[792] (pid 20487)
Apr 9 20:30:10 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
Apr 9 20:30:10 node2 lrmd: [30189]: info: operation notify[792] on p_drbd_mount2:0 for client 30192: pid 20487 exited with return code 0
Apr 9 20:30:10 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
Apr 9 20:30:10 node2 lrmd: [30189]: info: operation notify[791] on p_drbd_mount1:1 for client 30192: pid 20486 exited with return code 0
Apr 9 20:30:10 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 177:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021410-2420
Apr 9 20:30:10 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021410-2420 from node2
Apr 9 20:30:10 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (177) confirmed on node2 (rc=0)
Apr 9 20:30:10 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=792, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:30:10 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 167:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021410-2421
Apr 9 20:30:10 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021410-2421 from node2
Apr 9 20:30:10 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (167) confirmed on node2 (rc=0)
Apr 9 20:30:10 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=791, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 82 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 79 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 113 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 110 fired and confirmed
Apr 9 20:30:10 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
Apr 9 20:30:10 node2 lrmd: [30189]: info: operation notify[790] on p_drbd_vmstore:1 for client 30192: pid 20485 exited with return code 0
Apr 9 20:30:10 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 156:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021410-2422
Apr 9 20:30:10 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021410-2422 from node2
Apr 9 20:30:10 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (156) confirmed on node2 (rc=0)
Apr 9 20:30:10 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=790, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Apr 9 20:30:10 node2 pengine: [30361]: ERROR: process_pe_message: Transition 238: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-12.bz2
Apr 9 20:30:11 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:0_monitor_0 (10) confirmed on node1 (rc=0)
Apr 9 20:30:11 node2 crmd: [30192]: info: te_rsc_command: Initiating action 9: probe_complete probe_complete on node1 - no waiting
Apr 9 20:30:11 node2 crmd: [30192]: notice: run_graph: ====================================================
Apr 9 20:30:11 node2 crmd: [30192]: WARN: run_graph: Transition 238 (Complete=17, Pending=0, Fired=0, Skipped=0, Incomplete=45, Source=/var/lib/pengine/pe-error-12.bz2): Terminated
Apr 9 20:30:11 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Graph 238 (62 actions in 62 synapses): batch-limit=30 jobs, network-delay=60000ms
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 25]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 24]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 24]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 26]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 27]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 24]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 26]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 33]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 32]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 32]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 34]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 35]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 32]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 34]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 8 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 155]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 9 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 39]: Pending (id: p_drbd_vmstore:0_monitor_20000, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 38]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 38]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 45]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 11 was confirmed (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 12 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 157]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 13 was confirmed (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 44]: Pending (id: p_drbd_vmstore:1_monitor_10000, loc: node2, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 15 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 50]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 155]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 157]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 16 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 49]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 46]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Completed (id: ms_drbd_vmstore_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 17 was confirmed (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 18 was confirmed (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 19 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 46]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 38]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 45]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 20 was confirmed (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 71]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 22 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 166]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 73]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 72]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 24 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 25 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 168]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 26 was confirmed (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 27 is pending (priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 78]: Pending (id: p_drbd_mount1:1_monitor_10000, loc: node2, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 28 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 166]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 168]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 29 is pending (priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 80]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 30 was confirmed (priority: 0)
  1239. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 31 was confirmed (priority: 0)
  1240. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 32 is pending (priority: 1000000)
  1241. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 80]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
  1242. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 72]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  1243. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
  1244. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 33 was confirmed (priority: 0)
  1245. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 34 is pending (priority: 1000000)
  1246. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 178]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
  1247. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  1248. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 35 was confirmed (priority: 0)
  1249. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 36 is pending (priority: 0)
  1250. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 107]: Pending (id: p_drbd_mount2:0_monitor_10000, loc: node2, priority: 0)
  1251. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  1252. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 1000000)
  1253. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 179]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
  1254. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  1255. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 38 is pending (priority: 0)
  1256. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 109]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
  1257. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 108]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  1258. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  1259. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 39 is pending (priority: 0)
  1260. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  1261. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1262. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
  1263. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 1000000)
  1264. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 115]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
  1265. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  1266. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 178]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
  1267. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 179]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
  1268. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 41 is pending (priority: 1000000)
  1269. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 114]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
  1270. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 111]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
  1271. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
  1272. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 42 was confirmed (priority: 0)
  1273. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 43 was confirmed (priority: 0)
  1274. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 44 is pending (priority: 1000000)
  1275. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 111]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
  1276. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 108]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  1277. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
  1278. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 45 was confirmed (priority: 0)
  1279. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 46 is pending (priority: 0)
  1280. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 144]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
  1281. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  1282. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  1283. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1284. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1285. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1286. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 47 was confirmed (priority: 0)
  1287. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
  1288. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 142]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
  1289. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  1290. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  1291. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  1292. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1293. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
  1294. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1295. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
  1296. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 50 is pending (priority: 0)
  1297. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  1298. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1299. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  1300. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  1301. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1302. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 51 is pending (priority: 0)
  1303. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  1304. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1305. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1306. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1307. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 52 is pending (priority: 0)
  1308. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 134]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  1309. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1310. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1311. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1312. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 53 is pending (priority: 0)
  1313. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 1]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
  1314. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  1315. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 54 is pending (priority: 0)
  1316. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  1317. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1318. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  1319. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1320. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1321. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 55 is pending (priority: 0)
  1322. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1323. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1324. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1325. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1326. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 56 is pending (priority: 0)
  1327. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
  1328. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  1329. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 57 is pending (priority: 0)
  1330. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  1331. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1332. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  1333. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1334. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1335. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 58 is pending (priority: 0)
  1336. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1337. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1338. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1339. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 59 is pending (priority: 0)
  1340. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
  1341. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  1342. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 60 was confirmed (priority: 1000000)
  1343. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 61 is pending (priority: 0)
  1344. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: all_stopped, type: pseduo, priority: 0)
  1345. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  1346. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  1347. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1348. Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1349. Apr 9 20:30:11 node2 crmd: [30192]: info: te_graph_trigger: Transition 238 is now complete
  1350. Apr 9 20:30:11 node2 crmd: [30192]: info: notify_mount2d: Transition 238 status: done - <null>
  1351. Apr 9 20:30:11 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_mount2d ]
  1352. Apr 9 20:30:11 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
  1353. Apr 9 20:31:58 node2 crmd: [30192]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
  1354. Apr 9 20:31:58 node2 crmd: [30192]: info: notify_deleted: Notifying 4363_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
  1355. Apr 9 20:31:58 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
  1356. Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-2424
  1357. Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/27, version=5.118.6): ok (rc=0)
  1358. Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=local/crmd/1362, version=5.118.7): ok (rc=0)
  1359. Apr 9 20:31:58 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
  1360. Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-2425
  1361. Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_0, magic=0:7;10:238:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.118.7) : Resource op removal
  1362. Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  1363. Apr 9 20:31:58 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
  1364. Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1365: Requesting the current CIB: S_POLICY_ENGINE
  1365. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <cib admin_epoch="5" epoch="118" num_updates="7" >
  1366. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <configuration >
  1367. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <crm_config >
  1368. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
  1369. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <nvpair value="1334021409" id="cib-bootstrap-options-last-lrm-refresh" />
  1370. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - </cluster_property_set>
  1371. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - </crm_config>
  1372. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - </configuration>
  1373. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - </cib>
  1374. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <cib epoch="119" num_updates="1" admin_epoch="5" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" update-origin="node1" update-client="crmd" cib-last-written="Mon Apr 9 20:30:10 2012" have-quorum="1" dc-uuid="645e09b4-aee5-4cec-a241-8bd4e03a78c3" >
  1375. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <configuration >
  1376. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <crm_config >
  1377. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
  1378. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1334021518" />
  1379. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + </cluster_property_set>
  1380. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + </crm_config>
  1381. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + </configuration>
  1382. Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + </cib>
  1383. Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1364, version=5.119.1): ok (rc=0)
  1384. Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1366, version=5.119.1): ok (rc=0)
  1385. Apr 9 20:31:58 node2 crmd: [30192]: info: delete_resource: Removing resource p_drbd_vmstore:1 for 4363_mount2_resource (internal) on node1
  1386. Apr 9 20:31:58 node2 crmd: [30192]: info: notify_deleted: Notifying 4363_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
  1387. Apr 9 20:31:58 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
  1388. Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-2426
  1389. Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1367, version=5.119.2): ok (rc=0)
  1390. Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.119.1) : Non-status change
  1391. Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1369, version=5.119.3): ok (rc=0)
  1392. Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1365, ref=pe_calc-dc-1334021518-2427, seq=19, quorate=1
  1393. Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;11:236:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.2) : Resource op removal
  1394. Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;11:236:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.2) : Resource op removal
  1395. Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1370: Requesting the current CIB: S_POLICY_ENGINE
  1396. Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1371: Requesting the current CIB: S_POLICY_ENGINE
  1397. Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1372: Requesting the current CIB: S_POLICY_ENGINE
  1398. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  1399. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
  1400. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
  1401. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
  1402. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
  1403. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
  1404. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
  1405. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
  1406. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
  1407. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
  1408. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
  1409. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
  1410. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
  1411. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
  1412. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
  1413. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
  1414. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
  1415. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
  1416. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
  1417. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1418. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
  1419. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1420. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
  1421. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1422. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  1423. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1424. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  1425. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  1426. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  1427. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  1428. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  1429. Apr 9 20:31:58 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
  1430. Apr 9 20:31:58 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
  1431. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
  1432. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
  1433. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
  1434. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
  1435. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
  1436. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
  1437. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
  1438. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
  1439. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
  1440. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
  1441. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
  1442. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
  1443. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
  1444. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
  1445. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
  1446. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
  1447. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
  1448. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
  1449. Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1372, ref=pe_calc-dc-1334021518-2428, seq=19, quorate=1
  1450. Apr 9 20:31:58 node2 crmd: [30192]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
  1451. Apr 9 20:31:58 node2 crmd: [30192]: info: config_query_callback: Checking for expired actions every 900000ms
  1452. Apr 9 20:31:58 node2 crmd: [30192]: info: handle_response: pe_calc calculation pe_calc-dc-1334021518-2427 is obsolete
  1453. Apr 9 20:31:58 node2 pengine: [30361]: ERROR: process_pe_message: Transition 239: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-13.bz2
  1454. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  1455. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
  1456. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
  1457. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
  1458. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
  1459. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
  1460. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
  1461. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
  1462. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
  1463. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
  1464. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
  1465. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
  1466. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
  1467. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
  1468. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
  1469. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
  1470. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
  1471. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
  1472. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1473. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
  1474. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1475. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
  1476. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1477. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
  1478. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1479. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
  1480. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
  1481. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  1482. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
  1483. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  1484. Apr 9 20:31:58 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
  1485. Apr 9 20:31:58 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
  1486. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
  1487. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
  1488. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
  1489. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
  1490. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
  1491. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
  1492. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
  1493. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:1#011(node2)
  1494. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
  1495. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
  1496. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
  1497. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount1:1#011(Master -> Slave node2)
  1498. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount2:0#011(Master -> Slave node2)
  1499. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
  1500. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Stop p_libvirt-bin#011(node1)
  1501. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Stop p_libvirt-bin#011(node2)
  1502. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Stop p_fs_vmstore#011(node2)
  1503. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Stop p_vm#011(node2)
  1504. Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1505. Apr 9 20:31:58 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 240: 72 actions in 72 synapses
  1506. Apr 9 20:31:58 node2 crmd: [30192]: info: do_te_invoke: Processing graph 240 (ref=pe_calc-dc-1334021518-2428) derived from /var/lib/pengine/pe-error-14.bz2
  1507. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 26 fired and confirmed
  1508. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
  1509. Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 9: monitor p_drbd_vmstore:1_monitor_0 on node2 (local)
  1510. Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=9:240:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_monitor_0 )
  1511. Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 probe[793] (pid 24510)
  1512. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
  1513. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 95 fired and confirmed
  1514. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 125 fired and confirmed
  1515. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 135 fired and confirmed
  1516. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
  1517. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 42 fired and confirmed
  1518. Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 161: notify p_drbd_mount1:1_pre_notify_demote_0 on node2 (local)
  1519. Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=161:240:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
  1520. Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[794] (pid 24511)
  1521. Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 168: notify p_drbd_mount2:0_pre_notify_demote_0 on node2 (local)
  1522. Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=168:240:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
  1523. Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[795] (pid 24514)
  1524. Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[794] on p_drbd_mount1:1 for client 30192: pid 24511 exited with return code 0
  1525. Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[795] on p_drbd_mount2:0 for client 30192: pid 24514 exited with return code 0
  1526. Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 161:240:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2432
  1527. Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2432 from node2
  1528. Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (161) confirmed on node2 (rc=0)
  1529. Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=794, rc=0, cib-update=0, confirmed=true) ok
  1530. Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 168:240:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2433
  1531. Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2433 from node2
  1532. Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (168) confirmed on node2 (rc=0)
  1533. Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=795, rc=0, cib-update=0, confirmed=true) ok
  1534. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 96 fired and confirmed
  1535. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 126 fired and confirmed
  1536. Apr 9 20:31:58 node2 lrmd: [30189]: info: operation monitor[793] on p_drbd_vmstore:1 for client 30192: pid 24510 exited with return code 8
  1537. Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_0 (call=793, rc=8, cib-update=1374, confirmed=true) master
  1538. Apr 9 20:31:58 node2 crmd: [30192]: WARN: status_from_rc: Action 9 (p_drbd_vmstore:1_monitor_0) on node2 failed (target: 7 vs. rc: 8): Error
  1539. Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;9:240:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.4) : Event failed
  1540. Apr 9 20:31:58 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1
  1541. Apr 9 20:31:58 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
  1542. Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_monitor_0 (9) confirmed on node2 (rc=4)
  1543. Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 8: probe_complete probe_complete on node2 (local) - no waiting
  1544. Apr 9 20:31:58 node2 crmd: [30192]: info: run_graph: ====================================================
  1545. Apr 9 20:31:58 node2 crmd: [30192]: notice: run_graph: Transition 240 (Complete=14, Pending=0, Fired=0, Skipped=33, Incomplete=25, Source=/var/lib/pengine/pe-error-14.bz2): Stopped
  1546. Apr 9 20:31:58 node2 crmd: [30192]: info: te_graph_trigger: Transition 240 is now complete
  1547. Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_mount2d ]
  1548. Apr 9 20:31:58 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
  1549. Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1375: Requesting the current CIB: S_POLICY_ENGINE
  1550. Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1375, ref=pe_calc-dc-1334021518-2435, seq=19, quorate=1
  1551. Apr 9 20:31:58 node2 pengine: [30361]: ERROR: process_pe_message: Transition 240: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-14.bz2
  1552. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  1553. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
  1554. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
  1555. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
  1556. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
  1557. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
  1558. Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
  1559. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
  1560. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
  1561. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
  1562. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
  1563. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
  1564. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
  1565. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
  1566. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
  1567. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
  1568. Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
  1569. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
  1570. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
  1571. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1572. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
  1573. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1574. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
  1575. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1576. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  1577. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1578. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  1579. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  1580. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  1581. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  1582. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  1583. Apr 9 20:31:58 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
  1584. Apr 9 20:31:58 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
  1585. Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
  1586. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
  1587. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
  1588. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
  1589. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
  1590. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
  1591. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
  1592. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
  1593. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
  1594. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
  1595. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
  1596. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
  1597. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
  1598. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
  1599. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
  1600. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
  1601. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
  1602. Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
  1603. Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1604. Apr 9 20:31:58 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 241: 60 actions in 60 synapses
  1605. Apr 9 20:31:58 node2 crmd: [30192]: info: do_te_invoke: Processing graph 241 (ref=pe_calc-dc-1334021518-2435) derived from /var/lib/pengine/pe-error-15.bz2
  1606. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
  1607. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 33 fired and confirmed
  1608. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
  1609. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 80 fired and confirmed
  1610. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 111 fired and confirmed
  1611. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 142 fired and confirmed
  1612. Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 155: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
  1613. Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=155:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
  1614. Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[796] (pid 24581)
  1615. Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 166: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
  1616. Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=166:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
  1617. Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[797] (pid 24582)
  1618. Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 176: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
  1619. Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=176:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
  1620. Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[798] (pid 24583)
  1621. Apr 9 20:31:58 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
  1622. Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[796] on p_drbd_vmstore:1 for client 30192: pid 24581 exited with return code 0
  1623. Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 155:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2439
  1624. Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2439 from node2
  1625. Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (155) confirmed on node2 (rc=0)
  1626. Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=796, rc=0, cib-update=0, confirmed=true) ok
  1627. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
  1628. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
  1629. Apr 9 20:31:58 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
  1630. Apr 9 20:31:58 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
  1631. Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[797] on p_drbd_mount1:1 for client 30192: pid 24582 exited with return code 0
  1632. Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[798] on p_drbd_mount2:0 for client 30192: pid 24583 exited with return code 0
  1633. Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 166:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2440
  1634. Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2440 from node2
  1635. Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (166) confirmed on node2 (rc=0)
  1636. Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=797, rc=0, cib-update=0, confirmed=true) ok
  1637. Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 176:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2441
  1638. Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2441 from node2
  1639. Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (176) confirmed on node2 (rc=0)
  1640. Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=798, rc=0, cib-update=0, confirmed=true) ok
  1641. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
  1642. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 78 fired and confirmed
  1643. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
  1644. Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 109 fired and confirmed
  1645. Apr 9 20:31:58 node2 crmd: [30192]: notice: run_graph: ====================================================
  1646. Apr 9 20:31:58 node2 crmd: [30192]: WARN: run_graph: Transition 241 (Complete=15, Pending=0, Fired=0, Skipped=0, Incomplete=45, Source=/var/lib/pengine/pe-error-15.bz2): Terminated
  1647. Apr 9 20:31:58 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
  1648. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Graph 241 (60 actions in 60 synapses): batch-limit=30 jobs, network-delay=60000ms
  1649. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
  1650. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 24]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
  1651. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
  1652. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
  1653. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
  1654. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1655. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
  1656. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
  1657. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 26]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
  1658. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
  1659. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
  1660. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
  1661. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
  1662. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 32]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
  1663. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
  1664. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
  1665. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
  1666. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1667. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
  1668. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
  1669. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 34]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
  1670. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
  1671. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
  1672. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
  1673. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 8 is pending (priority: 1000000)
  1674. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 1000000)
  1675. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
  1676. Apr 9 20:31:58 node2 pengine: [30361]: ERROR: process_pe_message: Transition 241: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-15.bz2
  1677. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 9 is pending (priority: 0)
  1678. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 38]: Pending (id: p_drbd_vmstore:0_monitor_20000, loc: node1, priority: 0)
  1679. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
  1680. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  1681. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
  1682. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
  1683. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1684. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 44]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
  1685. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 11 is pending (priority: 1000000)
  1686. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 1000000)
  1687. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
  1688. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 12 was confirmed (priority: 0)
  1689. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 13 is pending (priority: 0)
  1690. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 43]: Pending (id: p_drbd_vmstore:1_monitor_10000, loc: node2, priority: 0)
  1691. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  1692. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 1000000)
  1693. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
  1694. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
  1695. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 0)
  1696. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 0)
  1697. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 15 is pending (priority: 1000000)
  1698. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 1000000)
  1699. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 45]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 0)
  1700. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 47]: Completed (id: ms_drbd_vmstore_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
  1701. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 16 was confirmed (priority: 0)
  1702. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 17 was confirmed (priority: 0)
  1703. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 18 is pending (priority: 1000000)
  1704. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 45]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 1000000)
  1705. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
  1706. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 44]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
  1707. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 19 was confirmed (priority: 0)
  1708. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 20 is pending (priority: 0)
  1709. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 70]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
  1710. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1711. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 1000000)
  1712. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
  1713. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
  1714. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 22 is pending (priority: 0)
  1715. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
  1716. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  1717. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  1718. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 0)
  1719. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  1720. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1721. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 78]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
  1722. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 24 is pending (priority: 1000000)
  1723. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
  1724. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
  1725. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 25 was confirmed (priority: 0)
  1726. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 26 is pending (priority: 0)
  1727. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 77]: Pending (id: p_drbd_mount1:1_monitor_10000, loc: node2, priority: 0)
  1728. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  1729. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 27 is pending (priority: 1000000)
  1730. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
  1731. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
  1732. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
  1733. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
  1734. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 28 is pending (priority: 1000000)
  1735. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
  1736. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
  1737. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 81]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
  1738. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 29 was confirmed (priority: 0)
  1739. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 30 was confirmed (priority: 0)
  1740. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 31 is pending (priority: 1000000)
  1741. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 79]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
  1742. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  1743. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 78]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
  1744. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 32 was confirmed (priority: 0)
  1745. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 33 is pending (priority: 1000000)
  1746. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
  1747. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  1748. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 34 was confirmed (priority: 0)
  1749. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 35 is pending (priority: 0)
  1750. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 106]: Pending (id: p_drbd_mount2:0_monitor_10000, loc: node2, priority: 0)
  1751. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  1752. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 36 is pending (priority: 1000000)
  1753. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
  1754. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  1755. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 0)
  1756. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
  1757. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  1758. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  1759. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 38 is pending (priority: 0)
  1760. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  1761. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1762. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
  1763. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 39 is pending (priority: 1000000)
  1764. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
  1765. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  1766. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
  1767. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
  1768. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 1000000)
  1769. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
  1770. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
  1771. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 112]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
  1772. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 41 was confirmed (priority: 0)
  1773. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 42 was confirmed (priority: 0)
  1774. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 43 is pending (priority: 1000000)
  1775. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
  1776. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  1777. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
  1778. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 44 was confirmed (priority: 0)
  1779. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 45 is pending (priority: 0)
  1780. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
  1781. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  1782. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  1783. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1784. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1785. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1786. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 46 was confirmed (priority: 0)
  1787. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 47 is pending (priority: 0)
  1788. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
  1789. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  1790. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  1791. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  1792. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1793. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
  1794. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1795. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
  1796. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
  1797. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  1798. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1799. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  1800. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  1801. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1802. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 50 is pending (priority: 0)
  1803. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  1804. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1805. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1806. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1807. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 51 is pending (priority: 0)
  1808. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  1809. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1810. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1811. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1812. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 52 is pending (priority: 0)
  1813. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 1]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
  1814. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  1815. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 53 is pending (priority: 0)
  1816. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  1817. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1818. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  1819. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1820. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1821. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 54 is pending (priority: 0)
  1822. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1823. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1824. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1825. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1826. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 55 is pending (priority: 0)
  1827. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
  1828. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  1829. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 56 is pending (priority: 0)
  1830. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  1831. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1832. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  1833. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1834. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  1835. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 57 is pending (priority: 0)
  1836. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1837. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  1838. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  1839. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 58 is pending (priority: 0)
  1840. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
  1841. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  1842. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 59 is pending (priority: 0)
  1843. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: all_stopped, type: pseduo, priority: 0)
  1844. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  1845. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  1846. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  1847. Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  1848. Apr 9 20:31:58 node2 crmd: [30192]: info: te_graph_trigger: Transition 241 is now complete
  1849. Apr 9 20:31:58 node2 crmd: [30192]: info: notify_crmd: Transition 241 status: done - <null>
  1850. Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1851. Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
  1852. Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.119.4): ok (rc=0)
  1853. Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/28, version=5.119.5): ok (rc=0)
  1854. Apr 9 20:31:59 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_0, magic=0:7;10:238:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.5) : Resource op removal
  1855. Apr 9 20:31:59 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  1856. Apr 9 20:31:59 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
  1857. Apr 9 20:31:59 node2 crmd: [30192]: info: do_pe_invoke: Query 1376: Requesting the current CIB: S_POLICY_ENGINE
  1858. Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node1/crmd/30, version=5.119.6): ok (rc=0)
  1859. Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=node1/crmd/31, version=5.119.7): ok (rc=0)
  1860. Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node1/crmd/33, version=5.119.8): ok (rc=0)
  1861. Apr 9 20:31:59 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1376, ref=pe_calc-dc-1334021519-2442, seq=19, quorate=1
  1862. Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  1863. Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
  1864. Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
  1865. Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
  1866. Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
  1867. Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
  1868. Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
  1869. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
  1870. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
  1871. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
  1872. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
  1873. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
  1874. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
  1875. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
  1876. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
  1877. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
  1878. Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
  1879. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
  1880. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
  1881. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1882. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
  1883. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1884. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
  1885. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1886. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  1887. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1888. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  1889. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  1890. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  1891. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  1892. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  1893. Apr 9 20:31:59 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
  1894. Apr 9 20:31:59 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
  1895. Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
  1896. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
  1897. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
  1898. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
  1899. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
  1900. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
  1901. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
  1902. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
  1903. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
  1904. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
  1905. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
  1906. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
  1907. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
  1908. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
  1909. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
  1910. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
  1911. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
  1912. Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
  1913. Apr 9 20:31:59 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1914. Apr 9 20:31:59 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 242: 62 actions in 62 synapses
  1915. Apr 9 20:31:59 node2 crmd: [30192]: info: do_te_invoke: Processing graph 242 (ref=pe_calc-dc-1334021519-2442) derived from /var/lib/pengine/pe-error-16.bz2
  1916. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 26 fired and confirmed
  1917. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
  1918. Apr 9 20:31:59 node2 crmd: [30192]: info: te_rsc_command: Initiating action 10: monitor p_drbd_vmstore:0_monitor_0 on node1
  1919. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
  1920. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
  1921. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
  1922. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 143 fired and confirmed
  1923. Apr 9 20:31:59 node2 crmd: [30192]: info: te_rsc_command: Initiating action 156: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
  1924. Apr 9 20:31:59 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=156:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
  1925. Apr 9 20:31:59 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[799] (pid 24759)
  1926. Apr 9 20:31:59 node2 crmd: [30192]: info: te_rsc_command: Initiating action 167: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
  1927. Apr 9 20:31:59 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=167:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
  1928. Apr 9 20:31:59 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[800] (pid 24760)
  1929. Apr 9 20:31:59 node2 crmd: [30192]: info: te_rsc_command: Initiating action 177: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
  1930. Apr 9 20:31:59 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=177:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
  1931. Apr 9 20:31:59 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[801] (pid 24761)
  1932. Apr 9 20:31:59 node2 lrmd: [30189]: info: operation notify[800] on p_drbd_mount1:1 for client 30192: pid 24760 exited with return code 0
  1933. Apr 9 20:31:59 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
  1934. Apr 9 20:31:59 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 167:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021519-2447
  1935. Apr 9 20:31:59 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021519-2447 from node2
  1936. Apr 9 20:31:59 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (167) confirmed on node2 (rc=0)
  1937. Apr 9 20:31:59 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
  1938. Apr 9 20:31:59 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=800, rc=0, cib-update=0, confirmed=true) ok
  1939. Apr 9 20:31:59 node2 lrmd: [30189]: info: operation notify[799] on p_drbd_vmstore:1 for client 30192: pid 24759 exited with return code 0
  1940. Apr 9 20:31:59 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 156:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021519-2448
  1941. Apr 9 20:31:59 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021519-2448 from node2
  1942. Apr 9 20:31:59 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (156) confirmed on node2 (rc=0)
  1943. Apr 9 20:31:59 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=799, rc=0, cib-update=0, confirmed=true) ok
  1944. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
  1945. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
  1946. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 82 fired and confirmed
  1947. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 79 fired and confirmed
  1948. Apr 9 20:31:59 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
  1949. Apr 9 20:31:59 node2 lrmd: [30189]: info: operation notify[801] on p_drbd_mount2:0 for client 30192: pid 24761 exited with return code 0
  1950. Apr 9 20:31:59 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 177:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021519-2449
  1951. Apr 9 20:31:59 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021519-2449 from node2
  1952. Apr 9 20:31:59 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (177) confirmed on node2 (rc=0)
  1953. Apr 9 20:31:59 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=801, rc=0, cib-update=0, confirmed=true) ok
  1954. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 113 fired and confirmed
  1955. Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 110 fired and confirmed
  1956. Apr 9 20:31:59 node2 pengine: [30361]: ERROR: process_pe_message: Transition 242: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-16.bz2
  1957. Apr 9 20:32:00 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.119.8): ok (rc=0)
  1958. Apr 9 20:32:00 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5-master-p_drbd_vmstore.0, name=master-p_drbd_vmstore:0, value=10000, magic=NA, cib=5.119.10) : Transient attribute: update
  1959. Apr 9 20:32:00 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
  1960. Apr 9 20:32:00 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
  1961. Apr 9 20:32:00 node2 crmd: [30192]: WARN: status_from_rc: Action 10 (p_drbd_vmstore:0_monitor_0) on node1 failed (target: 7 vs. rc: 0): Error
  1962. Apr 9 20:32:00 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_failure_0, magic=0:0;10:242:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.11) : Event failed
  1963. Apr 9 20:32:00 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:0_monitor_0 (10) confirmed on node1 (rc=4)
  1964. Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 9: probe_complete probe_complete on node1 - no waiting
  1965. Apr 9 20:32:00 node2 crmd: [30192]: info: run_graph: ====================================================
  1966. Apr 9 20:32:00 node2 crmd: [30192]: notice: run_graph: Transition 242 (Complete=17, Pending=0, Fired=0, Skipped=28, Incomplete=17, Source=/var/lib/pengine/pe-error-16.bz2): Stopped
  1967. Apr 9 20:32:00 node2 crmd: [30192]: info: te_graph_trigger: Transition 242 is now complete
  1968. Apr 9 20:32:00 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_mount2d ]
  1969. Apr 9 20:32:00 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
  1970. Apr 9 20:32:00 node2 crmd: [30192]: info: do_pe_invoke: Query 1377: Requesting the current CIB: S_POLICY_ENGINE
  1971. Apr 9 20:32:00 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1377, ref=pe_calc-dc-1334021520-2451, seq=19, quorate=1
  1972. Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  1973. Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:0_last_failure_0 found resource p_drbd_vmstore:0 active on node1
  1974. Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
  1975. Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
  1976. Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
  1977. Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
  1978. Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
  1979. Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
  1980. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
  1981. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
  1982. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
  1983. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
  1984. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
  1985. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
  1986. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
  1987. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
  1988. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
  1989. Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
  1990. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
  1991. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
  1992. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1993. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
  1994. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
  1995. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
  1996. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1997. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  1998. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  1999. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  2000. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  2001. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  2002. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  2003. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  2004. Apr 9 20:32:00 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
  2005. Apr 9 20:32:00 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
  2006. Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
  2007. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
  2008. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
  2009. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
  2010. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
  2011. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
  2012. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
  2013. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:0#011(Slave node1)
  2014. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
  2015. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
  2016. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
  2017. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
  2018. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
  2019. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
  2020. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
  2021. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
  2022. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
  2023. Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
  2024. Apr 9 20:32:00 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2025. Apr 9 20:32:00 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 243: 50 actions in 50 synapses
  2026. Apr 9 20:32:00 node2 crmd: [30192]: info: do_te_invoke: Processing graph 243 (ref=pe_calc-dc-1334021520-2451) derived from /var/lib/pengine/pe-error-17.bz2
  2027. Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
  2028. Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 33 fired and confirmed
  2029. Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 40: monitor p_drbd_vmstore:0_monitor_20000 on node1
  2030. Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 45: monitor p_drbd_vmstore:1_monitor_10000 on node2 (local)
  2031. Apr 9 20:32:00 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=45:243:8:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_monitor_10000 )
  2032. Apr 9 20:32:00 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 monitor[802] (pid 24866)
  2033. Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 82 fired and confirmed
  2034. Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 113 fired and confirmed
  2035. Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 144 fired and confirmed
  2036. Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 172: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
  2037. Apr 9 20:32:00 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=172:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
  2038. Apr 9 20:32:00 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[803] (pid 24867)
  2039. Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 182: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
  2040. Apr 9 20:32:00 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=182:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
  2041. Apr 9 20:32:00 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[804] (pid 24868)
  2042. Apr 9 20:32:00 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
  2043. Apr 9 20:32:00 node2 lrmd: [30189]: info: operation notify[803] on p_drbd_mount1:1 for client 30192: pid 24867 exited with return code 0
  2044. Apr 9 20:32:00 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
  2045. Apr 9 20:32:00 node2 lrmd: [30189]: info: operation notify[804] on p_drbd_mount2:0 for client 30192: pid 24868 exited with return code 0
  2046. Apr 9 20:32:00 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 172:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021520-2456
  2047. Apr 9 20:32:00 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021520-2456 from node2
  2048. Apr 9 20:32:00 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (172) confirmed on node2 (rc=0)
  2049. Apr 9 20:32:00 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=803, rc=0, cib-update=0, confirmed=true) ok
  2050. Apr 9 20:32:00 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 182:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021520-2457
  2051. Apr 9 20:32:00 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021520-2457 from node2
  2052. Apr 9 20:32:00 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (182) confirmed on node2 (rc=0)
  2053. Apr 9 20:32:00 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=804, rc=0, cib-update=0, confirmed=true) ok
  2054. Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 83 fired and confirmed
  2055. Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 80 fired and confirmed
  2056. Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 114 fired and confirmed
  2057. Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 111 fired and confirmed
  2058. Apr 9 20:32:00 node2 lrmd: [30189]: info: operation monitor[802] on p_drbd_vmstore:1 for client 30192: pid 24866 exited with return code 8
  2059. Apr 9 20:32:00 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_10000 (call=802, rc=8, cib-update=1378, confirmed=false) master
  2060. Apr 9 20:32:00 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_monitor_10000 (45) confirmed on node2 (rc=0)
  2061. Apr 9 20:32:00 node2 pengine: [30361]: ERROR: process_pe_message: Transition 243: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-17.bz2
  2062. Apr 9 20:32:02 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:0_monitor_20000 (40) confirmed on node1 (rc=0)
  2063. Apr 9 20:32:02 node2 crmd: [30192]: notice: run_graph: ====================================================
  2064. Apr 9 20:32:02 node2 crmd: [30192]: WARN: run_graph: Transition 243 (Complete=13, Pending=0, Fired=0, Skipped=0, Incomplete=37, Source=/var/lib/pengine/pe-error-17.bz2): Terminated
  2065. Apr 9 20:32:02 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
  2066. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Graph 243 (50 actions in 50 synapses): batch-limit=30 jobs, network-delay=60000ms
  2067. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
  2068. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 24]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
  2069. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
  2070. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
  2071. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
  2072. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2073. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
  2074. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
  2075. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 26]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
  2076. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
  2077. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
  2078. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
  2079. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
  2080. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 32]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
  2081. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
  2082. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
  2083. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
  2084. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2085. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
  2086. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
  2087. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 34]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
  2088. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
  2089. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
  2090. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
  2091. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 8 was confirmed (priority: 0)
  2092. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 9 was confirmed (priority: 0)
  2093. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
  2094. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
  2095. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2096. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 11 is pending (priority: 1000000)
  2097. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 171]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
  2098. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
  2099. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 12 is pending (priority: 0)
  2100. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 74]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
  2101. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  2102. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 85]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  2103. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 13 is pending (priority: 0)
  2104. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  2105. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2106. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 80]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
  2107. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 1000000)
  2108. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 173]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
  2109. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
  2110. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 15 was confirmed (priority: 0)
  2111. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 16 is pending (priority: 0)
  2112. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 79]: Pending (id: p_drbd_mount1:1_monitor_10000, loc: node2, priority: 0)
  2113. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 85]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  2114. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 17 is pending (priority: 1000000)
  2115. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 85]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
  2116. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
  2117. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 171]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
  2118. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 173]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
  2119. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 18 is pending (priority: 1000000)
  2120. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 84]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
  2121. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 81]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
  2122. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
  2123. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 19 was confirmed (priority: 0)
  2124. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 20 was confirmed (priority: 0)
  2125. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 1000000)
  2126. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 81]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
  2127. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
  2128. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 80]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
  2129. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 22 was confirmed (priority: 0)
  2130. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 1000000)
  2131. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 183]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
  2132. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  2133. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 24 was confirmed (priority: 0)
  2134. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 25 is pending (priority: 0)
  2135. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:0_monitor_10000, loc: node2, priority: 0)
  2136. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 116]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  2137. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 26 is pending (priority: 1000000)
  2138. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 184]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
  2139. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  2140. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 27 is pending (priority: 0)
  2141. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 110]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
  2142. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  2143. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 116]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
  2144. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 28 is pending (priority: 0)
  2145. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 109]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  2146. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2147. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 111]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
  2148. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 29 is pending (priority: 1000000)
  2149. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 116]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
  2150. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
  2151. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 183]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
  2152. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 184]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
  2153. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 30 is pending (priority: 1000000)
  2154. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 115]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
  2155. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 112]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
  2156. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
  2157. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 31 was confirmed (priority: 0)
  2158. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 32 was confirmed (priority: 0)
  2159. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 33 is pending (priority: 1000000)
  2160. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 112]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
  2161. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
  2162. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 111]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
  2163. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 34 was confirmed (priority: 0)
  2164. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 35 is pending (priority: 0)
  2165. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 145]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
  2166. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  2167. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  2168. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  2169. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  2170. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  2171. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 36 was confirmed (priority: 0)
  2172. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 0)
  2173. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 143]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
  2174. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  2175. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  2176. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  2177. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  2178. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 38 is pending (priority: 0)
  2179. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  2180. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 145]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
  2181. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 39 is pending (priority: 0)
  2182. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  2183. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2184. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  2185. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  2186. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  2187. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 0)
  2188. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  2189. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2190. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  2191. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  2192. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 41 is pending (priority: 0)
  2193. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  2194. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2195. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  2196. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  2197. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 42 is pending (priority: 0)
  2198. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 1]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
  2199. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  2200. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 43 is pending (priority: 0)
  2201. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  2202. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2203. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
  2204. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  2205. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  2206. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 44 is pending (priority: 0)
  2207. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  2208. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2209. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  2210. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  2211. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 45 is pending (priority: 0)
  2212. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
  2213. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  2214. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 46 is pending (priority: 0)
  2215. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  2216. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2217. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
  2218. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  2219. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
  2220. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 47 is pending (priority: 0)
  2221. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  2222. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
  2223. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
  2224. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
  2225. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
  2226. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
  2227. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
  2228. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: all_stopped, type: pseduo, priority: 0)
  2229. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
  2230. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
  2231. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
  2232. Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
  2233. Apr 9 20:32:02 node2 crmd: [30192]: info: te_graph_trigger: Transition 243 is now complete
  2234. Apr 9 20:32:02 node2 crmd: [30192]: info: notify_mount2d: Transition 243 status: done - <null>
  2235. Apr 9 20:32:02 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_mount2d ]
  2236. Apr 9 20:32:02 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
  2237. Apr 9 20:32:08 node2 crmd: [30192]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
  2238. Apr 9 20:32:08 node2 crmd: [30192]: info: notify_deleted: Notifying 4463_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
  2239. Apr 9 20:32:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
  2240. Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-2458
  2241. Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/37, version=5.119.15): ok (rc=0)
  2242. Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=local/crmd/1379, version=5.119.16): ok (rc=0)
  2243. Apr 9 20:32:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
  2244. Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-2459
  2245. Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_failure_0, magic=0:0;10:242:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.16) : Resource op removal
  2246. Apr 9 20:32:08 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  2247. Apr 9 20:32:08 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
  2248. Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1382: Requesting the current CIB: S_POLICY_ENGINE
  2249. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <cib admin_epoch="5" epoch="119" num_updates="16" >
  2250. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <configuration >
  2251. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <crm_config >
  2252. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
  2253. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <nvpair value="1334021518" id="cib-bootstrap-options-last-lrm-refresh" />
  2254. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - </cluster_property_set>
  2255. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - </crm_config>
  2256. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - </configuration>
  2257. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - </cib>
  2258. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <cib epoch="120" num_updates="1" admin_epoch="5" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" update-origin="node2" update-client="crmd" cib-last-written="Mon Apr 9 20:31:58 2012" have-quorum="1" dc-uuid="645e09b4-aee5-4cec-a241-8bd4e03a78c3" >
  2259. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <configuration >
  2260. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <crm_config >
  2261. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
  2262. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1334021528" />
  2263. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + </cluster_property_set>
  2264. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + </crm_config>
  2265. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + </configuration>
  2266. Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + </cib>
  2267. Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1381, version=5.120.1): ok (rc=0)
  2268. Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1383, version=5.120.1): ok (rc=0)
  2269. Apr 9 20:32:08 node2 crmd: [30192]: info: delete_resource: Removing resource p_drbd_vmstore:1 for 4463_mount2_resource (internal) on node1
  2270. Apr 9 20:32:08 node2 crmd: [30192]: info: lrm_remove_deleted_op: Removing op p_drbd_vmstore:1_monitor_10000:802 for deleted resource p_drbd_vmstore:1
  2271. Apr 9 20:32:08 node2 crmd: [30192]: info: notify_deleted: Notifying 4463_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
  2272. Apr 9 20:32:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
  2273. Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-2460
  2274. Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1384, version=5.120.2): ok (rc=0)
  2275. Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.120.1) : Non-status change
  2276. Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1382, ref=pe_calc-dc-1334021528-2461, seq=19, quorate=1
  2277. Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;9:240:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.120.2) : Resource op removal
  2278. Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;9:240:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.120.2) : Resource op removal
  2279. Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1387: Requesting the current CIB: S_POLICY_ENGINE
  2280. Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1388: Requesting the current CIB: S_POLICY_ENGINE
  2281. Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1389: Requesting the current CIB: S_POLICY_ENGINE
  2282. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  2283. Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1386, version=5.120.3): ok (rc=0)
  2284. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:0_last_failure_0 found resource p_drbd_vmstore:0 active on node1
  2285. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
  2286. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
  2287. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
  2288. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
  2289. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
  2290. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
  2291. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
  2292. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
  2293. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
  2294. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
  2295. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
  2296. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
  2297. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
  2298. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
  2299. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
  2300. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
  2301. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
  2302. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
  2303. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  2304. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  2305. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
  2306. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
  2307. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  2308. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  2309. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
  2310. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
  2311. Apr 9 20:32:08 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
  2312. Apr 9 20:32:08 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
  2313. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
  2314. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
  2315. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
  2316. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
  2317. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
  2318. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
  2319. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
  2320. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:0#011(Slave node1)
  2321. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
  2322. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
  2323. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
  2324. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
  2325. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
  2326. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
  2327. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
  2328. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
  2329. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
  2330. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
  2331. Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1389, ref=pe_calc-dc-1334021528-2462, seq=19, quorate=1
  2332. Apr 9 20:32:08 node2 crmd: [30192]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
  2333. Apr 9 20:32:08 node2 crmd: [30192]: info: config_query_callback: Checking for expired actions every 900000ms
  2334. Apr 9 20:32:08 node2 crmd: [30192]: info: handle_response: pe_calc calculation pe_calc-dc-1334021528-2461 is obsolete
  2335. Apr 9 20:32:08 node2 pengine: [30361]: ERROR: process_pe_message: Transition 244: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-18.bz2
  2336. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
  2337. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:0_last_failure_0 found resource p_drbd_vmstore:0 active on node1
  2338. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
  2339. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
  2340. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
  2341. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
  2342. Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
  2343. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
  2344. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
  2345. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
  2346. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
  2347. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
  2348. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
  2349. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
  2350. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
  2351. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
  2352. Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
  2353. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
  2354. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
  2355. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:0 on node1
  2356. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
  2357. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:0 on node1
  2358. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
  2359. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:0 on node1
  2360. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
  2361. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:0 on node1
  2362. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
  2363. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
  2364. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:1 on node1
  2365. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
  2366. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:1 on node1
  2367. Apr 9 20:32:08 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
  2368. Apr 9 20:32:08 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
  2369. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node1
  2370. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_fs_vmstore on node1
  2371. Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_vm on node1
  2372. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
  2373. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
  2374. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
  2375. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
  2376. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
  2377. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
  2378. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Promote p_drbd_vmstore:0#011(Slave -> Master node1)
  2379. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:1#011(node2)
  2380. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
  2381. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
  2382. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
  2383. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Promote p_drbd_mount1:0#011(Stopped -> Master node1)
  2384. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount1:1#011(Master -> Slave node2)
  2385. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount2:0#011(Master -> Slave node2)
  2386. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
  2387. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Promote p_drbd_mount2:1#011(Stopped -> Master node1)
  2388. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Restart p_libvirt-bin#011(Started node1)
  2389. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Move p_fs_vmstore#011(Started node2 -> node1)
  2390. Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Move p_vm#011(Started node2 -> node1)
  2391. Apr 9 20:32:08 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2392. Apr 9 20:32:08 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 245: 114 actions in 114 synapses
  2393. Apr 9 20:32:08 node2 crmd: [30192]: info: do_te_invoke: Processing graph 245 (ref=pe_calc-dc-1334021528-2462) derived from /var/lib/pengine/pe-error-19.bz2
  2394. Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 27 fired and confirmed
  2395. Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
  2396. Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 1: cancel p_drbd_vmstore:0_monitor_20000 on node1
  2397. Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 10: monitor p_drbd_vmstore:1_monitor_0 on node2 (local)
  2398. Apr 9 20:32:08 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=10:245:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_monitor_0 )
  2399. Apr 9 20:32:08 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 probe[805] (pid 25288)
  2400. Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
  2401. Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 100 fired and confirmed
  2402. Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 131 fired and confirmed
  2403. Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 147 fired and confirmed
  2404. Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 159: notify p_drbd_vmstore:0_pre_notify_start_0 on node1
  2405. Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 179: notify p_drbd_mount1:1_pre_notify_demote_0 on node2 (local)
  2406. Apr 9 20:32:08 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=179:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
  2407. Apr 9 20:32:08 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[806] (pid 25289)
  2408. Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 190: notify p_drbd_mount2:0_pre_notify_demote_0 on node2 (local)
  2409. Apr 9 20:32:08 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=190:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
  2410. Apr 9 20:32:08 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[807] (pid 25290)
  2411. Apr 9 20:32:08 node2 lrmd: [30189]: info: operation notify[806] on p_drbd_mount1:1 for client 30192: pid 25289 exited with return code 0
  2412. Apr 9 20:32:08 node2 lrmd: [30189]: info: operation notify[807] on p_drbd_mount2:0 for client 30192: pid 25290 exited with return code 0
  2413. Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 179:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021528-2468
  2414. Apr 9 20:32:08 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021528-2468 from node2
  2415. Apr 9 20:32:08 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (179) confirmed on node2 (rc=0)
  2416. Apr 9 20:32:08 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=806, rc=0, cib-update=0, confirmed=true) ok
  2417. Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 190:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021528-2469
  2418. Apr 9 20:32:08 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021528-2469 from node2
  2419. Apr 9 20:32:08 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (190) confirmed on node2 (rc=0)
  2420. Apr 9 20:32:08 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=807, rc=0, cib-update=0, confirmed=true) ok
  2421. Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 101 fired and confirmed
  2422. Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 132 fired and confirmed
  2423. Apr 9 20:32:08 node2 pengine: [30361]: ERROR: process_pe_message: Transition 245: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-19.bz2
  2424. Apr 9 20:32:08 node2 lrmd: [30189]: info: operation monitor[805] on p_drbd_vmstore:1 for client 30192: pid 25288 exited with return code 8
  2425. Apr 9 20:32:08 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_0 (call=805, rc=8, cib-update=1391, confirmed=true) master
  2426. Apr 9 20:32:08 node2 crmd: [30192]: WARN: status_from_rc: Action 10 (p_drbd_vmstore:1_monitor_0) on node2 failed (target: 7 vs. rc: 8): Error
  2427. Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;10:245:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.120.4) : Event failed
  2428. Apr 9 20:32:08 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1
  2429. Apr 9 20:32:08 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
  2430. Apr 9 20:32:08 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_monitor_0 (10) confirmed on node2 (rc=4)
  2431. Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 9: probe_complete probe_complete on node2 (local) - no waiting
  2432. Apr 9 20:32:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021528-16 from node1