- Apr 9 20:21:30 node2 heartbeat: [30097]: info: Heartbeat restart on node node1
- Apr 9 20:21:30 node2 heartbeat: [30097]: info: Link node1:br0 up.
- Apr 9 20:21:30 node2 heartbeat: [30097]: info: Status update for node node1: status init
- Apr 9 20:21:30 node2 heartbeat: [30097]: info: Link node1:br1 up.
- Apr 9 20:21:30 node2 crmd: [30192]: notice: crmd_ha_status_callback: Status update: Node node1 now has status [init]
- Apr 9 20:21:30 node2 heartbeat: [30097]: info: Status update for node node1: status up
- Apr 9 20:21:30 node2 crmd: [30192]: info: crm_update_peer_proc: node1.ais is now online
- Apr 9 20:21:30 node2 crmd: [30192]: notice: crmd_ha_status_callback: Status update: Node node1 now has status [up]
- Apr 9 20:21:31 node2 heartbeat: [30097]: debug: get_delnodelist: delnodelist=
- Apr 9 20:21:31 node2 heartbeat: [30097]: info: Status update for node node1: status active
- Apr 9 20:21:31 node2 crmd: [30192]: notice: crmd_ha_status_callback: Status update: Node node1 now has status [active]
- Apr 9 20:21:31 node2 cib: [30188]: info: cib_client_status_callback: Status update: Client node1/cib now has status [join]
- Apr 9 20:21:32 node2 heartbeat: [30097]: WARN: 1 lost packet(s) for [node1] [11:13]
- Apr 9 20:21:32 node2 heartbeat: [30097]: info: No pkts missing from node1!
- Apr 9 20:21:32 node2 crmd: [30192]: notice: crmd_client_status_callback: Status update: Client node1/crmd now has status [online] (DC=true)
- Apr 9 20:21:32 node2 crmd: [30192]: info: crm_update_peer_proc: node1.crmd is now online
- Apr 9 20:21:32 node2 crmd: [30192]: notice: crmd_peer_update: Status update: Client node1/crmd now has status [online] (DC=true)
- Apr 9 20:21:32 node2 crmd: [30192]: info: update_dc: Unset DC node2
- Apr 9 20:21:32 node2 crmd: [30192]: info: do_dc_join_offer_all: join-36: Waiting on 2 outstanding join acks
- Apr 9 20:21:33 node2 crmd: [30192]: info: update_dc: Set DC to node2 (3.0.5)
- Apr 9 20:21:34 node2 heartbeat: [30097]: WARN: 1 lost packet(s) for [node1] [16:18]
- Apr 9 20:21:34 node2 heartbeat: [30097]: info: No pkts missing from node1!
- Apr 9 20:21:34 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.116.28): ok (rc=0)
- Apr 9 20:21:35 node2 ccm: [30187]: debug: quorum plugin: majority
- Apr 9 20:21:35 node2 ccm: [30187]: debug: cluster:linux-ha, member_count=3, member_quorum_votes=300
- Apr 9 20:21:35 node2 ccm: [30187]: debug: total_node_count=3, total_quorum_votes=300
- Apr 9 20:21:35 node2 cib: [30188]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
- Apr 9 20:21:35 node2 crmd: [30192]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
- Apr 9 20:21:35 node2 ccm: [30187]: debug: quorum plugin: majority
- Apr 9 20:21:35 node2 cib: [30188]: info: mem_handle_event: no mbr_track info
- Apr 9 20:21:35 node2 crmd: [30192]: info: mem_handle_event: no mbr_track info
- Apr 9 20:21:35 node2 ccm: [30187]: debug: cluster:linux-ha, member_count=3, member_quorum_votes=300
- Apr 9 20:21:35 node2 cib: [30188]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
- Apr 9 20:21:35 node2 crmd: [30192]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
- Apr 9 20:21:35 node2 ccm: [30187]: debug: total_node_count=3, total_quorum_votes=300
- Apr 9 20:21:35 node2 cib: [30188]: info: mem_handle_event: instance=19, nodes=3, new=1, lost=0, n_idx=0, new_idx=3, old_idx=6
- Apr 9 20:21:35 node2 crmd: [30192]: info: mem_handle_event: instance=19, nodes=3, new=1, lost=0, n_idx=0, new_idx=3, old_idx=6
- Apr 9 20:21:35 node2 cib: [30188]: info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=19)
- Apr 9 20:21:35 node2 crmd: [30192]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=19)
- Apr 9 20:21:35 node2 cib: [30188]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=19 seen=19 proc=00000000000000000000000000000302
- Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: NEW MEMBERSHIP: trans=19, nodes=3, new=1, lost=0 n_idx=0, new_idx=3, old_idx=6
- Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: #011CURRENT: node2 [nodeid=2, born=1]
- Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: #011CURRENT: quorumnode [nodeid=0, born=17]
- Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: #011CURRENT: node1 [nodeid=1, born=19]
- Apr 9 20:21:35 node2 crmd: [30192]: info: ccm_event_detail: #011NEW: node1 [nodeid=1, born=19]
- Apr 9 20:21:35 node2 crmd: [30192]: info: ais_status_callback: status: node1 is now member (was lost)
- Apr 9 20:21:35 node2 crmd: [30192]: WARN: match_down_event: No match for shutdown action on node1
- Apr 9 20:21:35 node2 crmd: [30192]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=19 seen=19 proc=00000000000000000000000000000203
- Apr 9 20:21:35 node2 crmd: [30192]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
- Apr 9 20:21:35 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/1324, version=5.116.29): ok (rc=0)
- Apr 9 20:21:35 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/transient_attributes (origin=local/crmd/1325, version=5.116.30): ok (rc=0)
- Apr 9 20:21:36 node2 crmd: [30192]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
- Apr 9 20:21:36 node2 crmd: [30192]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/transient_attributes": ok (rc=0)
- Apr 9 20:21:36 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/1326, version=5.116.31): ok (rc=0)
- Apr 9 20:21:36 node2 crmd: [30192]: info: update_dc: Unset DC node2
- Apr 9 20:21:36 node2 crmd: [30192]: info: do_dc_join_offer_all: A new node joined the cluster
- Apr 9 20:21:36 node2 crmd: [30192]: info: join_make_offer: Making join offers based on membership 19
- Apr 9 20:21:36 node2 crmd: [30192]: info: do_dc_join_offer_all: join-37: Waiting on 3 outstanding join acks
- Apr 9 20:21:36 node2 crmd: [30192]: info: update_dc: Set DC to node2 (3.0.5)
- Apr 9 20:22:52 node2 crmd: [30192]: ERROR: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
- Apr 9 20:22:52 node2 crmd: [30192]: info: crm_timer_popped: Welcomed: 1, Integrated: 2
- Apr 9 20:22:52 node2 crmd: [30192]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Apr 9 20:22:52 node2 crmd: [30192]: WARN: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
- Apr 9 20:22:52 node2 crmd: [30192]: WARN: do_state_transition: 1 cluster nodes failed to respond to the join offer.
- Apr 9 20:22:52 node2 crmd: [30192]: info: ghash_print_node: Welcome reply not received from: quorumnode 37
- Apr 9 20:22:52 node2 crmd: [30192]: info: do_dc_join_finalize: join-37: Syncing the CIB from node2 to the rest of the cluster
- Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/1329, version=5.116.32): ok (rc=0)
- Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/1330, version=5.116.33): ok (rc=0)
- Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/1331, version=5.116.34): ok (rc=0)
- Apr 9 20:22:52 node2 lrmd: [30189]: info: stonith_api_device_metadata: looking up external/webpowerswitch/heartbeat metadata
- Apr 9 20:22:52 node2 crmd: [30192]: info: do_dc_join_ack: join-37: Updating node state to member for node1
- Apr 9 20:22:52 node2 crmd: [30192]: info: do_dc_join_ack: join-37: Updating node state to member for node2
- Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/transient_attributes (origin=node1/crmd/6, version=5.116.35): ok (rc=0)
- Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/1332, version=5.116.36): ok (rc=0)
- Apr 9 20:22:52 node2 crmd: [30192]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
- Apr 9 20:22:52 node2 crmd: [30192]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Apr 9 20:22:52 node2 crmd: [30192]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
- Apr 9 20:22:52 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/1334, version=5.116.38): ok (rc=0)
- Apr 9 20:22:53 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
- Apr 9 20:22:53 node2 crmd: [30192]: info: crm_update_quorum: Updating quorum status to true (call=1338)
- Apr 9 20:22:53 node2 crmd: [30192]: info: abort_transition_graph: do_te_invoke:167 - Triggered transition abort (complete=1) : Peer Cancelled
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_pe_invoke: Query 1339: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-p_drbd_vmstore:0 (1334001948)
- Apr 9 20:22:53 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_mount1:1_last_failure_0, magic=0:8;9:216:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.116.38) : Resource op removal
- Apr 9 20:22:53 node2 crmd: [30192]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
- Apr 9 20:22:53 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/1336, version=5.116.40): ok (rc=0)
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_update_diff: Detected LRM refresh - 11 resources updated: Skipping all resource events
- Apr 9 20:22:53 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:251 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.116.39) : LRM Refresh
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_pe_invoke: Query 1340: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_pe_invoke: Query 1341: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:22:53 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/1338, version=5.116.42): ok (rc=0)
- Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount2:0 (10000)
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1341, ref=pe_calc-dc-1334020973-2367, seq=19, quorate=1
- Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: p_ping (2000)
- Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount1:0 (10000)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active on node2
- Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:22:53 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:1 (10000)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:22:53 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_libvirt-bin#011(Started node2)
- Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount1:1 (10000)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_fs_vmstore#011(Started node2)
- Apr 9 20:22:53 node2 pengine: [30361]: notice: LogActions: Leave p_vm#011(Started node2)
- Apr 9 20:22:53 node2 attrd: [30191]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:22:53 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 233: 53 actions in 53 synapses
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_te_invoke: Processing graph 233 (ref=pe_calc-dc-1334020973-2367) derived from /var/lib/pengine/pe-input-945.bz2
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 13: monitor p_sysadmin_notify:1_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 38 fired and confirmed
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 14: monitor p_ping:1_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 15: monitor p_drbd_vmstore:0_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 58 fired and confirmed
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 16: monitor stonithnode1_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 17: monitor stonithnode2_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 18: monitor p_drbd_mount1:0_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 91 fired and confirmed
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 19: monitor p_drbd_mount2:1_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 121 fired and confirmed
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 20: monitor p_libvirt-bin_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 21: monitor p_fs_vmstore_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 22: monitor p_vm_monitor_0 on node1
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 164: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=164:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
- Apr 9 20:22:53 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[778] (pid 303)
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 175: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=175:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:22:53 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[779] (pid 304)
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_rsc_command: Initiating action 185: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
- Apr 9 20:22:53 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=185:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:22:53 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[780] (pid 305)
- Apr 9 20:22:53 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:22:53 node2 lrmd: [30189]: info: operation notify[778] on p_drbd_vmstore:1 for client 30192: pid 303 exited with return code 0
- Apr 9 20:22:53 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 164:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020973-2381
- Apr 9 20:22:53 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020973-2381 from node2
- Apr 9 20:22:53 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (164) confirmed on node2 (rc=0)
- Apr 9 20:22:53 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=778, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 59 fired and confirmed
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 56 fired and confirmed
- Apr 9 20:22:53 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:22:53 node2 lrmd: [30189]: info: operation notify[779] on p_drbd_mount1:1 for client 30192: pid 304 exited with return code 0
- Apr 9 20:22:53 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 175:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020973-2382
- Apr 9 20:22:53 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020973-2382 from node2
- Apr 9 20:22:53 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (175) confirmed on node2 (rc=0)
- Apr 9 20:22:53 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=779, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 92 fired and confirmed
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 89 fired and confirmed
- Apr 9 20:22:53 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:22:53 node2 lrmd: [30189]: info: operation notify[780] on p_drbd_mount2:0 for client 30192: pid 305 exited with return code 0
- Apr 9 20:22:53 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 185:233:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020973-2383
- Apr 9 20:22:53 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020973-2383 from node2
- Apr 9 20:22:53 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (185) confirmed on node2 (rc=0)
- Apr 9 20:22:53 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=780, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 122 fired and confirmed
- Apr 9 20:22:53 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 119 fired and confirmed
- Apr 9 20:22:53 node2 pengine: [30361]: notice: process_pe_message: Transition 233: PEngine Input stored in: /var/lib/pengine/pe-input-945.bz2
- Apr 9 20:22:54 node2 crmd: [30192]: info: match_graph_event: Action stonithnode2_monitor_0 (17) confirmed on node1 (rc=0)
- Apr 9 20:22:54 node2 crmd: [30192]: info: match_graph_event: Action stonithnode1_monitor_0 (16) confirmed on node1 (rc=0)
- Apr 9 20:22:54 node2 crmd: [30192]: info: match_graph_event: Action p_sysadmin_notify:1_monitor_0 (13) confirmed on node1 (rc=0)
- Apr 9 20:22:54 node2 crmd: [30192]: info: match_graph_event: Action p_ping:1_monitor_0 (14) confirmed on node1 (rc=0)
- Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:0_monitor_0 (15) confirmed on node1 (rc=0)
- Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:0_monitor_0 (18) confirmed on node1 (rc=0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: status_from_rc: Action 20 (p_libvirt-bin_monitor_0) on node1 failed (target: 7 vs. rc: 0): Error
- Apr 9 20:22:55 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_libvirt-bin_last_failure_0, magic=0:0;20:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.116.57) : Event failed
- Apr 9 20:22:55 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1
- Apr 9 20:22:55 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
- Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_libvirt-bin_monitor_0 (20) confirmed on node1 (rc=4)
- Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_fs_vmstore_monitor_0 (21) confirmed on node1 (rc=0)
- Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_vm_monitor_0 (22) confirmed on node1 (rc=0)
- Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:1_monitor_0 (19) confirmed on node1 (rc=0)
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 12: probe_complete probe_complete on node1 - no waiting
- Apr 9 20:22:55 node2 crmd: [30192]: info: run_graph: ====================================================
- Apr 9 20:22:55 node2 crmd: [30192]: notice: run_graph: Transition 233 (Complete=25, Pending=0, Fired=0, Skipped=11, Incomplete=17, Source=/var/lib/pengine/pe-input-945.bz2): Stopped
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_graph_trigger: Transition 233 is now complete
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_mount2d ]
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_pe_invoke: Query 1342: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1342, ref=pe_calc-dc-1334020975-2385, seq=19, quorate=1
- Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active on node2
- Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:22:55 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:22:55 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:22:55 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:22:55 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
- Apr 9 20:22:55 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:22:55 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 234: 58 actions in 58 synapses
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_te_invoke: Processing graph 234 (ref=pe_calc-dc-1334020975-2385) derived from /var/lib/pengine/pe-error-8.bz2
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 28 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 111 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 142 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 12: probe_complete probe_complete on node1 - no waiting
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 155: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=155:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
- Apr 9 20:22:55 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[781] (pid 528)
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 166: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=166:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:22:55 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[782] (pid 529)
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_rsc_command: Initiating action 176: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=176:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:22:55 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[783] (pid 530)
- Apr 9 20:22:55 node2 lrmd: [30189]: info: operation notify[781] on p_drbd_vmstore:1 for client 30192: pid 528 exited with return code 0
- Apr 9 20:22:55 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:22:55 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:22:55 node2 lrmd: [30189]: info: operation notify[782] on p_drbd_mount1:1 for client 30192: pid 529 exited with return code 0
- Apr 9 20:22:55 node2 lrmd: [30189]: info: operation notify[783] on p_drbd_mount2:0 for client 30192: pid 530 exited with return code 0
- Apr 9 20:22:55 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
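The notify actions above show the DRBD resource agent reasserting the resync rate via `drbdsetup ... --rate=34M`. Assuming a conventional drbd.conf, that figure would come from a `syncer` section like this illustrative excerpt (not taken from this cluster's actual configuration):

```
common {
  syncer {
    rate 34M;   # matches the --rate=34M passed to drbdsetup above
  }
}
```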
- Apr 9 20:22:55 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 155:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020975-2390
- Apr 9 20:22:55 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020975-2390 from node2
- Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (155) confirmed on node2 (rc=0)
- Apr 9 20:22:55 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=781, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:22:55 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 166:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020975-2391
- Apr 9 20:22:55 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020975-2391 from node2
- Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (166) confirmed on node2 (rc=0)
- Apr 9 20:22:55 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=782, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:22:55 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 176:234:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334020975-2392
- Apr 9 20:22:55 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334020975-2392 from node2
- Apr 9 20:22:55 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (176) confirmed on node2 (rc=0)
- Apr 9 20:22:55 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=783, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 49 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 82 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 79 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 109 fired and confirmed
- Apr 9 20:22:55 node2 crmd: [30192]: notice: run_graph: ====================================================
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: run_graph: Transition 234 (Complete=16, Pending=0, Fired=0, Skipped=0, Incomplete=42, Source=/var/lib/pengine/pe-error-8.bz2): Terminated
- Apr 9 20:22:55 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
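A quick way to sanity-check a `run_graph` summary like the WARN above is to pull its counters out with a short script. This is a log-parsing sketch written against the exact message format shown here (field names taken verbatim); it confirms that the completed and incomplete actions account for all 58 actions `unpack_graph` reported:

```python
import re

# The run_graph summary line from the log above, verbatim apart from
# syslog spacing.
line = ("Apr 9 20:22:55 node2 crmd: [30192]: WARN: run_graph: "
        "Transition 234 (Complete=16, Pending=0, Fired=0, Skipped=0, "
        "Incomplete=42, Source=/var/lib/pengine/pe-error-8.bz2): Terminated")

def parse_run_graph(line):
    """Extract the transition number and action counters from a crmd
    run_graph summary line."""
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", line)}
    transition = int(re.search(r"Transition (\d+)", line).group(1))
    return transition, counters

transition, counters = parse_run_graph(line)
# 16 completed + 42 incomplete = the 58 actions unpack_graph reported.
assert counters["Complete"] + counters["Incomplete"] == 58
```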
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Graph 234 (58 actions in 58 synapses): batch-limit=30 jobs, network-delay=60000ms
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 27]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 26]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 26]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 28]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 29]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 26]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 28]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 35]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 34]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 34]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 36]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 37]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 34]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 36]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 8 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 9 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 41]: Pending (id: p_drbd_vmstore:0_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 40]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 51]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 40]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 46]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 11 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 12 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 13 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 51]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 50]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 47]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Completed (id: ms_drbd_vmstore_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 15 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 16 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 17 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 47]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 40]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 46]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 18 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 19 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 20 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 74]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 22 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 24 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 25 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 26 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 80]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 27 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 28 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 29 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 80]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 30 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 31 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 32 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 33 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 34 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 35 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 36 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 112]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 38 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 39 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 41 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 42 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 43 was confirmed (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 44 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 45 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 46 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 47 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 50 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 51 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 52 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 53 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 54 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 10]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 55 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 56 was confirmed (priority: 1000000)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_graph: Synapse 57 is pending (priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: [Action 9]: Pending (id: all_stopped, type: pseduo, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:22:55 node2 crmd: [30192]: info: te_graph_trigger: Transition 234 is now complete
- Apr 9 20:22:55 node2 crmd: [30192]: info: notify_crmd: Transition 234 status: done - <null>
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Apr 9 20:22:55 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
- Apr 9 20:22:55 node2 pengine: [30361]: ERROR: process_pe_message: Transition 234: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-8.bz2
- Apr 9 20:23:05 node2 lrmd: [30189]: WARN: G_SIG_dispatch: Dispatch function for SIGCHLD was delayed 1000 ms (> 100 ms) before being called (GSource: 0xc9cf70)
- Apr 9 20:23:05 node2 lrmd: [30189]: info: G_SIG_dispatch: started at 4348256773 should have started at 4348256673
- Apr 9 20:23:06 node2 cib: [30188]: info: cib_stats: Processed 294 operations (850.00us average, 0% utilization) in the last 10min
- Apr 9 20:30:08 node2 crmd: [30192]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
- Apr 9 20:30:08 node2 crmd: [30192]: info: notify_deleted: Notifying 3951_crm_resource on node1 that p_drbd_vmstore:0 was deleted
- Apr 9 20:30:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
- Apr 9 20:30:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021408-2393
- Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/17, version=5.116.64): ok (rc=0)
- Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=local/crmd/1343, version=5.116.65): ok (rc=0)
- Apr 9 20:30:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
- Apr 9 20:30:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021408-2394
- Apr 9 20:30:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_0, magic=0:7;15:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.116.65) : Resource op removal
- Apr 9 20:30:08 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Apr 9 20:30:08 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1346: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <cib admin_epoch="5" epoch="116" num_updates="65" >
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <configuration >
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <crm_config >
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - <nvpair value="1334017005" id="cib-bootstrap-options-last-lrm-refresh" />
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - </cluster_property_set>
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - </crm_config>
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - </configuration>
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: - </cib>
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <cib epoch="117" num_updates="1" admin_epoch="5" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" update-origin="node2" update-client="cibadmin" cib-last-written="Mon Apr 9 19:19:59 2012" have-quorum="1" dc-uuid="645e09b4-aee5-4cec-a241-8bd4e03a78c3" >
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <configuration >
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <crm_config >
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1334021408" />
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + </cluster_property_set>
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + </crm_config>
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + </configuration>
- Apr 9 20:30:08 node2 cib: [30188]: info: cib:diff: + </cib>
- Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1345, version=5.117.1): ok (rc=0)
- Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1347, version=5.117.1): ok (rc=0)
- Apr 9 20:30:08 node2 crmd: [30192]: info: delete_resource: Removing resource p_drbd_vmstore:1 for 3951_mount2_resource (internal) on node1
- Apr 9 20:30:08 node2 crmd: [30192]: info: lrm_remove_deleted_op: Removing op p_drbd_vmstore:1_monitor_10000:684 for deleted resource p_drbd_vmstore:1
- Apr 9 20:30:08 node2 crmd: [30192]: info: notify_deleted: Notifying 3951_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
- Apr 9 20:30:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
- Apr 9 20:30:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021408-2395
- Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1348, version=5.117.2): ok (rc=0)
- Apr 9 20:30:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.117.1) : Non-status change
- Apr 9 20:30:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1350, version=5.117.3): ok (rc=0)
- Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1346, ref=pe_calc-dc-1334021408-2396, seq=19, quorate=1
- Apr 9 20:30:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_0, magic=0:0;34:195:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.117.2) : Resource op removal
- Apr 9 20:30:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_0, magic=0:0;34:195:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.117.2) : Resource op removal
- Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1351: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1352: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:30:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1353: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active on node2
- Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:30:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:30:08 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:30:08 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:30:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:30:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1353, ref=pe_calc-dc-1334021409-2397, seq=19, quorate=1
- Apr 9 20:30:09 node2 crmd: [30192]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:30:09 node2 crmd: [30192]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:30:09 node2 crmd: [30192]: info: handle_response: pe_calc calculation pe_calc-dc-1334021408-2396 is obsolete
- Apr 9 20:30:09 node2 pengine: [30361]: ERROR: process_pe_message: Transition 235: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-9.bz2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:1#011(node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount1:1#011(Master -> Slave node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount2:0#011(Master -> Slave node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Stop p_libvirt-bin#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Stop p_libvirt-bin#011(node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Stop p_fs_vmstore#011(node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Stop p_vm#011(node2)
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:30:09 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 236: 74 actions in 74 synapses
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_te_invoke: Processing graph 236 (ref=pe_calc-dc-1334021409-2397) derived from /var/lib/pengine/pe-error-10.bz2
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 28 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 11: monitor p_drbd_vmstore:1_monitor_0 on node2 (local)
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=11:236:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_monitor_0 )
- Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 probe[784] (pid 20250)
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 1: cancel p_drbd_mount1:1_monitor_10000 on node2 (local)
- Apr 9 20:30:09 node2 lrmd: [30189]: info: cancel_op: operation monitor[765] on p_drbd_mount1:1 for client 30192, its parameters: CRM_meta_clone=[1] drbd_resource=[tools] CRM_meta_master_node_max=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_master_max=[1] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.5] CRM_meta_name=[monitor] CRM_meta_role=[Master] CRM_meta_interval=[10000] CRM_meta_timeout=[20000] cancelled
- Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_monitor_10000 from 1:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2400
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2400 from node2
- Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_monitor_10000 (1) confirmed on node2 (rc=0)
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 97 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 7: cancel p_drbd_mount2:0_monitor_10000 on node2 (local)
- Apr 9 20:30:09 node2 lrmd: [30189]: info: cancel_op: operation monitor[692] on p_drbd_mount2:0 for client 30192, its parameters: CRM_meta_clone=[0] drbd_resource=[crm] CRM_meta_master_node_max=[1] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_master_max=[1] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.5] CRM_meta_name=[monitor] CRM_meta_role=[Master] CRM_meta_interval=[10000] CRM_meta_timeout=[20000] cancelled
- Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_monitor_10000 from 7:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2402
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2402 from node2
- Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_monitor_10000 (7) confirmed on node2 (rc=0)
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 127 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 137 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_monitor_10000 (call=765, status=1, cib-update=0, confirmed=true) Cancelled
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_monitor_10000 (call=692, status=1, cib-update=0, confirmed=true) Cancelled
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 163: notify p_drbd_mount1:1_pre_notify_demote_0 on node2 (local)
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=163:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[785] (pid 20253)
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 170: notify p_drbd_mount2:0_pre_notify_demote_0 on node2 (local)
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=170:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[786] (pid 20254)
- Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[786] on p_drbd_mount2:0 for client 30192: pid 20254 exited with return code 0
- Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 170:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2405
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2405 from node2
- Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (170) confirmed on node2 (rc=0)
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=786, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 128 fired and confirmed
- Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[785] on p_drbd_mount1:1 for client 30192: pid 20253 exited with return code 0
- Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 163:236:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2406
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2406 from node2
- Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (163) confirmed on node2 (rc=0)
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=785, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 98 fired and confirmed
- Apr 9 20:30:09 node2 lrmd: [30189]: info: operation monitor[784] on p_drbd_vmstore:1 for client 30192: pid 20250 exited with return code 8
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_0 (call=784, rc=8, cib-update=1357, confirmed=true) master
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: status_from_rc: Action 11 (p_drbd_vmstore:1_monitor_0) on node2 failed (target: 7 vs. rc: 8): Error
- Apr 9 20:30:09 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;11:236:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.117.6) : Event failed
- Apr 9 20:30:09 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1
- Apr 9 20:30:09 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
- Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_monitor_0 (11) confirmed on node2 (rc=4)
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 10: probe_complete probe_complete on node2 (local) - no waiting
- Apr 9 20:30:09 node2 crmd: [30192]: info: run_graph: ====================================================
- Apr 9 20:30:09 node2 crmd: [30192]: notice: run_graph: Transition 236 (Complete=16, Pending=0, Fired=0, Skipped=33, Incomplete=25, Source=/var/lib/pengine/pe-error-10.bz2): Stopped
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_graph_trigger: Transition 236 is now complete
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_mount2d ]
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_pe_invoke: Query 1358: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1358, ref=pe_calc-dc-1334021409-2408, seq=19, quorate=1
- Apr 9 20:30:09 node2 pengine: [30361]: ERROR: process_pe_message: Transition 236: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-10.bz2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:30:09 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:30:09 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:30:09 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
- Apr 9 20:30:09 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: destroy_action: Cancelling timer for action 1 (src=2331)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: destroy_action: Cancelling timer for action 7 (src=2332)
- Apr 9 20:30:09 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 237: 60 actions in 60 synapses
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_te_invoke: Processing graph 237 (ref=pe_calc-dc-1334021409-2408) derived from /var/lib/pengine/pe-error-11.bz2
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 33 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 80 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 111 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 142 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 155: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=155:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
- Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[787] (pid 20381)
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 166: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=166:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[788] (pid 20382)
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_rsc_command: Initiating action 176: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=176:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:30:09 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[789] (pid 20383)
- Apr 9 20:30:09 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[787] on p_drbd_vmstore:1 for client 30192: pid 20381 exited with return code 0
- Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 155:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2412
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2412 from node2
- Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (155) confirmed on node2 (rc=0)
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=787, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
- Apr 9 20:30:09 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[788] on p_drbd_mount1:1 for client 30192: pid 20382 exited with return code 0
- Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 166:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2413
- Apr 9 20:30:09 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2413 from node2
- Apr 9 20:30:09 node2 lrmd: [30189]: info: operation notify[789] on p_drbd_mount2:0 for client 30192: pid 20383 exited with return code 0
- Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (166) confirmed on node2 (rc=0)
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=788, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 78 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 176:237:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021409-2414
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021409-2414 from node2
- Apr 9 20:30:09 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (176) confirmed on node2 (rc=0)
- Apr 9 20:30:09 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=789, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 109 fired and confirmed
- Apr 9 20:30:09 node2 crmd: [30192]: notice: run_graph: ====================================================
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: run_graph: Transition 237 (Complete=15, Pending=0, Fired=0, Skipped=0, Incomplete=45, Source=/var/lib/pengine/pe-error-11.bz2): Terminated
- Apr 9 20:30:09 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Graph 237 (60 actions in 60 synapses): batch-limit=30 jobs, network-delay=60000ms
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 24]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 26]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 32]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 34]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 8 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 9 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 38]: Pending (id: p_drbd_vmstore:0_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 44]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 11 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 12 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 13 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 43]: Pending (id: p_drbd_vmstore:1_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 15 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 45]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 47]: Completed (id: ms_drbd_vmstore_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 16 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 17 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 18 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 45]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 44]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 19 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 20 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 70]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 22 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 78]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 24 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 25 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 26 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 77]: Pending (id: p_drbd_mount1:1_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 27 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 28 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 81]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 29 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 30 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 31 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 79]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 78]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 32 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 33 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 34 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 35 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 106]: Pending (id: p_drbd_mount2:0_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 36 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 38 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 39 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 112]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 41 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 42 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 43 is pending (priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 44 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 45 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 46 was confirmed (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 47 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 50 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 51 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 52 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 1]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 53 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 54 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 55 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 56 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 57 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 58 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_graph: Synapse 59 is pending (priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: all_stopped, type: pseduo, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:09 node2 crmd: [30192]: info: te_graph_trigger: Transition 237 is now complete
- Apr 9 20:30:09 node2 crmd: [30192]: info: notify_crmd: Transition 237 status: done - <null>
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Apr 9 20:30:09 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
- Apr 9 20:30:09 node2 pengine: [30361]: ERROR: process_pe_message: Transition 237: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-11.bz2
- Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.117.6): ok (rc=0)
- Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.117.6): ok (rc=0)
- Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/18, version=5.117.7): ok (rc=0)
- Apr 9 20:30:10 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_0, magic=0:7;15:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.117.7) : Resource op removal
- Apr 9 20:30:10 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Apr 9 20:30:10 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:30:10 node2 crmd: [30192]: info: do_pe_invoke: Query 1359: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <cib admin_epoch="5" epoch="117" num_updates="7" >
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <configuration >
- Apr 9 20:30:10 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.118.1) : Non-status change
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <crm_config >
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
- Apr 9 20:30:10 node2 crmd: [30192]: info: do_pe_invoke: Query 1360: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - <nvpair value="1334021408" id="cib-bootstrap-options-last-lrm-refresh" />
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - </cluster_property_set>
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - </crm_config>
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - </configuration>
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: - </cib>
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <cib epoch="118" num_updates="1" admin_epoch="5" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" update-origin="node2" update-client="crmd" cib-last-written="Mon Apr 9 20:30:08 2012" have-quorum="1" dc-uuid="645e09b4-aee5-4cec-a241-8bd4e03a78c3" >
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <configuration >
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <crm_config >
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1334021409" />
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + </cluster_property_set>
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + </crm_config>
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + </configuration>
- Apr 9 20:30:10 node2 cib: [30188]: info: cib:diff: + </cib>
- Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node1/crmd/20, version=5.118.1): ok (rc=0)
- Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=node1/crmd/21, version=5.118.2): ok (rc=0)
- Apr 9 20:30:10 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node1/crmd/23, version=5.118.3): ok (rc=0)
- Apr 9 20:30:10 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1360, ref=pe_calc-dc-1334021410-2415, seq=19, quorate=1
- Apr 9 20:30:10 node2 crmd: [30192]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:30:10 node2 crmd: [30192]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:30:10 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:30:10 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:30:10 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
- Apr 9 20:30:10 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
- Apr 9 20:30:10 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:30:10 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 238: 62 actions in 62 synapses
- Apr 9 20:30:10 node2 crmd: [30192]: info: do_te_invoke: Processing graph 238 (ref=pe_calc-dc-1334021410-2415) derived from /var/lib/pengine/pe-error-12.bz2
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 26 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_rsc_command: Initiating action 10: monitor p_drbd_vmstore:0_monitor_0 on node1
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 143 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_rsc_command: Initiating action 156: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:30:10 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=156:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
- Apr 9 20:30:10 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[790] (pid 20485)
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_rsc_command: Initiating action 167: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:30:10 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=167:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:30:10 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[791] (pid 20486)
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_rsc_command: Initiating action 177: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
- Apr 9 20:30:10 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=177:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:30:10 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[792] (pid 20487)
- Apr 9 20:30:10 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:30:10 node2 lrmd: [30189]: info: operation notify[792] on p_drbd_mount2:0 for client 30192: pid 20487 exited with return code 0
- Apr 9 20:30:10 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:30:10 node2 lrmd: [30189]: info: operation notify[791] on p_drbd_mount1:1 for client 30192: pid 20486 exited with return code 0
- Apr 9 20:30:10 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 177:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021410-2420
- Apr 9 20:30:10 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021410-2420 from node2
- Apr 9 20:30:10 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (177) confirmed on node2 (rc=0)
- Apr 9 20:30:10 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=792, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:30:10 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 167:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021410-2421
- Apr 9 20:30:10 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021410-2421 from node2
- Apr 9 20:30:10 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (167) confirmed on node2 (rc=0)
- Apr 9 20:30:10 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=791, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 82 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 79 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 113 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 110 fired and confirmed
- Apr 9 20:30:10 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:30:10 node2 lrmd: [30189]: info: operation notify[790] on p_drbd_vmstore:1 for client 30192: pid 20485 exited with return code 0
- Apr 9 20:30:10 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 156:238:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021410-2422
- Apr 9 20:30:10 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021410-2422 from node2
- Apr 9 20:30:10 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (156) confirmed on node2 (rc=0)
- Apr 9 20:30:10 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=790, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
- Apr 9 20:30:10 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
- Apr 9 20:30:10 node2 pengine: [30361]: ERROR: process_pe_message: Transition 238: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-12.bz2
- Apr 9 20:30:11 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:0_monitor_0 (10) confirmed on node1 (rc=0)
- Apr 9 20:30:11 node2 crmd: [30192]: info: te_rsc_command: Initiating action 9: probe_complete probe_complete on node1 - no waiting
- Apr 9 20:30:11 node2 crmd: [30192]: notice: run_graph: ====================================================
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: run_graph: Transition 238 (Complete=17, Pending=0, Fired=0, Skipped=0, Incomplete=45, Source=/var/lib/pengine/pe-error-12.bz2): Terminated
- Apr 9 20:30:11 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Graph 238 (62 actions in 62 synapses): batch-limit=30 jobs, network-delay=60000ms
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 25]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 24]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 24]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 26]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 27]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 24]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 26]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 33]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 32]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 32]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 34]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 35]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 32]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 34]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 8 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 155]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 9 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 39]: Pending (id: p_drbd_vmstore:0_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 38]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 38]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 45]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 11 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 12 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 157]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 13 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 44]: Pending (id: p_drbd_vmstore:1_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 50]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 15 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 50]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 155]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 157]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 16 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 49]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 46]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Completed (id: ms_drbd_vmstore_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 17 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 18 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 19 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 46]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 38]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 45]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 20 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 71]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 22 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 166]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 73]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 72]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 24 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 25 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 168]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 26 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 27 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 78]: Pending (id: p_drbd_mount1:1_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 28 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 84]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 166]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 168]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 29 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 83]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 80]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 30 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 31 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 32 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 80]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 72]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 33 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 34 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 178]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 35 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 36 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 107]: Pending (id: p_drbd_mount2:0_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 179]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 38 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 109]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 108]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 39 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 115]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 178]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 179]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 41 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 114]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 111]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 42 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 43 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 44 is pending (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 111]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 108]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 45 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 46 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 144]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 47 was confirmed (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 142]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 50 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 51 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 52 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 134]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 53 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 1]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 54 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 55 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 56 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 57 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 58 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 59 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 60 was confirmed (priority: 1000000)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_graph: Synapse 61 is pending (priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: all_stopped, type: pseduo, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:30:11 node2 crmd: [30192]: info: te_graph_trigger: Transition 238 is now complete
- Apr 9 20:30:11 node2 crmd: [30192]: info: notify_mount2d: Transition 238 status: done - <null>
- Apr 9 20:30:11 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_mount2d ]
- Apr 9 20:30:11 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
- Apr 9 20:31:58 node2 crmd: [30192]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
- Apr 9 20:31:58 node2 crmd: [30192]: info: notify_deleted: Notifying 4363_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
- Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-2424
- Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/27, version=5.118.6): ok (rc=0)
- Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=local/crmd/1362, version=5.118.7): ok (rc=0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
- Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-2425
- Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_0, magic=0:7;10:238:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.118.7) : Resource op removal
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1365: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <cib admin_epoch="5" epoch="118" num_updates="7" >
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <configuration >
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <crm_config >
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - <nvpair value="1334021409" id="cib-bootstrap-options-last-lrm-refresh" />
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - </cluster_property_set>
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - </crm_config>
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - </configuration>
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: - </cib>
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <cib epoch="119" num_updates="1" admin_epoch="5" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" update-origin="node1" update-client="crmd" cib-last-written="Mon Apr 9 20:30:10 2012" have-quorum="1" dc-uuid="645e09b4-aee5-4cec-a241-8bd4e03a78c3" >
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <configuration >
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <crm_config >
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1334021518" />
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + </cluster_property_set>
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + </crm_config>
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + </configuration>
- Apr 9 20:31:58 node2 cib: [30188]: info: cib:diff: + </cib>
- Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1364, version=5.119.1): ok (rc=0)
- Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1366, version=5.119.1): ok (rc=0)
- Apr 9 20:31:58 node2 crmd: [30192]: info: delete_resource: Removing resource p_drbd_vmstore:1 for 4363_mount2_resource (internal) on node1
- Apr 9 20:31:58 node2 crmd: [30192]: info: notify_deleted: Notifying 4363_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
- Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-2426
- Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1367, version=5.119.2): ok (rc=0)
- Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.119.1) : Non-status change
- Apr 9 20:31:58 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1369, version=5.119.3): ok (rc=0)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1365, ref=pe_calc-dc-1334021518-2427, seq=19, quorate=1
- Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;11:236:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.2) : Resource op removal
- Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;11:236:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.2) : Resource op removal
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1370: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1371: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1372: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1372, ref=pe_calc-dc-1334021518-2428, seq=19, quorate=1
- Apr 9 20:31:58 node2 crmd: [30192]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:31:58 node2 crmd: [30192]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:31:58 node2 crmd: [30192]: info: handle_response: pe_calc calculation pe_calc-dc-1334021518-2427 is obsolete
- Apr 9 20:31:58 node2 pengine: [30361]: ERROR: process_pe_message: Transition 239: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-13.bz2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:1#011(node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount1:1#011(Master -> Slave node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount2:0#011(Master -> Slave node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Stop p_libvirt-bin#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Stop p_libvirt-bin#011(node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Stop p_fs_vmstore#011(node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Stop p_vm#011(node2)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:31:58 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 240: 72 actions in 72 synapses
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_te_invoke: Processing graph 240 (ref=pe_calc-dc-1334021518-2428) derived from /var/lib/pengine/pe-error-14.bz2
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 26 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 9: monitor p_drbd_vmstore:1_monitor_0 on node2 (local)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=9:240:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_monitor_0 )
- Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 probe[793] (pid 24510)
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 95 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 125 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 135 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 42 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 161: notify p_drbd_mount1:1_pre_notify_demote_0 on node2 (local)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=161:240:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[794] (pid 24511)
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 168: notify p_drbd_mount2:0_pre_notify_demote_0 on node2 (local)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=168:240:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[795] (pid 24514)
- Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[794] on p_drbd_mount1:1 for client 30192: pid 24511 exited with return code 0
- Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[795] on p_drbd_mount2:0 for client 30192: pid 24514 exited with return code 0
- Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 161:240:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2432
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2432 from node2
- Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (161) confirmed on node2 (rc=0)
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=794, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 168:240:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2433
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2433 from node2
- Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (168) confirmed on node2 (rc=0)
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=795, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 96 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 126 fired and confirmed
- Apr 9 20:31:58 node2 lrmd: [30189]: info: operation monitor[793] on p_drbd_vmstore:1 for client 30192: pid 24510 exited with return code 8
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_0 (call=793, rc=8, cib-update=1374, confirmed=true) master
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: status_from_rc: Action 9 (p_drbd_vmstore:1_monitor_0) on node2 failed (target: 7 vs. rc: 8): Error
- Apr 9 20:31:58 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;9:240:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.4) : Event failed
- Apr 9 20:31:58 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1
- Apr 9 20:31:58 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
- Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_monitor_0 (9) confirmed on node2 (rc=4)
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 8: probe_complete probe_complete on node2 (local) - no waiting
- Apr 9 20:31:58 node2 crmd: [30192]: info: run_graph: ====================================================
- Apr 9 20:31:58 node2 crmd: [30192]: notice: run_graph: Transition 240 (Complete=14, Pending=0, Fired=0, Skipped=33, Incomplete=25, Source=/var/lib/pengine/pe-error-14.bz2): Stopped
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_graph_trigger: Transition 240 is now complete
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_mount2d ]
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke: Query 1375: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1375, ref=pe_calc-dc-1334021518-2435, seq=19, quorate=1
- Apr 9 20:31:58 node2 pengine: [30361]: ERROR: process_pe_message: Transition 240: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-14.bz2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:31:58 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:31:58 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:31:58 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
- Apr 9 20:31:58 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:31:58 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 241: 60 actions in 60 synapses
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_te_invoke: Processing graph 241 (ref=pe_calc-dc-1334021518-2435) derived from /var/lib/pengine/pe-error-15.bz2
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 33 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 80 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 111 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 142 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 155: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=155:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
- Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[796] (pid 24581)
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 166: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=166:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[797] (pid 24582)
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_rsc_command: Initiating action 176: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=176:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:31:58 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[798] (pid 24583)
- Apr 9 20:31:58 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[796] on p_drbd_vmstore:1 for client 30192: pid 24581 exited with return code 0
- Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 155:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2439
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2439 from node2
- Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (155) confirmed on node2 (rc=0)
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=796, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
- Apr 9 20:31:58 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:31:58 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[797] on p_drbd_mount1:1 for client 30192: pid 24582 exited with return code 0
- Apr 9 20:31:58 node2 lrmd: [30189]: info: operation notify[798] on p_drbd_mount2:0 for client 30192: pid 24583 exited with return code 0
- Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 166:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2440
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2440 from node2
- Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (166) confirmed on node2 (rc=0)
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=797, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:31:58 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 176:241:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021518-2441
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021518-2441 from node2
- Apr 9 20:31:58 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (176) confirmed on node2 (rc=0)
- Apr 9 20:31:58 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=798, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 78 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 109 fired and confirmed
- Apr 9 20:31:58 node2 crmd: [30192]: notice: run_graph: ====================================================
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: run_graph: Transition 241 (Complete=15, Pending=0, Fired=0, Skipped=0, Incomplete=45, Source=/var/lib/pengine/pe-error-15.bz2): Terminated
- Apr 9 20:31:58 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Graph 241 (60 actions in 60 synapses): batch-limit=30 jobs, network-delay=60000ms
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 24]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 26]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 32]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 34]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 8 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 pengine: [30361]: ERROR: process_pe_message: Transition 241: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-15.bz2
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 9 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 38]: Pending (id: p_drbd_vmstore:0_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 44]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 11 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 12 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 13 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 43]: Pending (id: p_drbd_vmstore:1_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 49]: Pending (id: ms_drbd_vmstore_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 154]: Pending (id: p_drbd_vmstore:0_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 156]: Pending (id: p_drbd_vmstore:1_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 15 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 48]: Pending (id: ms_drbd_vmstore_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 45]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 47]: Completed (id: ms_drbd_vmstore_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 16 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 17 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 18 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 45]: Pending (id: ms_drbd_vmstore_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 37]: Pending (id: p_drbd_vmstore:0_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 44]: Completed (id: ms_drbd_vmstore_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 19 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 20 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 70]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 22 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 78]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 24 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 25 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 26 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 77]: Pending (id: p_drbd_mount1:1_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 27 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 83]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 165]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 167]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 28 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 82]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 79]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 81]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 29 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 30 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 31 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 79]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 71]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 78]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 32 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 33 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 34 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 35 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 106]: Pending (id: p_drbd_mount2:0_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 36 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 38 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 39 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 114]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 177]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 178]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 113]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 112]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 41 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 42 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 43 is pending (priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 110]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 107]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 44 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 45 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 46 was confirmed (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 47 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 143]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 50 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 51 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 52 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 1]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 53 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 54 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 55 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 56 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 57 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 58 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_graph: Synapse 59 is pending (priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: all_stopped, type: pseduo, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 133]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 134]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:31:58 node2 crmd: [30192]: info: te_graph_trigger: Transition 241 is now complete
- Apr 9 20:31:58 node2 crmd: [30192]: info: notify_mount2d: Transition 241 status: done - <null>
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_mount2d ]
- Apr 9 20:31:58 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
- Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.119.4): ok (rc=0)
- Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/28, version=5.119.5): ok (rc=0)
- Apr 9 20:31:59 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_0, magic=0:7;10:238:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.5) : Resource op removal
- Apr 9 20:31:59 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Apr 9 20:31:59 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:31:59 node2 crmd: [30192]: info: do_pe_invoke: Query 1376: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node1/crmd/30, version=5.119.6): ok (rc=0)
- Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=node1/crmd/31, version=5.119.7): ok (rc=0)
- Apr 9 20:31:59 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node1/crmd/33, version=5.119.8): ok (rc=0)
- Apr 9 20:31:59 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1376, ref=pe_calc-dc-1334021519-2442, seq=19, quorate=1
- Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:31:59 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:31:59 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:31:59 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:0#011(node1)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
- Apr 9 20:31:59 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
- Apr 9 20:31:59 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:31:59 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 242: 62 actions in 62 synapses
- Apr 9 20:31:59 node2 crmd: [30192]: info: do_te_invoke: Processing graph 242 (ref=pe_calc-dc-1334021519-2442) derived from /var/lib/pengine/pe-error-16.bz2
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 26 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_rsc_command: Initiating action 10: monitor p_drbd_vmstore:0_monitor_0 on node1
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 81 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 112 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 143 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_rsc_command: Initiating action 156: notify p_drbd_vmstore:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:31:59 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=156:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_notify_0 )
- Apr 9 20:31:59 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 notify[799] (pid 24759)
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_rsc_command: Initiating action 167: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:31:59 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=167:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:31:59 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[800] (pid 24760)
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_rsc_command: Initiating action 177: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
- Apr 9 20:31:59 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=177:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:31:59 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[801] (pid 24761)
- Apr 9 20:31:59 node2 lrmd: [30189]: info: operation notify[800] on p_drbd_mount1:1 for client 30192: pid 24760 exited with return code 0
- Apr 9 20:31:59 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:31:59 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 167:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021519-2447
- Apr 9 20:31:59 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021519-2447 from node2
- Apr 9 20:31:59 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (167) confirmed on node2 (rc=0)
- Apr 9 20:31:59 node2 lrmd: [30189]: info: RA output: (p_drbd_vmstore:1:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:31:59 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=800, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:31:59 node2 lrmd: [30189]: info: operation notify[799] on p_drbd_vmstore:1 for client 30192: pid 24759 exited with return code 0
- Apr 9 20:31:59 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_notify_0 from 156:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021519-2448
- Apr 9 20:31:59 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021519-2448 from node2
- Apr 9 20:31:59 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (156) confirmed on node2 (rc=0)
- Apr 9 20:31:59 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_notify_0 (call=799, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 82 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 79 fired and confirmed
- Apr 9 20:31:59 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:31:59 node2 lrmd: [30189]: info: operation notify[801] on p_drbd_mount2:0 for client 30192: pid 24761 exited with return code 0
- Apr 9 20:31:59 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 177:242:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021519-2449
- Apr 9 20:31:59 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021519-2449 from node2
- Apr 9 20:31:59 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (177) confirmed on node2 (rc=0)
- Apr 9 20:31:59 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=801, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 113 fired and confirmed
- Apr 9 20:31:59 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 110 fired and confirmed
- Apr 9 20:31:59 node2 pengine: [30361]: ERROR: process_pe_message: Transition 242: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-16.bz2
- Apr 9 20:32:00 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node1/node1/(null), version=5.119.8): ok (rc=0)
- Apr 9 20:32:00 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5-master-p_drbd_vmstore.0, name=master-p_drbd_vmstore:0, value=10000, magic=NA, cib=5.119.10) : Transient attribute: update
- Apr 9 20:32:00 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
- Apr 9 20:32:00 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
- Apr 9 20:32:00 node2 crmd: [30192]: WARN: status_from_rc: Action 10 (p_drbd_vmstore:0_monitor_0) on node1 failed (target: 7 vs. rc: 0): Error
- Apr 9 20:32:00 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_failure_0, magic=0:0;10:242:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.11) : Event failed
- Apr 9 20:32:00 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:0_monitor_0 (10) confirmed on node1 (rc=4)
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 9: probe_complete probe_complete on node1 - no waiting
- Apr 9 20:32:00 node2 crmd: [30192]: info: run_graph: ====================================================
- Apr 9 20:32:00 node2 crmd: [30192]: notice: run_graph: Transition 242 (Complete=17, Pending=0, Fired=0, Skipped=28, Incomplete=17, Source=/var/lib/pengine/pe-error-16.bz2): Stopped
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_graph_trigger: Transition 242 is now complete
- Apr 9 20:32:00 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_mount2d ]
- Apr 9 20:32:00 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:32:00 node2 crmd: [30192]: info: do_pe_invoke: Query 1377: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:32:00 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1377, ref=pe_calc-dc-1334021520-2451, seq=19, quorate=1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:0_last_failure_0 found resource p_drbd_vmstore:0 active on node1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:32:00 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:32:00 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:32:00 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:0#011(Slave node1)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
- Apr 9 20:32:00 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
- Apr 9 20:32:00 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:32:00 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 243: 50 actions in 50 synapses
- Apr 9 20:32:00 node2 crmd: [30192]: info: do_te_invoke: Processing graph 243 (ref=pe_calc-dc-1334021520-2451) derived from /var/lib/pengine/pe-error-17.bz2
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 33 fired and confirmed
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 40: monitor p_drbd_vmstore:0_monitor_20000 on node1
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 45: monitor p_drbd_vmstore:1_monitor_10000 on node2 (local)
- Apr 9 20:32:00 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=45:243:8:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_monitor_10000 )
- Apr 9 20:32:00 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 monitor[802] (pid 24866)
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 82 fired and confirmed
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 113 fired and confirmed
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 144 fired and confirmed
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 172: notify p_drbd_mount1:1_pre_notify_start_0 on node2 (local)
- Apr 9 20:32:00 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=172:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:32:00 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[803] (pid 24867)
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_rsc_command: Initiating action 182: notify p_drbd_mount2:0_pre_notify_start_0 on node2 (local)
- Apr 9 20:32:00 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=182:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:32:00 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[804] (pid 24868)
- Apr 9 20:32:00 node2 lrmd: [30189]: info: RA output: (p_drbd_mount1:1:notify:stdout) drbdsetup 1 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:32:00 node2 lrmd: [30189]: info: operation notify[803] on p_drbd_mount1:1 for client 30192: pid 24867 exited with return code 0
- Apr 9 20:32:00 node2 lrmd: [30189]: info: RA output: (p_drbd_mount2:0:notify:stdout) drbdsetup 2 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:32:00 node2 lrmd: [30189]: info: operation notify[804] on p_drbd_mount2:0 for client 30192: pid 24868 exited with return code 0
- Apr 9 20:32:00 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 172:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021520-2456
- Apr 9 20:32:00 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021520-2456 from node2
- Apr 9 20:32:00 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (172) confirmed on node2 (rc=0)
- Apr 9 20:32:00 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=803, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:32:00 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 182:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021520-2457
- Apr 9 20:32:00 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021520-2457 from node2
- Apr 9 20:32:00 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (182) confirmed on node2 (rc=0)
- Apr 9 20:32:00 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=804, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 83 fired and confirmed
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 80 fired and confirmed
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 114 fired and confirmed
- Apr 9 20:32:00 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 111 fired and confirmed
- Apr 9 20:32:00 node2 lrmd: [30189]: info: operation monitor[802] on p_drbd_vmstore:1 for client 30192: pid 24866 exited with return code 8
- Apr 9 20:32:00 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_10000 (call=802, rc=8, cib-update=1378, confirmed=false) master
- Apr 9 20:32:00 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_monitor_10000 (45) confirmed on node2 (rc=0)
- Apr 9 20:32:00 node2 pengine: [30361]: ERROR: process_pe_message: Transition 243: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-17.bz2
- Apr 9 20:32:02 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:0_monitor_20000 (40) confirmed on node1 (rc=0)
- Apr 9 20:32:02 node2 crmd: [30192]: notice: run_graph: ====================================================
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: run_graph: Transition 243 (Complete=13, Pending=0, Fired=0, Skipped=0, Incomplete=37, Source=/var/lib/pengine/pe-error-17.bz2): Terminated
- Apr 9 20:32:02 node2 crmd: [30192]: ERROR: te_graph_trigger: Transition failed: terminated
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Graph 243 (50 actions in 50 synapses): batch-limit=30 jobs, network-delay=60000ms
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 0 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 24]: Pending (id: p_sysadmin_notify:1_monitor_10000, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 1 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 2 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 26]: Pending (id: cl_sysadmin_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 23]: Pending (id: p_sysadmin_notify:1_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 25]: Completed (id: cl_sysadmin_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 4 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 32]: Pending (id: p_ping:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 5 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 6 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 34]: Pending (id: cl_ping_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 31]: Pending (id: p_ping:1_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 33]: Completed (id: cl_ping_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 7 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 8 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 9 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 10 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 72]: Pending (id: stonithnode2_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 11 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 171]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 12 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 74]: Pending (id: p_drbd_mount1:0_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 85]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 13 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 80]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 14 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 173]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 15 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 16 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 79]: Pending (id: p_drbd_mount1:1_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 85]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 17 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 85]: Pending (id: ms_drbd_mount1_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 84]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 171]: Pending (id: p_drbd_mount1:0_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 173]: Pending (id: p_drbd_mount1:1_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 18 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 84]: Pending (id: ms_drbd_mount1_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 81]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 83]: Completed (id: ms_drbd_mount1_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 19 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 20 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 21 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 81]: Pending (id: ms_drbd_mount1_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 73]: Pending (id: p_drbd_mount1:0_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 80]: Completed (id: ms_drbd_mount1_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 22 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 23 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 183]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 24 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 25 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 108]: Pending (id: p_drbd_mount2:0_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 116]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 26 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 184]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 27 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 110]: Pending (id: p_drbd_mount2:1_monitor_20000, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 116]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 28 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 109]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 111]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 29 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 116]: Pending (id: ms_drbd_mount2_confirmed-post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 115]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 183]: Pending (id: p_drbd_mount2:0_post_notify_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 184]: Pending (id: p_drbd_mount2:1_post_notify_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 30 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 115]: Pending (id: ms_drbd_mount2_post_notify_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 112]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 114]: Completed (id: ms_drbd_mount2_confirmed-pre_notify_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 31 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 32 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 33 is pending (priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 112]: Pending (id: ms_drbd_mount2_running_0, type: pseduo, priority: 1000000)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 109]: Pending (id: p_drbd_mount2:1_start_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 111]: Completed (id: ms_drbd_mount2_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 34 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 35 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 145]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 36 was confirmed (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 37 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 143]: Pending (id: g_vm_running_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 38 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 145]: Pending (id: g_vm_stopped_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 39 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 137]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 40 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 136]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 41 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 135]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 42 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 1]: Pending (id: p_libvirt-bin_monitor_30000, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 43 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 139]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 137]: Pending (id: p_libvirt-bin_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 44 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 45 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 2]: Pending (id: p_fs_vmstore_monitor_20000, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 46 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 141]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 139]: Pending (id: p_fs_vmstore_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 142]: Pending (id: g_vm_start_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 47 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 7]: Pending (id: probe_complete, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 144]: Completed (id: g_vm_stop_0, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 48 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 4]: Pending (id: p_vm_monitor_10000, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 141]: Pending (id: p_vm_start_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_graph: Synapse 49 is pending (priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: [Action 6]: Pending (id: all_stopped, type: pseduo, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 135]: Pending (id: p_libvirt-bin_stop_0, loc: node1, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 136]: Pending (id: p_libvirt-bin_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 138]: Pending (id: p_fs_vmstore_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: WARN: print_elem: * [Input 140]: Pending (id: p_vm_stop_0, loc: node2, priority: 0)
- Apr 9 20:32:02 node2 crmd: [30192]: info: te_graph_trigger: Transition 243 is now complete
- Apr 9 20:32:02 node2 crmd: [30192]: info: notify_crmd: Transition 243 status: done - &lt;null&gt;
- Apr 9 20:32:02 node2 crmd: [30192]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Apr 9 20:32:02 node2 crmd: [30192]: info: do_state_transition: Starting PEngine Recheck Timer
- Apr 9 20:32:08 node2 crmd: [30192]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
- Apr 9 20:32:08 node2 crmd: [30192]: info: notify_deleted: Notifying 4463_crm_resource on node1 that p_drbd_vmstore:0 was deleted
- Apr 9 20:32:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
- Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-2458
- Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=node1/crmd/37, version=5.119.15): ok (rc=0)
- Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:0'] (origin=local/crmd/1379, version=5.119.16): ok (rc=0)
- Apr 9 20:32:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
- Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-2459
- Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:0_last_failure_0, magic=0:0;10:242:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.119.16) : Resource op removal
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Apr 9 20:32:08 node2 crmd: [30192]: WARN: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1382: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <cib admin_epoch="5" epoch="119" num_updates="16" >
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <configuration >
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <crm_config >
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" >
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - <nvpair value="1334021518" id="cib-bootstrap-options-last-lrm-refresh" />
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - </cluster_property_set>
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - </crm_config>
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - </configuration>
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: - </cib>
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <cib epoch="120" num_updates="1" admin_epoch="5" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" update-origin="node2" update-client="crmd" cib-last-written="Mon Apr 9 20:31:58 2012" have-quorum="1" dc-uuid="645e09b4-aee5-4cec-a241-8bd4e03a78c3" >
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <configuration >
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <crm_config >
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" >
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1334021528" />
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + </cluster_property_set>
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + </crm_config>
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + </configuration>
- Apr 9 20:32:08 node2 cib: [30188]: info: cib:diff: + </cib>
- Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1381, version=5.120.1): ok (rc=0)
- Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1383, version=5.120.1): ok (rc=0)
- Apr 9 20:32:08 node2 crmd: [30192]: info: delete_resource: Removing resource p_drbd_vmstore:1 for 4463_crm_resource (internal) on node1
- Apr 9 20:32:08 node2 crmd: [30192]: info: lrm_remove_deleted_op: Removing op p_drbd_vmstore:1_monitor_10000:802 for deleted resource p_drbd_vmstore:1
- Apr 9 20:32:08 node2 crmd: [30192]: info: notify_deleted: Notifying 4463_crm_resource on node1 that p_drbd_vmstore:1 was deleted
- Apr 9 20:32:08 node2 crmd: [30192]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
- Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-2460
- Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='p_drbd_vmstore:1'] (origin=local/crmd/1384, version=5.120.2): ok (rc=0)
- Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=5.120.1) : Non-status change
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1382, ref=pe_calc-dc-1334021528-2461, seq=19, quorate=1
- Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;9:240:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.120.2) : Resource op removal
- Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: te_update_diff:291 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;9:240:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.120.2) : Resource op removal
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1387: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1388: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke: Query 1389: Requesting the current CIB: S_POLICY_ENGINE
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:32:08 node2 cib: [30188]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/1386, version=5.120.3): ok (rc=0)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:0_last_failure_0 found resource p_drbd_vmstore:0 active on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:1_last_failure_0 found resource p_drbd_vmstore:1 active in master mode on node2
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:0 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:1 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:0 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:1 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:0#011(Slave node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_vmstore:1#011(Master node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount1:1#011(Master node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_drbd_mount2:0#011(Master node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Move p_libvirt-bin#011(Started node1 -> node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Restart p_fs_vmstore#011(Started node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Restart p_vm#011(Started node2)
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_pe_invoke_callback: Invoking the PE: query=1389, ref=pe_calc-dc-1334021528-2462, seq=19, quorate=1
- Apr 9 20:32:08 node2 crmd: [30192]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:32:08 node2 crmd: [30192]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:32:08 node2 crmd: [30192]: info: handle_response: pe_calc calculation pe_calc-dc-1334021528-2461 is obsolete
- Apr 9 20:32:08 node2 pengine: [30361]: ERROR: process_pe_message: Transition 244: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-18.bz2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_libvirt-bin_last_failure_0 found resource p_libvirt-bin active on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_vmstore:0_last_failure_0 found resource p_drbd_vmstore:0 active on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount1:1_last_failure_0 found resource p_drbd_mount1:1 active in master mode on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_ping:0_last_failure_0 found resource p_ping:0 active on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_sysadmin_notify:0_last_failure_0 found resource p_sysadmin_notify:0 active on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active in master mode on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: unpack_rsc_op: Operation stonithnode1_last_failure_0 found resource stonithnode1 active on node2
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_sysadmin_notify:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_ping:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_vmstore:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode1_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action stonithnode2_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount1:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_drbd_mount2:0_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_libvirt-bin_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_fs_vmstore_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: custom_action: Action p_vm_monitor_0 on quorumnode is unrunnable (pending)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_sysadmin_notify:1 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_ping:1 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:0 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_vmstore:0 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_vmstore:1 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:0 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount1:0 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount1:1 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:1 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_drbd_mount2:0 on node2
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_drbd_mount2:1 on node1
- Apr 9 20:32:08 node2 pengine: [30361]: ERROR: native_create_actions: Resource p_libvirt-bin (upstart::libvirt-bin) is active on 2 nodes attempting recovery
- Apr 9 20:32:08 node2 pengine: [30361]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (30s) for p_libvirt-bin on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (20s) for p_fs_vmstore on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: RecurringOp: Start recurring monitor (10s) for p_vm on node1
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:0#011(Started node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_sysadmin_notify:1#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_sysadmin_notify:2#011(Stopped)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:0#011(Started node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_ping:1#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave p_ping:2#011(Stopped)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Promote p_drbd_vmstore:0#011(Slave -> Master node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_vmstore:1#011(node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Leave stonithnode1#011(Started node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start stonithnode2#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount1:0#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Promote p_drbd_mount1:0#011(Stopped -> Master node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount1:1#011(Master -> Slave node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Demote p_drbd_mount2:0#011(Master -> Slave node2)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Start p_drbd_mount2:1#011(node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Promote p_drbd_mount2:1#011(Stopped -> Master node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Restart p_libvirt-bin#011(Started node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Move p_fs_vmstore#011(Started node2 -> node1)
- Apr 9 20:32:08 node2 pengine: [30361]: notice: LogActions: Move p_vm#011(Started node2 -> node1)
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Apr 9 20:32:08 node2 crmd: [30192]: info: unpack_graph: Unpacked transition 245: 114 actions in 114 synapses
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_te_invoke: Processing graph 245 (ref=pe_calc-dc-1334021528-2462) derived from /var/lib/pengine/pe-error-19.bz2
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 27 fired and confirmed
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 1: cancel p_drbd_vmstore:0_monitor_20000 on node1
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 10: monitor p_drbd_vmstore:1_monitor_0 on node2 (local)
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=10:245:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:1_monitor_0 )
- Apr 9 20:32:08 node2 lrmd: [30189]: info: rsc:p_drbd_vmstore:1 probe[805] (pid 25288)
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 100 fired and confirmed
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 131 fired and confirmed
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 147 fired and confirmed
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 159: notify p_drbd_vmstore:0_pre_notify_start_0 on node1
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 179: notify p_drbd_mount1:1_pre_notify_demote_0 on node2 (local)
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=179:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:1_notify_0 )
- Apr 9 20:32:08 node2 lrmd: [30189]: info: rsc:p_drbd_mount1:1 notify[806] (pid 25289)
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 190: notify p_drbd_mount2:0_pre_notify_demote_0 on node2 (local)
- Apr 9 20:32:08 node2 crmd: [30192]: info: do_lrm_rsc_op: Performing key=190:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_notify_0 )
- Apr 9 20:32:08 node2 lrmd: [30189]: info: rsc:p_drbd_mount2:0 notify[807] (pid 25290)
- Apr 9 20:32:08 node2 lrmd: [30189]: info: operation notify[806] on p_drbd_mount1:1 for client 30192: pid 25289 exited with return code 0
- Apr 9 20:32:08 node2 lrmd: [30189]: info: operation notify[807] on p_drbd_mount2:0 for client 30192: pid 25290 exited with return code 0
- Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_notify_0 from 179:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021528-2468
- Apr 9 20:32:08 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021528-2468 from node2
- Apr 9 20:32:08 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (179) confirmed on node2 (rc=0)
- Apr 9 20:32:08 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount1:1_notify_0 (call=806, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:32:08 node2 crmd: [30192]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 190:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021528-2469
- Apr 9 20:32:08 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021528-2469 from node2
- Apr 9 20:32:08 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (190) confirmed on node2 (rc=0)
- Apr 9 20:32:08 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=807, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 101 fired and confirmed
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_pseudo_action: Pseudo action 132 fired and confirmed
- Apr 9 20:32:08 node2 pengine: [30361]: ERROR: process_pe_message: Transition 245: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-19.bz2
- Apr 9 20:32:08 node2 lrmd: [30189]: info: operation monitor[805] on p_drbd_vmstore:1 for client 30192: pid 25288 exited with return code 8
- Apr 9 20:32:08 node2 crmd: [30192]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_0 (call=805, rc=8, cib-update=1391, confirmed=true) master
- Apr 9 20:32:08 node2 crmd: [30192]: WARN: status_from_rc: Action 10 (p_drbd_vmstore:1_monitor_0) on node2 failed (target: 7 vs. rc: 8): Error
- Apr 9 20:32:08 node2 crmd: [30192]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vmstore:1_last_failure_0, magic=0:8;10:245:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28, cib=5.120.4) : Event failed
- Apr 9 20:32:08 node2 crmd: [30192]: info: update_abort_priority: Abort priority upgraded from 0 to 1
- Apr 9 20:32:08 node2 crmd: [30192]: info: update_abort_priority: Abort action done superceeded by restart
- Apr 9 20:32:08 node2 crmd: [30192]: info: match_graph_event: Action p_drbd_vmstore:1_monitor_0 (10) confirmed on node2 (rc=4)
- Apr 9 20:32:08 node2 crmd: [30192]: info: te_rsc_command: Initiating action 9: probe_complete probe_complete on node2 (local) - no waiting
- Apr 9 20:32:09 node2 crmd: [30192]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1334021528-16 from node1