- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
- Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
- Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
- Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
- Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
- Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
- Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
- Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
- Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
- Feb 10 23:39:50 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 10 23:39:50 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 52: (null)
- Feb 10 23:39:50 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 52 (ref=pe_calc-dc-1360561190-115) derived from (null)
- Feb 10 23:39:50 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
- Feb 10 23:39:50 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
- Feb 10 23:39:50 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 280220e0-2334-45c8-af3d-670b559f0f4f (0)
- Feb 10 23:39:50 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
- Feb 10 23:39:50 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
- Feb 10 23:39:56 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.280220e0: Timer expired
- Feb 10 23:39:56 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 6/59:52:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
- Feb 10 23:39:56 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 6 for vcs0 failed (Timer expired): aborting transition.
- Feb 10 23:39:56 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
- Feb 10 23:39:56 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=280220e0-2334-45c8-af3d-670b559f0f4f) by client crmd.29811
- Feb 10 23:39:56 [29811] vcsquorum crmd: notice: run_graph: Transition 52 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
- Feb 10 23:39:56 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
- Feb 10 23:39:56 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
- Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
- Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
- Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
- Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
- Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
- Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
- Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
- Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
- Feb 10 23:39:56 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 10 23:39:56 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 53: (null)
- Feb 10 23:39:56 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 53 (ref=pe_calc-dc-1360561196-116) derived from (null)
- Feb 10 23:39:56 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
- Feb 10 23:39:56 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
- Feb 10 23:39:56 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 4571a1e1-e2d9-4a15-98a8-6fdfebacde0e (0)
- Feb 10 23:39:56 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
- Feb 10 23:39:56 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
- Feb 10 23:40:02 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.4571a1e1: Timer expired
- Feb 10 23:40:02 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 7/59:53:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
- Feb 10 23:40:02 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 7 for vcs0 failed (Timer expired): aborting transition.
- Feb 10 23:40:02 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
- Feb 10 23:40:02 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=4571a1e1-e2d9-4a15-98a8-6fdfebacde0e) by client crmd.29811
- Feb 10 23:40:02 [29811] vcsquorum crmd: notice: run_graph: Transition 53 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
- Feb 10 23:40:02 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
- Feb 10 23:40:02 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
- Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
- Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
- Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
- Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
- Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
- Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
- Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
- Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
- Feb 10 23:40:02 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 10 23:40:02 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 54: (null)
- Feb 10 23:40:02 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 54 (ref=pe_calc-dc-1360561202-117) derived from (null)
- Feb 10 23:40:02 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
- Feb 10 23:40:02 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
- Feb 10 23:40:02 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 4421ca1b-055e-4651-8c6d-14887a7f6b9f (0)
- Feb 10 23:40:02 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
- Feb 10 23:40:02 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
- Feb 10 23:40:08 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.4421ca1b: Timer expired
- Feb 10 23:40:08 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 8/59:54:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
- Feb 10 23:40:08 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 8 for vcs0 failed (Timer expired): aborting transition.
- Feb 10 23:40:08 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
- Feb 10 23:40:08 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=4421ca1b-055e-4651-8c6d-14887a7f6b9f) by client crmd.29811
- Feb 10 23:40:08 [29811] vcsquorum crmd: notice: run_graph: Transition 54 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
- Feb 10 23:40:08 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
- Feb 10 23:40:08 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
- Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
- Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
- Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
- Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
- Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
- Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
- Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
- Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
- Feb 10 23:40:08 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 55: (null)
- Feb 10 23:40:08 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 10 23:40:08 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 55 (ref=pe_calc-dc-1360561208-118) derived from (null)
- Feb 10 23:40:08 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
- Feb 10 23:40:08 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
- Feb 10 23:40:08 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 2c7d2267-17bb-4345-a7bf-084e31c6be8d (0)
- Feb 10 23:40:08 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
- Feb 10 23:40:08 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
- Feb 10 23:40:14 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.2c7d2267: Timer expired
- Feb 10 23:40:14 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 9/59:55:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
- Feb 10 23:40:14 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 9 for vcs0 failed (Timer expired): aborting transition.
- Feb 10 23:40:14 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
- Feb 10 23:40:14 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=2c7d2267-17bb-4345-a7bf-084e31c6be8d) by client crmd.29811
- Feb 10 23:40:14 [29811] vcsquorum crmd: notice: run_graph: Transition 55 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
- Feb 10 23:40:14 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
- Feb 10 23:40:14 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
- Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
- Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
- Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
- Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
- Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
- Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
- Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
- Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
- Feb 10 23:40:14 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 10 23:40:14 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 56: (null)
- Feb 10 23:40:14 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 56 (ref=pe_calc-dc-1360561214-119) derived from (null)
- Feb 10 23:40:14 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
- Feb 10 23:40:14 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
- Feb 10 23:40:14 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 4937ed3c-30c2-4795-94e8-36f4e2c0fa52 (0)
- Feb 10 23:40:14 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
- Feb 10 23:40:14 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
- Feb 10 23:40:20 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.4937ed3c: Timer expired
- Feb 10 23:40:20 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 10/59:56:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
- Feb 10 23:40:20 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 10 for vcs0 failed (Timer expired): aborting transition.
- Feb 10 23:40:20 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
- Feb 10 23:40:20 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=4937ed3c-30c2-4795-94e8-36f4e2c0fa52) by client crmd.29811
- Feb 10 23:40:20 [29811] vcsquorum crmd: notice: run_graph: Transition 56 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
- Feb 10 23:40:20 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
- Feb 10 23:40:20 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
- Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
- Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
- Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
- Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
- Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
- Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
- Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
- Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
- Feb 10 23:40:20 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 10 23:40:20 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 57: (null)
- Feb 10 23:40:20 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 57 (ref=pe_calc-dc-1360561220-120) derived from (null)
- Feb 10 23:40:20 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
- Feb 10 23:40:20 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
- Feb 10 23:40:20 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 5b5d0013-9ded-4330-9f15-226eb95f061a (0)
- Feb 10 23:40:20 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
- Feb 10 23:40:20 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
- Feb 10 23:40:26 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.5b5d0013: Timer expired
- Feb 10 23:40:26 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 11/59:57:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
- Feb 10 23:40:26 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 11 for vcs0 failed (Timer expired): aborting transition.
- Feb 10 23:40:26 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
- Feb 10 23:40:26 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=5b5d0013-9ded-4330-9f15-226eb95f061a) by client crmd.29811
- Feb 10 23:40:26 [29811] vcsquorum crmd: notice: run_graph: Transition 57 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
- Feb 10 23:40:26 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
- Feb 10 23:40:27 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
- Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
- Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
- Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
- Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
- Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
- Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
- Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
- Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
- Feb 10 23:40:27 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 58: (null)
- Feb 10 23:40:27 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 10 23:40:27 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 58 (ref=pe_calc-dc-1360561226-121) derived from (null)
- Feb 10 23:40:27 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
- Feb 10 23:40:27 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
- Feb 10 23:40:27 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 42e5ef92-424e-4fae-a95a-02b55029880f (0)
- Feb 10 23:40:27 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
- Feb 10 23:40:27 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
- Feb 10 23:40:33 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.42e5ef92: Timer expired
- Feb 10 23:40:33 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 12/59:58:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
- Feb 10 23:40:33 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 12 for vcs0 failed (Timer expired): aborting transition.
- Feb 10 23:40:33 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
- Feb 10 23:40:33 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=42e5ef92-424e-4fae-a95a-02b55029880f) by client crmd.29811
- Feb 10 23:40:33 [29811] vcsquorum crmd: notice: run_graph: Transition 58 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
- Feb 10 23:40:33 [29811] vcsquorum crmd: notice: too_many_st_failures: Too many failures to fence vcs0 (11), giving up
- Feb 10 23:40:33 [26090] vcsquorum corosync notice [QUORUM] This node is within the non-primary component and will NOT provide any services.
- Feb 10 23:40:33 [26090] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
- Feb 10 23:40:33 [26090] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
- Feb 10 23:40:33 [29811] vcsquorum crmd: notice: pcmk_quorum_notification: Membership 63836: quorum lost (1)
- Feb 10 23:40:33 [29811] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1425984502/vcs1.example.com was not seen in the previous transition
- Feb 10 23:40:33 [29811] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs1.example.com[2868982794] - state is now lost
- Feb 10 23:40:33 [29811] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now lost (was member)
- Feb 10 23:40:33 [26090] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63836) was formed.
- Feb 10 23:40:33 [26090] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 10 23:40:33 [29811] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63836: quorum still lost (1)
- Feb 10 23:40:33 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/104, version=2.100.39): OK (rc=0)
- Feb 10 23:40:33 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/105, version=2.100.40): OK (rc=0)
- Feb 10 23:40:33 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/107, version=2.100.42): OK (rc=0)
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1442761718
- Feb 10 23:41:40 [29811] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63840: quorum still lost (2)
- Feb 10 23:41:40 [29811] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0[2852205578] - state is now member
- Feb 10 23:41:40 [29811] vcsquorum crmd: info: peer_update_callback: vcs0 is now member (was (null))
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63840) was formed.
- Feb 10 23:41:40 [29811] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [QUORUM] This node is within the primary component and will provide service.
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1442761718
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 7
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 7
- Feb 10 23:41:40 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/109, version=2.100.44): OK (rc=0)
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: a
- Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: c
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: e
- Feb 10 23:41:41 [29811] vcsquorum crmd: notice: pcmk_quorum_notification: Membership 63840: quorum acquired (2)
- Feb 10 23:41:41 [29807] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[2.0] stonith-ng.-1442761718
- Feb 10 23:41:41 [29807] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[2.0] stonith-ng.755053578
- Feb 10 23:41:41 [29807] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[2.1] stonith-ng.-1442761718
- Feb 10 23:41:41 [29807] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 18
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 18
- Feb 10 23:41:41 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/111, version=2.100.46): OK (rc=0)
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1c
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1c
- Feb 10 23:41:41 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/112, version=2.100.47): OK (rc=0)
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1d
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1d
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1d
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1f
- Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1f
- Feb 10 23:41:41 [29805] vcsquorum cib: info: pcmk_cpg_membership: Joined[2.0] cib.-1442761718
- Feb 10 23:41:41 [29805] vcsquorum cib: info: pcmk_cpg_membership: Member[2.0] cib.755053578
- Feb 10 23:41:41 [29805] vcsquorum cib: info: pcmk_cpg_membership: Member[2.1] cib.-1442761718
- Feb 10 23:41:41 [29805] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: pcmk_cpg_membership: Joined[2.0] crmd.-1442761718
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: pcmk_cpg_membership: Member[2.0] crmd.755053578
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: pcmk_cpg_membership: Member[2.1] crmd.-1442761718
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [online] (DC=true)
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: peer_update_callback: Node return implies stonith of vcs0 (action 59) completed
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/transient_attributes
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
- Feb 10 23:41:42 [29811] vcsquorum crmd: notice: too_many_st_failures: Too many failures to fence vcs0 (11), giving up
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63840
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
- Feb 10 23:41:42 [29811] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 10 23:41:42 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/114, version=2.100.49): OK (rc=0)
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 26
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 26
- Feb 10 23:41:42 [29805] vcsquorum cib: warning: cib_process_diff: Diff 2.100.0 -> 2.100.1 from vcs0 not applied to 2.100.49: current "num_updates" is greater than required
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2a 2c 2e 30
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2a 2e
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2a
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2a
- Feb 10 23:41:42 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/transient_attributes (origin=local/crmd/115, version=2.100.50): OK (rc=0)
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 34
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 34
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 37
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 38
- Feb 10 23:41:42 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs0/vcs0/(null), version=2.100.51): OK (rc=0)
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 3a 3c 3e 40 42 44
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 3a 3e 42
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 3a 42
- Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 42
- Feb 10 23:41:43 [29811] vcsquorum crmd: info: do_dc_join_offer_all: A new node joined the cluster
- Feb 10 23:41:43 [29811] vcsquorum crmd: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
- Feb 10 23:41:43 [29811] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 10 23:41:43 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 47
- Feb 10 23:41:43 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 47
- Feb 10 23:41:43 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 49
- Feb 10 23:41:44 [29811] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcs0[-1442761718] - expected state is now member
- Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_dc_join_finalize: join-3: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/119, version=2.100.51): OK (rc=0)
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 4d 4f 51 53 55 57
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 4f 53 57
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 53 59
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 53
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 53
- Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_dc_join_ack: join-3: Updating node state to member for vcsquorum
- Feb 10 23:41:44 [29811] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_dc_join_ack: join-3: Updating node state to member for vcs0
- Feb 10 23:41:44 [29811] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 5a 5c 5e
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 5a 5e
- Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/120, version=2.100.52): OK (rc=0)
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 5e
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 63
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 63
- Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/121, version=2.100.53): OK (rc=0)
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 66
- Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/transient_attributes (origin=vcs0/crmd/8, version=2.100.54): OK (rc=0)
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 68
- Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/122, version=2.100.55): OK (rc=0)
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 6a 6c 6e
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 6c
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 6c
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 70 72 74 76 78
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 72 76
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 72
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 72
- Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/124, version=2.100.57): OK (rc=0)
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 7a
- Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 7c
- Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 10 23:41:44 [29811] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 10 23:41:44 [29811] vcsquorum crmd: notice: too_many_st_failures: Too many failures to fence vcs0 (11), giving up
- Feb 10 23:41:44 [29809] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 10 23:41:44 [29809] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 10 23:41:44 [26090] vcsquorum corosync error [TOTEM ] Marking ringid 0 interface 192.168.1.45 FAULTY
- Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/126, version=2.100.59): OK (rc=0)
- Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/128, version=2.100.61): OK (rc=0)
- Feb 10 23:41:45 [26090] vcsquorum corosync notice [TOTEM ] Automatically recovered ring 0
- Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Unloading all Corosync service engines.
- Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
- Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync vote quorum service v1.0
- Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
- Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync configuration map access
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: cfg_connection_destroy: Connection destroyed
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: pcmk_shutdown_worker: Shutting down Pacemaker
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: stop_child: Stopping crmd: Sent -15 to process 29811
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Feb 10 23:41:46 [29811] vcsquorum crmd: notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_shutdown_req: Sending shutdown request to vcsquorum
- Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
- Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync configuration service
- Feb 10 23:41:46 [29811] vcsquorum crmd: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: crmd_ais_destroy: connection closed
- Feb 10 23:41:46 [29805] vcsquorum cib: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
- Feb 10 23:41:46 [29805] vcsquorum cib: error: cib_ais_destroy: Corosync connection lost! Exiting.
- Feb 10 23:41:46 [29805] vcsquorum cib: info: terminate_cib: cib_ais_destroy: Exiting fast...
- Feb 10 23:41:46 [29805] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 10 23:41:46 [29805] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 10 23:41:46 [29807] vcsquorum stonith-ng: error: crm_ipc_read: Connection to cib_rw failed
- Feb 10 23:41:46 [29807] vcsquorum stonith-ng: error: mainloop_gio_callback: Connection to cib_rw[0x9ca7e0] closed (I/O condition=17)
- Feb 10 23:41:46 [29805] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 10 23:41:46 [29809] vcsquorum attrd: error: crm_ipc_read: Connection to cib_rw failed
- Feb 10 23:41:46 [29809] vcsquorum attrd: error: mainloop_gio_callback: Connection to cib_rw[0x23e2860] closed (I/O condition=17)
- Feb 10 23:41:46 [29809] vcsquorum attrd: error: attrd_cib_connection_destroy: Connection to the CIB terminated...
- Feb 10 23:41:46 [29811] vcsquorum crmd: error: crm_ipc_read: Connection to cib_shm failed
- Feb 10 23:41:46 [29811] vcsquorum crmd: error: mainloop_gio_callback: Connection to cib_shm[0xbc6490] closed (I/O condition=17)
- Feb 10 23:41:46 [29811] vcsquorum crmd: error: crmd_cib_connection_destroy: Connection to the CIB terminated...
- Feb 10 23:41:46 [29811] vcsquorum crmd: error: do_log: FSA: Input I_ERROR from crmd_cib_connection_destroy() received in state S_POLICY_ENGINE
- Feb 10 23:41:46 [29811] vcsquorum crmd: warning: do_state_transition: State transition S_POLICY_ENGINE -> S_RECOVERY [ input=I_ERROR cause=C_FSA_INTERNAL origin=crmd_cib_connection_destroy ]
- Feb 10 23:41:46 [29811] vcsquorum crmd: error: do_recover: Action A_RECOVER (0000000001000000) not supported
- Feb 10 23:41:46 [29811] vcsquorum crmd: warning: do_election_vote: Not voting in election, we're in state S_RECOVERY
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_dc_release: DC role released
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: pe_ipc_destroy: Connection to the Policy Engine released
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_te_control: Transitioner is now inactive
- Feb 10 23:41:46 [29811] vcsquorum crmd: error: do_log: FSA: Input I_TERMINATE from do_recover() received in state S_RECOVERY
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_state_transition: State transition S_RECOVERY -> S_TERMINATE [ input=I_TERMINATE cause=C_FSA_INTERNAL origin=do_recover ]
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_shutdown: Disconnecting STONITH...
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: lrmd_connection_destroy: connection destroyed
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: lrm_connection_destroy: LRM Connection disconnected
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_lrm_control: Disconnected from the LRM
- Feb 10 23:41:46 [29808] vcsquorum lrmd: info: lrmd_ipc_destroy: LRMD client disconnecting 0x6bbc00 - name: crmd id: 30675725-cb57-4d05-a2da-34d192311002
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
- Feb 10 23:41:46 [29811] vcsquorum crmd: notice: terminate_cs_connection: Disconnecting from Corosync
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: crm_cluster_disconnect: Disconnected from corosync
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_ha_control: Disconnected from the cluster
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_cib_control: Disconnecting CIB
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
- Feb 10 23:41:46 [29811] vcsquorum crmd: error: do_exit: Could not recover from internal error
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_exit: [crmd] stopped (2)
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: free_mem: Dropping I_PENDING: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_election_vote ]
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_dc_release ]
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: free_mem: Dropping I_TERMINATE: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_stop ]
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
- Feb 10 23:41:46 [29811] vcsquorum crmd: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: pcmk_child_exit: Child process cib exited (pid=29805, rc=64)
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: pcmk_child_exit: Child process attrd exited (pid=29809, rc=1)
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: pcmk_child_exit: Child process crmd exited (pid=29811, rc=2)
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: stop_child: Stopping pengine: Sent -15 to process 29810
- Feb 10 23:41:46 [29807] vcsquorum stonith-ng: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
- Feb 10 23:41:46 [29807] vcsquorum stonith-ng: error: stonith_peer_ais_destroy: AIS connection terminated
- Feb 10 23:41:46 [29807] vcsquorum stonith-ng: info: stonith_shutdown: Terminating with 1 clients
- Feb 10 23:41:46 [29807] vcsquorum stonith-ng: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 10 23:41:46 [29807] vcsquorum stonith-ng: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Feb 10 23:41:46 [29807] vcsquorum stonith-ng: info: main: Done
- Feb 10 23:41:46 [29808] vcsquorum lrmd: error: crm_ipc_read: Connection to stonith-ng failed
- Feb 10 23:41:46 [29808] vcsquorum lrmd: error: mainloop_gio_callback: Connection to stonith-ng[0x6c3830] closed (I/O condition=17)
- Feb 10 23:41:46 [29808] vcsquorum lrmd: error: stonith_connection_destroy_cb: LRMD lost STONITH connection
- Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: cpg_connection_destroy: Connection destroyed
- Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: pcmk_child_exit: Child process stonith-ng exited (pid=29807, rc=0)
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: pcmk_child_exit: Child process pengine exited (pid=29810, rc=0)
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
- Feb 10 23:41:46 [29808] vcsquorum lrmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Feb 10 23:41:46 [29808] vcsquorum lrmd: info: lrmd_shutdown: Terminating with 0 clients
- Feb 10 23:41:46 [29808] vcsquorum lrmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: stop_child: Stopping lrmd: Sent -15 to process 29808
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: pcmk_child_exit: Child process lrmd exited (pid=29808, rc=0)
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: pcmk_shutdown_worker: Shutdown complete
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: main: Exiting pacemakerd
- Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
- Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
- Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync profile loading service
- Feb 10 23:41:46 [26090] vcsquorum corosync notice [MAIN ] Corosync Cluster Engine exiting normally
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] Initializing transport (UDP/IP Multicast).
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] Initializing transport (UDP/IP Multicast).
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] The network interface [192.168.1.45] is now up.
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync configuration map access [0]
- Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: cmap
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync configuration service [1]
- Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: cfg
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
- Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: cpg
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync profile loading service [4]
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [QUORUM] Using quorum provider corosync_votequorum
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
- Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: votequorum
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
- Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: quorum
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] The network interface [192.168.7.45] is now up.
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63844) was formed.
- Feb 10 23:43:05 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 10 23:43:06 [1099] vcsquorum corosync notice [QUORUM] Members[3]: 755053578 -1442761718 -1425984502
- Feb 10 23:43:06 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63852) was formed.
- Feb 10 23:43:06 [1099] vcsquorum corosync notice [QUORUM] This node is within the primary component and will provide service.
- Feb 10 23:43:06 [1099] vcsquorum corosync notice [QUORUM] Members[3]: 755053578 -1442761718 -1425984502
- Feb 10 23:43:06 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: read_config: User configured file based logging and explicitly disabled syslog.
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: notice: main: Starting Pacemaker 1.1.8 (Build: 1f8858c): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart systemd corosync-native snmp libesmtp
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: main: Maximum core file size is: 18446744073709551615
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: notice: update_node_processes: 0xfa83b0 Node 755053578 now known as vcsquorum, was:
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1199 for process cib
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1201 for process stonith-ng
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1202 for process lrmd
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1203 for process attrd
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1204 for process pengine
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1205 for process crmd
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: main: Starting mainloop
- Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
- Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: get_cluster_type: Cluster type is: 'corosync'
- Feb 10 23:43:07 [1201] vcsquorum stonith-ng: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 755053578
- Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
- Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: init_cs_connection_once: Connection to 'corosync': established
- Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 755053578 is now known as vcsquorum
- Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 755053578 has uuid 755053578
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: notice: update_node_processes: 0x11ac800 Node 2868982794 now known as vcs1, was:
- Feb 10 23:43:07 [1197] vcsquorum pacemakerd: notice: update_node_processes: 0xfa8b90 Node 2852205578 now known as vcs0, was:
- Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
- Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Feb 10 23:43:07 [1203] vcsquorum attrd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Feb 10 23:43:07 [1203] vcsquorum attrd: notice: main: Starting mainloop...
- Feb 10 23:43:07 [1199] vcsquorum cib: info: get_cluster_type: Cluster type is: 'corosync'
- Feb 10 23:43:07 [1199] vcsquorum cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
- Feb 10 23:43:07 [1199] vcsquorum cib: info: validate_with_relaxng: Creating RNG parser context
- Feb 10 23:43:07 [1202] vcsquorum lrmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
- Feb 10 23:43:07 [1202] vcsquorum lrmd: info: qb_ipcs_us_publish: server name: lrmd
- Feb 10 23:43:07 [1202] vcsquorum lrmd: info: main: Starting
- Feb 10 23:43:07 [1205] vcsquorum crmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Feb 10 23:43:07 [1205] vcsquorum crmd: notice: main: CRM Git Version: 1f8858c
- Feb 10 23:43:07 [1205] vcsquorum crmd: info: get_cluster_type: Cluster type is: 'corosync'
- Feb 10 23:43:07 [1205] vcsquorum crmd: info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
- Feb 10 23:43:07 [1199] vcsquorum cib: info: startCib: CIB Initialization completed successfully
- Feb 10 23:43:07 [1199] vcsquorum cib: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 755053578
- Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
- Feb 10 23:43:07 [1199] vcsquorum cib: info: init_cs_connection_once: Connection to 'corosync': established
- Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node 755053578 is now known as vcsquorum
- Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node 755053578 has uuid 755053578
- Feb 10 23:43:07 [1199] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_ro
- Feb 10 23:43:07 [1199] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_rw
- Feb 10 23:43:07 [1199] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_shm
- Feb 10 23:43:07 [1199] vcsquorum cib: info: cib_init: Starting cib mainloop
- Feb 10 23:43:07 [1199] vcsquorum cib: info: pcmk_cpg_membership: Joined[0.0] cib.755053578
- Feb 10 23:43:07 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[0.0] cib.755053578
- Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 2852205578
- Feb 10 23:43:07 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[0.1] cib.-1442761718
- Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1442761718] - corosync-cpg is now online
- Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 2868982794
- Feb 10 23:43:07 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[0.2] cib.-1425984502
- Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1425984502] - corosync-cpg is now online
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: notice: setup_cib: Watching for stonith topology changes
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: qb_ipcs_us_publish: server name: stonith-ng
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: main: Starting stonith-ng mainloop
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[0.0] stonith-ng.755053578
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.0] stonith-ng.755053578
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 2852205578
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.1] stonith-ng.-1442761718
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1442761718] - corosync-cpg is now online
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 2868982794
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.2] stonith-ng.-1425984502
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1425984502] - corosync-cpg is now online
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 2852205578 is now known as vcs0
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 2852205578 has uuid 2852205578
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 2868982794 is now known as vcs1
- Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 2868982794 has uuid 2868982794
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: do_cib_control: CIB connection established
- Feb 10 23:43:08 [1205] vcsquorum crmd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 755053578
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: init_cs_connection_once: Connection to 'corosync': established
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 755053578 is now known as vcsquorum
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcsquorum is now (null)
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 755053578 has uuid 755053578
- Feb 10 23:43:08 [1205] vcsquorum crmd: notice: init_quorum_connection: Quorum acquired
- Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/3, version=2.100.1): OK (rc=0)
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: do_ha_control: Connected to the cluster
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: lrmd_api_connect: Connecting to lrmd
- Feb 10 23:43:08 [1202] vcsquorum lrmd: info: lrmd_ipc_accept: Accepting client connection: 0x2244e00 pid=1205 for uid=997 gid=0
- Feb 10 23:43:08 [1199] vcsquorum cib: info: crm_get_peer: Node 2852205578 is now known as vcs0
- Feb 10 23:43:08 [1199] vcsquorum cib: info: crm_get_peer: Node 2852205578 has uuid 2852205578
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: do_started: Delaying start, no membership data (0000000000100000)
- Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.101.2 -> 2.101.3 from vcs0 not applied to 2.100.1: current "epoch" is less than required
- Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_server_process_diff: Requesting re-sync from peer
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63852: quorum retained (3)
- Feb 10 23:43:08 [1205] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcsquorum[755053578] - state is now member
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcsquorum is now member (was (null))
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 2852205578
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: pcmk_quorum_notification: Obtaining name for new node 2852205578
- Feb 10 23:43:08 [1205] vcsquorum crmd: notice: corosync_node_name: Inferred node name 'vcs0.example.com' for nodeid 2852205578 from DNS
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 2852205578 is now known as vcs0.example.com
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now (null)
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 2852205578 has uuid 2852205578
- Feb 10 23:43:08 [1205] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0.example.com[2852205578] - state is now member
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now member (was (null))
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 2868982794
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: pcmk_quorum_notification: Obtaining name for new node 2868982794
- Feb 10 23:43:08 [1205] vcsquorum crmd: notice: corosync_node_name: Inferred node name 'vcs1.example.com' for nodeid 2868982794 from DNS
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 2868982794 is now known as vcs1.example.com
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now (null)
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 2868982794 has uuid 2868982794
- Feb 10 23:43:08 [1205] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs1.example.com[2868982794] - state is now member
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now member (was (null))
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: qb_ipcs_us_publish: server name: crmd
- Feb 10 23:43:08 [1205] vcsquorum crmd: notice: do_started: The local CRM is operational
- Feb 10 23:43:08 [1205] vcsquorum crmd: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
- Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs0: 15070d3a94b6c6b977ed638be996c276
- Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_process_replace: Replaced 2.100.1 with 2.101.3 from vcs0
- Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.100.1 -> 2.101.3 from vcs0
- Feb 10 23:43:09 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Joined[0.0] crmd.755053578
- Feb 10 23:43:09 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.0] crmd.755053578
- Feb 10 23:43:09 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.1] crmd.-1442761718
- Feb 10 23:43:09 [1205] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0.example.com[-1442761718] - corosync-cpg is now online
- Feb 10 23:43:09 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs0.example.com/peer now has status [online] (DC=<null>)
- Feb 10 23:43:09 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.2] crmd.-1425984502
- Feb 10 23:43:09 [1205] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1.example.com[-1425984502] - corosync-cpg is now online
- Feb 10 23:43:09 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs1.example.com/peer now has status [online] (DC=<null>)
- Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
- Feb 10 23:43:29 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
- Feb 10 23:43:29 [1205] vcsquorum crmd: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 10 23:43:29 [1205] vcsquorum crmd: crit: crm_get_peer: Node vcs0.example.com and vcs0 share the same cluster node id '2852205578'!
- Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_get_peer: Node vcs0 now has id: 2852205578
- Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_get_peer: Node 2852205578 is now known as vcs0
- Feb 10 23:43:29 [1205] vcsquorum crmd: info: peer_update_callback: vcs0 is now (null)
- Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_get_peer: Node 2852205578 has uuid 2852205578
- Feb 10 23:43:29 [1205] vcsquorum crmd: error: crmd_ais_dispatch: Receiving messages from a node we think is dead: vcs0[-1442761718]
- Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_update_peer_proc: crmd_ais_dispatch: Node vcs0[-1442761718] - corosync-cpg is now online
- Feb 10 23:43:29 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [online] (DC=<null>)
- Feb 10 23:43:29 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 7 (current: 2, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:44:50 [1199] vcsquorum cib: info: crm_get_peer: Node 2868982794 is now known as vcs1
- Feb 10 23:44:50 [1199] vcsquorum cib: info: crm_get_peer: Node 2868982794 has uuid 2868982794
- Feb 10 23:45:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
- Feb 10 23:45:06 [1205] vcsquorum crmd: crit: crm_get_peer: Node vcs1.example.com and vcs1 share the same cluster node id '2868982794'!
- Feb 10 23:45:06 [1205] vcsquorum crmd: info: crm_get_peer: Node vcs1 now has id: 2868982794
- Feb 10 23:45:06 [1205] vcsquorum crmd: info: crm_get_peer: Node 2868982794 is now known as vcs1
- Feb 10 23:45:06 [1205] vcsquorum crmd: info: peer_update_callback: vcs1 is now (null)
- Feb 10 23:45:06 [1205] vcsquorum crmd: info: crm_get_peer: Node 2868982794 has uuid 2868982794
- Feb 10 23:45:06 [1205] vcsquorum crmd: error: crmd_ais_dispatch: Receiving messages from a node we think is dead: vcs1[-1425984502]
- Feb 10 23:45:06 [1205] vcsquorum crmd: info: crm_update_peer_proc: crmd_ais_dispatch: Node vcs1[-1425984502] - corosync-cpg is now online
- Feb 10 23:45:06 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs1/peer now has status [online] (DC=<null>)
- Feb 10 23:45:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 6 (current: 3, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 10 23:45:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 8 (current: 3, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:45:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 9 (current: 3, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:45:29 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 10 23:45:29 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 10 23:45:29 [1205] vcsquorum crmd: info: do_te_control: Registering TE UUID: 4e8cb7a7-66f8-4877-98c7-2c096796e92d
- Feb 10 23:45:29 [1205] vcsquorum crmd: info: set_graph_functions: Setting custom graph functions
- Feb 10 23:45:29 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_readwrite: We are now in R/W mode
- Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/6, version=2.101.12): OK (rc=0)
- Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/7, version=2.101.13): OK (rc=0)
- Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/9, version=2.101.14): OK (rc=0)
- Feb 10 23:45:29 [1205] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63852
- Feb 10 23:45:29 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-1: Waiting on 3 outstanding join acks
- Feb 10 23:45:29 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/11, version=2.101.15): OK (rc=0)
- Feb 10 23:45:29 [1205] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcsquorum[755053578] - expected state is now member
- Feb 10 23:47:06 [1205] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
- Feb 10 23:47:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
- Feb 10 23:47:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 7 (current: 4, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 10 23:47:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 10 (current: 4, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:47:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 11 (current: 4, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:47:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 10 23:47:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 10 23:47:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.15 -> 2.101.16 from vcs0 not applied to 2.101.16: current "num_updates" is greater than required
- Feb 10 23:47:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.16 -> 2.101.17 from vcs0 not applied to 2.101.17: current "num_updates" is greater than required
- Feb 10 23:47:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.17 -> 2.101.18 from vcs0 not applied to 2.101.18: current "num_updates" is greater than required
- Feb 10 23:47:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.18 -> 2.101.19 from vcs0 not applied to 2.101.19: current "num_updates" is greater than required
- Feb 10 23:49:06 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 10 23:49:06 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 10 23:49:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/14, version=2.101.20): OK (rc=0)
- Feb 10 23:49:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/15, version=2.101.21): OK (rc=0)
- Feb 10 23:49:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.19 -> 2.101.20 from vcs1 not applied to 2.101.21: current "num_updates" is greater than required
- Feb 10 23:49:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.20 -> 2.101.21 from vcs1 not applied to 2.101.21: current "num_updates" is greater than required
- Feb 10 23:49:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/17, version=2.101.28): OK (rc=0)
- Feb 10 23:49:06 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-2: Waiting on 3 outstanding join acks
- Feb 10 23:49:06 [1205] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
- Feb 10 23:49:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
- Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 8 (current: 5, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 12 (current: 5, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:49:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 10 23:49:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
- Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 13 (current: 6, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:49:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/19, version=2.101.29): OK (rc=0)
- Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 14 (current: 6, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:51:06 [1205] vcsquorum crmd: info: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
- Feb 10 23:51:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 9 (current: 6, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 10 23:51:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 15 (current: 6, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:51:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 10 23:51:06 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 10 23:51:06 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 10 23:51:06 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 10 23:51:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/21, version=2.101.38): OK (rc=0)
- Feb 10 23:51:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/22, version=2.101.39): OK (rc=0)
- Feb 10 23:51:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/24, version=2.101.40): OK (rc=0)
- Feb 10 23:51:06 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-3: Waiting on 3 outstanding join acks
- Feb 10 23:51:06 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 10 23:51:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/26, version=2.101.41): OK (rc=0)
- Feb 10 23:53:06 [1205] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
- Feb 10 23:53:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
- Feb 10 23:53:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 10 (current: 7, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 10 23:53:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 7 (current: 7, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
- Feb 10 23:55:06 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 10 23:55:06 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 10 23:55:06 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 10 23:55:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/29, version=2.101.50): OK (rc=0)
- Feb 10 23:55:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/30, version=2.101.51): OK (rc=0)
- Feb 10 23:55:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.49 -> 2.101.50 from vcs1 not applied to 2.101.51: current "num_updates" is greater than required
- Feb 10 23:55:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.50 -> 2.101.51 from vcs1 not applied to 2.101.51: current "num_updates" is greater than required
- Feb 10 23:55:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/32, version=2.101.54): OK (rc=0)
- Feb 10 23:55:06 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-4: Waiting on 3 outstanding join acks
- Feb 10 23:55:06 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 10 23:55:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/34, version=2.101.55): OK (rc=0)
- Feb 10 23:58:06 [1205] vcsquorum crmd: error: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: crm_timer_popped: Welcomed: 2, Integrated: 1
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 10 23:58:06 [1205] vcsquorum crmd: warning: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
- Feb 10 23:58:06 [1205] vcsquorum crmd: warning: do_state_transition: 2 cluster nodes failed to respond to the join offer.
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs1.example.com 4
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs0.example.com 4
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: do_dc_join_finalize: join-4: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/37, version=2.101.55): OK (rc=0)
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/transient_attributes
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: update_attrd: Connecting to attrd... 5 retries remaining
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: do_dc_join_ack: join-4: Updating node state to member for vcsquorum
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/38, version=2.101.56): OK (rc=0)
- Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 09b83e9a5a32e2ad5270161ca7dc6a3c
- Feb 10 23:58:06 [1199] vcsquorum cib: warning: cib_process_replace: Replacement 2.101.55 from vcs1 not applied to 2.101.56: current num_updates is greater than the replacement
- Feb 10 23:58:06 [1199] vcsquorum cib: warning: cib_diff_notify: Update (client: crmd, call:54): 2.101.56 -> 2.101.55 (Update was older than existing configuration)
- Feb 10 23:58:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.55 -> 2.101.56 from vcs1 not applied to 2.101.56: current "num_updates" is greater than required
- Feb 10 23:58:06 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.101.59
- Feb 10 23:58:06 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.102.1
- Feb 10 23:58:06 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum" id="755053578" />
- Feb 10 23:58:06 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum.example.com" />
- Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/transient_attributes (origin=local/crmd/39, version=2.102.4): OK (rc=0)
- Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/40, version=2.102.5): OK (rc=0)
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 10 23:58:06 [1205] vcsquorum crmd: warning: do_state_transition: Only 1 of 3 cluster nodes are eligible to run resources - continue 2
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 10 23:58:06 [1203] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 10 23:58:06 [1205] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=2.103.1) : Non-status change
- Feb 10 23:58:07 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.102.6
- Feb 10 23:58:07 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.103.1
- Feb 10 23:58:07 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
- Feb 10 23:58:07 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
- Feb 10 23:58:07 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/42, version=2.103.1): OK (rc=0)
- Feb 10 23:58:07 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/44, version=2.103.3): OK (rc=0)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (pending)
- Feb 10 23:58:07 [1204] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
- Feb 10 23:58:07 [1204] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
- Feb 10 23:58:07 [1205] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 10 23:58:07 [1205] vcsquorum crmd: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1360562287-25) derived from /var/lib/pacemaker/pengine/pe-input-872.bz2
- Feb 10 23:58:07 [1205] vcsquorum crmd: notice: run_graph: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-872.bz2): Complete
- Feb 10 23:58:07 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 10 23:58:07 [1204] vcsquorum pengine: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-872.bz2
- Feb 11 00:00:23 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.103.3 -> 2.103.4 from vcs1 not applied to 2.103.4: current "num_updates" is greater than required
- Feb 11 00:00:23 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 11 (current: 7, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 11 00:00:23 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 16 (current: 7, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:00:23 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.103.4 -> 2.103.5 from vcs0
- Feb 11 00:00:23 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
- Feb 11 00:00:23 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 8 (current: 8, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.103.4
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.103.5
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: -- <cib num_updates="4" />
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: ++ <cib epoch="103" num_updates="5" admin_epoch="2" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Sun Feb 10 23:58:06 2013" have-quorum="1" dc-uuid="755053578" />
- Feb 11 00:00:23 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.103.4 -> 2.103.5 from vcs1 not applied to 2.103.5: current "num_updates" is greater than required
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.103.5
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.104.1
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum" id="755053578" />
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum.example.com" />
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.104.2
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.105.1
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
- Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
- Feb 11 00:00:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/47, version=2.105.1): OK (rc=0)
- Feb 11 00:00:23 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:00:23 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 00:00:37 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.105.2 -> 2.105.3 from vcs1 not applied to 2.105.3: current "num_updates" is greater than required
- Feb 11 00:00:37 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.105.3 -> 2.105.4 from vcs0
- Feb 11 00:00:37 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.105.3
- Feb 11 00:00:37 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.105.4
- Feb 11 00:00:37 [1199] vcsquorum cib: notice: cib:diff: -- <cib num_updates="3" />
- Feb 11 00:00:37 [1199] vcsquorum cib: notice: cib:diff: ++ <cib epoch="105" num_updates="4" admin_epoch="2" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Mon Feb 11 00:00:23 2013" have-quorum="1" dc-uuid="755053578" />
- Feb 11 00:00:37 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 9 (current: 9, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:00:37 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 12 (current: 9, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 11 00:00:37 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/49, version=2.105.5): OK (rc=0)
- Feb 11 00:00:37 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:00:37 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.105.3 -> 2.105.4 from vcs1 not applied to 2.105.6: current "num_updates" is greater than required
- Feb 11 00:00:37 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.105.4 -> 2.106.1 from vcs1 not applied to 2.105.6: current "num_updates" is greater than required
- Feb 11 00:00:37 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.1 -> 2.106.2 from vcs1 not applied to 2.105.6: current "epoch" is less than required
- Feb 11 00:00:37 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:00:37 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.2 -> 2.106.3 from vcs1 not applied to 2.105.7: current "epoch" is less than required
- Feb 11 00:00:56 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:00:56 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 13 (current: 9, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 11 00:00:56 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 17 (current: 9, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.105.7 -> 2.105.8 from vcs0
- Feb 11 00:00:56 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.105.7
- Feb 11 00:00:56 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.105.8
- Feb 11 00:00:56 [1199] vcsquorum cib: notice: cib:diff: -- <cib num_updates="7" />
- Feb 11 00:00:56 [1199] vcsquorum cib: notice: cib:diff: ++ <cib epoch="105" num_updates="8" admin_epoch="2" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Mon Feb 11 00:00:23 2013" have-quorum="1" dc-uuid="755053578" />
- Feb 11 00:00:56 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 10 (current: 10, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.3 -> 2.106.4 from vcs1 not applied to 2.105.8: current "epoch" is less than required
- Feb 11 00:00:56 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.4 -> 2.106.5 from vcs1 not applied to 2.105.8: current "epoch" is less than required
- Feb 11 00:00:56 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.5 -> 2.106.6 from vcs1 not applied to 2.105.8: current "epoch" is less than required
- Feb 11 00:00:56 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:00:57 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/51, version=2.105.9): OK (rc=0)
- Feb 11 00:00:57 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:00:57 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.6 -> 2.106.7 from vcs1 not applied to 2.105.11: current "epoch" is less than required
- Feb 11 00:01:06 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:01:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 14 (current: 10, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 11 00:01:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 18 (current: 10, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.105.11 -> 2.105.12 from vcs0
- Feb 11 00:01:06 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.105.11
- Feb 11 00:01:06 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.105.12
- Feb 11 00:01:06 [1199] vcsquorum cib: notice: cib:diff: -- <cib num_updates="11" />
- Feb 11 00:01:06 [1199] vcsquorum cib: notice: cib:diff: ++ <cib epoch="105" num_updates="12" admin_epoch="2" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Mon Feb 11 00:00:23 2013" have-quorum="1" dc-uuid="755053578" />
- Feb 11 00:01:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 11 (current: 11, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.7 -> 2.106.8 from vcs1 not applied to 2.105.12: current "epoch" is less than required
- Feb 11 00:01:06 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.8 -> 2.106.9 from vcs1 not applied to 2.105.12: current "epoch" is less than required
- Feb 11 00:01:06 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.9 -> 2.106.10 from vcs1 not applied to 2.105.12: current "epoch" is less than required
- Feb 11 00:01:06 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/53, version=2.105.13): OK (rc=0)
- Feb 11 00:01:06 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:01:06 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 00:01:26 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 19 (current: 11, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.10 -> 2.106.11 from vcs1 not applied to 2.105.14: current "epoch" is less than required
- Feb 11 00:02:23 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 00:02:23 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 00:02:23 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 00:02:23 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.11 -> 2.106.12 from vcs1 not applied to 2.105.14: current "epoch" is less than required
- Feb 11 00:02:23 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.12 -> 2.106.13 from vcs1 not applied to 2.105.14: current "epoch" is less than required
- Feb 11 00:02:23 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.13 -> 2.106.14 from vcs1 not applied to 2.105.14: current "epoch" is less than required
- Feb 11 00:02:23 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/55, version=2.105.15): OK (rc=0)
- Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/56, version=2.105.16): OK (rc=0)
- Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/58, version=2.105.17): OK (rc=0)
- Feb 11 00:02:23 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-5: Waiting on 3 outstanding join acks
- Feb 11 00:02:23 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/60, version=2.105.18): OK (rc=0)
- Feb 11 00:03:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.14 -> 2.106.15 from vcs0 not applied to 2.105.18: current "epoch" is less than required
- Feb 11 00:03:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:03:26 [1205] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
- Feb 11 00:03:26 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
- Feb 11 00:03:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.15 -> 2.106.16 from vcs0 not applied to 2.105.18: current "epoch" is less than required
- Feb 11 00:03:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:03:26 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 15 (current: 12, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 11 00:03:26 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 12 (current: 12, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:03:26 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 00:03:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.16 -> 2.106.17 from vcs0 not applied to 2.105.18: current "epoch" is less than required
- Feb 11 00:03:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:03:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.17 -> 2.106.18 from vcs0 not applied to 2.105.18: current "epoch" is less than required
- Feb 11 00:03:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:05:26 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 00:05:26 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 00:05:26 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/63, version=2.105.19): OK (rc=0)
- Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/64, version=2.105.20): OK (rc=0)
- Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.18 -> 2.106.19 from vcs1 not applied to 2.105.20: current "epoch" is less than required
- Feb 11 00:05:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.19 -> 2.106.20 from vcs1 not applied to 2.105.20: current "epoch" is less than required
- Feb 11 00:05:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.20 -> 2.106.21 from vcs1 not applied to 2.105.20: current "epoch" is less than required
- Feb 11 00:05:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.21 -> 2.106.22 from vcs1 not applied to 2.105.20: current "epoch" is less than required
- Feb 11 00:05:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/66, version=2.105.21): OK (rc=0)
- Feb 11 00:05:26 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-6: Waiting on 3 outstanding join acks
- Feb 11 00:05:26 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/68, version=2.105.22): OK (rc=0)
- Feb 11 00:06:47 [1205] vcsquorum crmd: info: handle_shutdown_request: Creating shutdown request for vcs0 (state=S_INTEGRATION)
- Feb 11 00:06:47 [1203] vcsquorum attrd: warning: get_corosync_uuid: Node vcs0 is not yet known by corosync
- Feb 11 00:06:47 [1203] vcsquorum attrd: warning: crm_get_peer: Cannot obtain a UUID for node 0/vcs0
- Feb 11 00:06:47 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.22 -> 2.106.23 from vcs1 not applied to 2.105.24: current "epoch" is less than required
- Feb 11 00:06:47 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:06:47 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.23 -> 2.106.24 from vcs1 not applied to 2.105.24: current "epoch" is less than required
- Feb 11 00:06:47 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:06:47 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:06:47 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:08:26 [1205] vcsquorum crmd: error: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
- Feb 11 00:08:26 [1205] vcsquorum crmd: info: crm_timer_popped: Welcomed: 2, Integrated: 1
- Feb 11 00:08:26 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 00:08:26 [1205] vcsquorum crmd: warning: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
- Feb 11 00:08:26 [1205] vcsquorum crmd: warning: do_state_transition: 2 cluster nodes failed to respond to the join offer.
- Feb 11 00:08:26 [1205] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs1.example.com 6
- Feb 11 00:08:26 [1205] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs0.example.com 6
- Feb 11 00:08:26 [1205] vcsquorum crmd: info: do_dc_join_finalize: join-6: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/71, version=2.105.24): OK (rc=0)
- Feb 11 00:08:26 [1205] vcsquorum crmd: info: do_dc_join_ack: join-6: Updating node state to member for vcsquorum
- Feb 11 00:08:26 [1205] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 34fe9008b3da5710da50d9618da3bfd5
- Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_replace: Replaced 2.105.24 with 2.106.24 from vcs1
- Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_replace_notify: Local-only Replace: 2.106.24 from vcs1
- Feb 11 00:08:26 [1205] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 2.106.24
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum" id="755053578" />
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: -- <node_state join="down" id="2868982794" />
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: -- <node_state uname="vcsquorum" join="member" id="755053578" />
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum.example.com" />
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: ++ <node_state id="2868982794" uname="vcs1" in_ccm="true" crmd="online" join="member" crm-debug-origin="do_cib_replaced" expected="member" />
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: ++ <node_state id="755053578" uname="vcsquorum.example.com" in_ccm="true" crmd="online" join="down" crm-debug-origin="do_cib_replaced" expected="member" />
- Feb 11 00:08:26 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 13 (current: 13, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.106.30
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.107.1
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
- Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
- Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/72, version=2.107.1): OK (rc=0)
- Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/73, version=2.107.2): OK (rc=0)
- Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/75, version=2.107.4): OK (rc=0)
- Feb 11 00:08:26 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:08:26 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 00:08:27 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Left[1.0] crmd.-1442761718
- Feb 11 00:08:27 [1205] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
- Feb 11 00:08:27 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [offline] (DC=true)
- Feb 11 00:08:27 [1205] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
- Feb 11 00:08:27 [1205] vcsquorum crmd: notice: peer_update_callback: Stonith/shutdown of vcs0 not matched
- Feb 11 00:08:27 [1205] vcsquorum crmd: info: crm_update_peer_expected: peer_update_callback: Node vcs0[-1442761718] - expected state is now down
- Feb 11 00:08:27 [1205] vcsquorum crmd: info: abort_transition_graph: peer_update_callback:211 - Triggered transition abort (complete=1) : Node failure
- Feb 11 00:08:27 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.0] crmd.755053578
- Feb 11 00:08:27 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.1] crmd.-1425984502
- Feb 11 00:08:27 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Left[1.0] stonith-ng.-1442761718
- Feb 11 00:08:27 [1201] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
- Feb 11 00:08:27 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.0] stonith-ng.755053578
- Feb 11 00:08:27 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.1] stonith-ng.-1425984502
- Feb 11 00:08:27 [1199] vcsquorum cib: info: pcmk_cpg_membership: Left[1.0] cib.-1442761718
- Feb 11 00:08:27 [1199] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
- Feb 11 00:08:27 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[1.0] cib.755053578
- Feb 11 00:08:27 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[1.1] cib.-1425984502
- Feb 11 00:10:26 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 00:10:26 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 00:10:26 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 00:10:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/78, version=2.107.10): OK (rc=0)
- Feb 11 00:10:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/79, version=2.107.11): OK (rc=0)
- Feb 11 00:10:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/81, version=2.107.12): OK (rc=0)
- Feb 11 00:10:26 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-7: Waiting on 3 outstanding join acks
- Feb 11 00:10:26 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:10:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/83, version=2.107.13): OK (rc=0)
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: pcmk_shutdown_worker: Shuting down Pacemaker
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping crmd: Sent -15 to process 1205
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Feb 11 00:11:40 [1205] vcsquorum crmd: notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_shutdown_req: Sending shutdown request to vcsquorum
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: handle_shutdown_request: Creating shutdown request for vcsquorum (state=S_INTEGRATION)
- Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: shutdown (1360563100)
- Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_perform_update: Sent update 16: shutdown=1360563100
- Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_ais_dispatch: Update relayed from vcs1
- Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: shutdown (1360563100)
- Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_perform_update: Sent update 18: shutdown=1360563100
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: handle_request: Shutting ourselves down (DC)
- Feb 11 00:11:40 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_STOP from route_message() received in state S_INTEGRATION
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_STOPPING [ input=I_STOP cause=C_HA_MESSAGE origin=route_message ]
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_dc_release: DC role released
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: pe_ipc_destroy: Connection to the Policy Engine released
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_te_control: Transitioner is now inactive
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_shutdown: Disconnecting STONITH...
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: lrmd_connection_destroy: connection destroyed
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: lrm_connection_destroy: LRM Connection disconnected
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_lrm_control: Disconnected from the LRM
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
- Feb 11 00:11:40 [1202] vcsquorum lrmd: info: lrmd_ipc_destroy: LRMD client disconnecting 0x2244e00 - name: crmd id: 7be9e918-747d-4ec8-9fdb-9537fb2d250f
- Feb 11 00:11:40 [1205] vcsquorum crmd: notice: terminate_cs_connection: Disconnecting from Corosync
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: crm_cluster_disconnect: Disconnected from corosync
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_ha_control: Disconnected from the cluster
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_cib_control: Disconnecting CIB
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: crmd_cib_connection_destroy: Connection to the CIB terminated...
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_exit: [crmd] stopped (0)
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_dc_release ]
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: free_mem: Dropping I_TERMINATE: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_stop ]
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
- Feb 11 00:11:40 [1205] vcsquorum crmd: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process crmd exited (pid=1205, rc=0)
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping pengine: Sent -15 to process 1204
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process pengine exited (pid=1204, rc=0)
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping attrd: Sent -15 to process 1203
- Feb 11 00:11:40 [1203] vcsquorum attrd: notice: main: Exiting...
- Feb 11 00:11:40 [1203] vcsquorum attrd: notice: main: Disconnecting client 0x1168af0, pid=1205...
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process attrd exited (pid=1203, rc=0)
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping lrmd: Sent -15 to process 1202
- Feb 11 00:11:40 [1202] vcsquorum lrmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Feb 11 00:11:40 [1202] vcsquorum lrmd: info: lrmd_shutdown: Terminating with 0 clients
- Feb 11 00:11:40 [1202] vcsquorum lrmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process lrmd exited (pid=1202, rc=0)
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping stonith-ng: Sent -15 to process 1201
- Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: stonith_shutdown: Terminating with 0 clients
- Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: main: Done
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process stonith-ng exited (pid=1201, rc=0)
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping cib: Sent -15 to process 1199
- Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_ipcs_send: Event 321 failed, size=1798, to=0x1942a80[1201], queue=1, rc=-32: <notify t="cib_notify" subt="cib_diff_notify" cib_op="cib_apply_diff" cib_rc="0" cib_object_type="diff"><cib_generation>
- Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: Waiting on 2 clients to disconnect (1)
- Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: Waiting on 1 clients to disconnect (0)
- Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: All clients disconnected (0)
- Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cib: initiate_exit: Disconnecting from cluster infrastructure
- Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
- Feb 11 00:11:40 [1199] vcsquorum cib: notice: terminate_cs_connection: Disconnecting from Corosync
- Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cs_connection: No Quorum connection
- Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_cluster_disconnect: Disconnected from corosync
- Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cib: initiate_exit: Exiting from mainloop...
- Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: Disconnected 3 clients
- Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: All clients disconnected (0)
- Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cib: initiate_exit: Disconnecting from cluster infrastructure
- Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
- Feb 11 00:11:40 [1199] vcsquorum cib: notice: terminate_cs_connection: Disconnecting from Corosync
- Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cs_connection: No CPG connection
- Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cs_connection: No Quorum connection
- Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_cluster_disconnect: Disconnected from corosync
- Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cib: initiate_exit: Exiting from mainloop...
- Feb 11 00:11:40 [1199] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 11 00:11:40 [1199] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 11 00:11:40 [1199] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process cib exited (pid=1199, rc=0)
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: pcmk_shutdown_worker: Shutdown complete
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: main: Exiting pacemakerd
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: read_config: User configured file based logging and explicitly disabled syslog.
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: notice: main: Starting Pacemaker 1.1.8 (Build: 1f8858c): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart systemd corosync-native snmp libesmtp
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: main: Maximum core file size is: 18446744073709551615
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: notice: update_node_processes: 0x1d1f370 Node 755053578 now known as vcsquorum, was:
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2045 for process cib
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2047 for process stonith-ng
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2048 for process lrmd
- Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
- Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: get_cluster_type: Cluster type is: 'corosync'
- Feb 11 00:26:49 [2047] vcsquorum stonith-ng: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2049 for process attrd
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2050 for process pengine
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2051 for process crmd
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: main: Starting mainloop
- Feb 11 00:26:49 [2043] vcsquorum pacemakerd: notice: update_node_processes: 0x1f237c0 Node 2868982794 now known as vcs1, was:
- Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Feb 11 00:26:49 [2045] vcsquorum cib: notice: main: Using new config location: /var/lib/pacemaker/cib
- Feb 11 00:26:49 [2048] vcsquorum lrmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
- Feb 11 00:26:49 [2048] vcsquorum lrmd: info: qb_ipcs_us_publish: server name: lrmd
- Feb 11 00:26:49 [2048] vcsquorum lrmd: info: main: Starting
- Feb 11 00:26:49 [2045] vcsquorum cib: info: get_cluster_type: Cluster type is: 'corosync'
- Feb 11 00:26:49 [2045] vcsquorum cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
- Feb 11 00:26:49 [2045] vcsquorum cib: warning: retrieveCib: Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
- Feb 11 00:26:49 [2045] vcsquorum cib: warning: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
- Feb 11 00:26:49 [2045] vcsquorum cib: warning: readCibXmlFile: Continuing with an empty configuration.
- Feb 11 00:26:49 [2045] vcsquorum cib: info: validate_with_relaxng: Creating RNG parser context
- Feb 11 00:26:49 [2049] vcsquorum attrd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 755053578
- Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
- Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: init_cs_connection_once: Connection to 'corosync': established
- Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 755053578 is now known as vcsquorum
- Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 755053578 has uuid 755053578
- Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
- Feb 11 00:26:49 [2051] vcsquorum crmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Feb 11 00:26:49 [2051] vcsquorum crmd: notice: main: CRM Git Version: 1f8858c
- Feb 11 00:26:49 [2051] vcsquorum crmd: info: get_cluster_type: Cluster type is: 'corosync'
- Feb 11 00:26:49 [2051] vcsquorum crmd: info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
- Feb 11 00:26:49 [2049] vcsquorum attrd: notice: main: Starting mainloop...
- Feb 11 00:26:49 [2045] vcsquorum cib: info: startCib: CIB Initialization completed successfully
- Feb 11 00:26:49 [2045] vcsquorum cib: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 755053578
- Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
- Feb 11 00:26:49 [2045] vcsquorum cib: info: init_cs_connection_once: Connection to 'corosync': established
- Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_get_peer: Node 755053578 is now known as vcsquorum
- Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_get_peer: Node 755053578 has uuid 755053578
- Feb 11 00:26:49 [2045] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_ro
- Feb 11 00:26:49 [2045] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_rw
- Feb 11 00:26:49 [2045] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_shm
- Feb 11 00:26:49 [2045] vcsquorum cib: info: cib_init: Starting cib mainloop
- Feb 11 00:26:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Joined[0.0] cib.755053578
- Feb 11 00:26:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[0.0] cib.755053578
- Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 2868982794
- Feb 11 00:26:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[0.1] cib.-1425984502
- Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1425984502] - corosync-cpg is now online
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: notice: setup_cib: Watching for stonith topology changes
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: qb_ipcs_us_publish: server name: stonith-ng
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: main: Starting stonith-ng mainloop
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[0.0] stonith-ng.755053578
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.0] stonith-ng.755053578
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 2868982794
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.1] stonith-ng.-1425984502
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1425984502] - corosync-cpg is now online
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 2868982794 is now known as vcs1
- Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 2868982794 has uuid 2868982794
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: do_cib_control: CIB connection established
- Feb 11 00:26:50 [2051] vcsquorum crmd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 755053578
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: init_cs_connection_once: Connection to 'corosync': established
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 755053578 is now known as vcsquorum
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcsquorum is now (null)
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 755053578 has uuid 755053578
- Feb 11 00:26:50 [2051] vcsquorum crmd: notice: init_quorum_connection: Quorum acquired
- Feb 11 00:26:50 [2045] vcsquorum cib: info: crm_get_peer: Node 2868982794 is now known as vcs1
- Feb 11 00:26:50 [2045] vcsquorum cib: info: crm_get_peer: Node 2868982794 has uuid 2868982794
- Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.4.4 -> 0.4.5 from vcs1 not applied to 0.0.0: current "epoch" is less than required
- Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_server_process_diff: Requesting re-sync from peer
- Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 8fbc8cf845733fc66fed998ebf09acc6
- Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_process_replace: Replaced 0.0.0 with 0.4.5 from vcs1
- Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.0.0 -> 0.4.5 from vcs1
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: do_ha_control: Connected to the cluster
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: lrmd_api_connect: Connecting to lrmd
- Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/3, version=0.4.6): OK (rc=0)
- Feb 11 00:26:50 [2048] vcsquorum lrmd: info: lrmd_ipc_accept: Accepting client connection: 0x23d1c00 pid=2051 for uid=997 gid=0
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: do_started: Delaying start, no membership data (0000000000100000)
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63852: quorum retained (3)
- Feb 11 00:26:50 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcsquorum[755053578] - state is now member
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcsquorum is now member (was (null))
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 2852205578
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Obtaining name for new node 2852205578
- Feb 11 00:26:50 [2051] vcsquorum crmd: notice: corosync_node_name: Inferred node name 'vcs0.example.com' for nodeid 2852205578 from DNS
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 2852205578 is now known as vcs0.example.com
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now (null)
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 2852205578 has uuid 2852205578
- Feb 11 00:26:50 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0.example.com[2852205578] - state is now member
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now member (was (null))
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 2868982794
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Obtaining name for new node 2868982794
- Feb 11 00:26:50 [2051] vcsquorum crmd: notice: corosync_node_name: Inferred node name 'vcs1.example.com' for nodeid 2868982794 from DNS
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 2868982794 is now known as vcs1.example.com
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now (null)
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 2868982794 has uuid 2868982794
- Feb 11 00:26:50 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs1.example.com[2868982794] - state is now member
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now member (was (null))
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: qb_ipcs_us_publish: server name: crmd
- Feb 11 00:26:50 [2051] vcsquorum crmd: notice: do_started: The local CRM is operational
- Feb 11 00:26:50 [2051] vcsquorum crmd: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
- Feb 11 00:26:51 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Joined[0.0] crmd.755053578
- Feb 11 00:26:51 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.0] crmd.755053578
- Feb 11 00:26:51 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.1] crmd.-1425984502
- Feb 11 00:26:51 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1.example.com[-1425984502] - corosync-cpg is now online
- Feb 11 00:26:51 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs1.example.com/peer now has status [online] (DC=<null>)
- Feb 11 00:27:11 [2051] vcsquorum crmd: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
- Feb 11 00:27:11 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
- Feb 11 00:27:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 00:29:11 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 00:29:11 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 00:29:11 [2051] vcsquorum crmd: info: do_te_control: Registering TE UUID: 6a6761a2-ec2f-492c-a18c-394db5ac6dfc
- Feb 11 00:29:11 [2051] vcsquorum crmd: info: set_graph_functions: Setting custom graph functions
- Feb 11 00:29:11 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_readwrite: We are now in R/W mode
- Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/7, version=0.4.8): OK (rc=0)
- Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/8, version=0.4.9): OK (rc=0)
- Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/10, version=0.4.10): OK (rc=0)
- Feb 11 00:29:11 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63852
- Feb 11 00:29:11 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
- Feb 11 00:29:11 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/12, version=0.4.11): OK (rc=0)
- Feb 11 00:29:11 [2051] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcsquorum[755053578] - expected state is now member
- Feb 11 00:29:50 [2045] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 2d4b3f9280b830e9f7ebac276e35345c
- Feb 11 00:29:50 [2045] vcsquorum cib: info: cib_process_replace: Replaced 0.4.11 with 0.4.11 from vcs1
- Feb 11 00:29:50 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update probe_complete=(null) failed: No such device or address
- Feb 11 00:32:11 [2051] vcsquorum crmd: error: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: crm_timer_popped: Welcomed: 1, Integrated: 1
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 00:32:11 [2051] vcsquorum crmd: warning: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
- Feb 11 00:32:11 [2051] vcsquorum crmd: warning: do_state_transition: 1 cluster nodes failed to respond to the join offer.
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs1.example.com 1
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-1: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/15, version=0.4.18): OK (rc=0)
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/transient_attributes
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: update_attrd: Connecting to attrd... 5 retries remaining
- Feb 11 00:32:11 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.4.18
- Feb 11 00:32:11 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.5.1
- Feb 11 00:32:11 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
- Feb 11 00:32:11 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
- Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/16, version=0.5.1): OK (rc=0)
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_dc_join_ack: join-1: Updating node state to member for vcsquorum
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/transient_attributes (origin=local/crmd/17, version=0.5.2): OK (rc=0)
- Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/18, version=0.5.3): OK (rc=0)
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:32:11 [2051] vcsquorum crmd: warning: do_state_transition: Only 1 of 2 cluster nodes are eligible to run resources - continue 1
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 11 00:32:11 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.6.1) : Non-status change
- Feb 11 00:32:11 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.5.4
- Feb 11 00:32:11 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.6.1
- Feb 11 00:32:11 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcs1" id="2868982794" />
- Feb 11 00:32:11 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2868982794" uname="vcs1.example.com" />
- Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/20, version=0.6.1): OK (rc=0)
- Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/22, version=0.6.3): OK (rc=0)
- Feb 11 00:32:11 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:32:11 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:32:11 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:32:11 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:32:11 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-0.bz2
- Feb 11 00:32:11 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1360564331-8) derived from /var/lib/pacemaker/pengine/pe-input-0.bz2
- Feb 11 00:32:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 4: probe_complete probe_complete on vcsquorum (local) - no waiting
- Feb 11 00:32:11 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 11 00:32:11 [2051] vcsquorum crmd: notice: run_graph: Transition 0 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
- Feb 11 00:32:11 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 00:32:11 [2049] vcsquorum attrd: notice: attrd_perform_update: Sent update 5: probe_complete=true
- Feb 11 00:32:11 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.4.18 -> 0.4.19 from vcs1 not applied to 0.6.4: current "epoch" is greater than required
- Feb 11 00:35:21 [2051] vcsquorum crmd: crit: crm_get_peer: Node vcs1.example.com and vcs1 share the same cluster node id '2868982794'!
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_get_peer: Node vcs1 now has id: 2868982794
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_get_peer: Node 2868982794 is now known as vcs1
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: peer_update_callback: vcs1 is now (null)
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_get_peer: Node 2868982794 has uuid 2868982794
- Feb 11 00:35:21 [2051] vcsquorum crmd: error: crmd_ais_dispatch: Recieving messages from a node we think is dead: vcs1[-1425984502]
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_update_peer_proc: crmd_ais_dispatch: Node vcs1[-1425984502] - corosync-cpg is now online
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs1/peer now has status [online] (DC=true)
- Feb 11 00:35:21 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2868982794
- Feb 11 00:35:21 [2051] vcsquorum crmd: notice: peer_update_callback: Stonith/shutdown of vcs1 not matched
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_update_peer_expected: peer_update_callback: Node vcs1[-1425984502] - expected state is now down
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: abort_transition_graph: peer_update_callback:211 - Triggered transition abort (complete=1) : Node failure
- Feb 11 00:35:21 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 00:35:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.4.19 -> 0.6.1 from vcs1 not applied to 0.6.4: current "epoch" is greater than required
- Feb 11 00:35:21 [2051] vcsquorum crmd: warning: do_state_transition: Only 1 of 2 cluster nodes are eligible to run resources - continue 1
- Feb 11 00:35:21 [2051] vcsquorum crmd: notice: do_election_count_vote: Election 3 (current: 2, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 11 00:35:21 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:35:21 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:35:21 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:35:21 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:35:21 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-1.bz2
- Feb 11 00:35:21 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:35:21 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1360564521-10) derived from /var/lib/pacemaker/pengine/pe-input-1.bz2
- Feb 11 00:35:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.6.1 -> 0.7.1 from vcs1 not applied to 0.6.5: current "num_updates" is greater than required
- Feb 11 00:35:21 [2051] vcsquorum crmd: notice: run_graph: Transition 1 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
- Feb 11 00:35:21 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 00:35:21 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.1 -> 0.7.2 from vcs1 not applied to 0.6.5: current "epoch" is less than required
- Feb 11 00:35:21 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:35:21 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.2 -> 0.7.3 from vcs1 not applied to 0.6.5: current "epoch" is less than required
- Feb 11 00:35:21 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.3 -> 0.7.4 from vcs1 not applied to 0.6.5: current "epoch" is less than required
- Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:36:06 [2051] vcsquorum crmd: notice: do_election_count_vote: Election 4 (current: 2, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.4 -> 0.7.5 from vcs1 not applied to 0.6.5: current "epoch" is less than required
- Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.5 -> 0.7.6 from vcs1 not applied to 0.6.5: current "epoch" is less than required
- Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.6 -> 0.7.7 from vcs1 not applied to 0.6.5: current "epoch" is less than required
- Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.7 -> 0.7.8 from vcs1 not applied to 0.6.5: current "epoch" is less than required
- Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:36:23 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.8.1) : Non-status change
- Feb 11 00:36:23 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 00:36:23 [2051] vcsquorum crmd: warning: do_state_transition: Only 1 of 2 cluster nodes are eligible to run resources - continue 1
- Feb 11 00:36:23 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.6.5 -> 0.8.1 from <null>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.6.5
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.8.1
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <cluster_property_set id="cib-bootstrap-options" >
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.8-1f8858c" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </cluster_property_set>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node id="2868982794" uname="vcs1.example.com" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node id="755053578" uname="vcsquorum" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node id="2852205578" uname="vcs0.example.com" />
- Feb 11 00:36:23 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="2868982794" uname="vcs1" in_ccm="true" crmd="online" crm-debug-origin="peer_update_callback" join="down" expected="member" >
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <transient_attributes id="2868982794" >
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <instance_attributes id="status-2868982794" >
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-2868982794-probe_complete" name="probe_complete" value="true" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </instance_attributes>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </transient_attributes>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <lrm id="2868982794" >
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <lrm_resources />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </lrm>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </node_state>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="755053578" uname="vcsquorum" in_ccm="true" crmd="online" join="member" crm-debug-origin="do_state_transition" expected="member" >
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <lrm id="755053578" >
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <lrm_resources />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </lrm>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <transient_attributes id="755053578" >
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <instance_attributes id="status-755053578" >
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-755053578-probe_complete" name="probe_complete" value="true" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </instance_attributes>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </transient_attributes>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </node_state>
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="2852205578" uname="vcs0.example.com" in_ccm="true" crmd="offline" join="down" crm-debug-origin="do_state_transition" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Mon Feb 11 00:32:11 2013" have-quorum="1" dc-uuid="755053578" />
- Feb 11 00:36:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_erase for section 'all' (origin=local/cibadmin/2, version=0.8.1): OK (rc=0)
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.9.1
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="8" num_updates="1" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2868982794" uname="vcs1" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
- Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2852205578" uname="vcs0.example.com" />
- Feb 11 00:36:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/28, version=0.9.1): OK (rc=0)
- Feb 11 00:36:23 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:36:23 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.8 -> 0.7.9 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
- Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.9 -> 0.7.10 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
- Feb 11 00:36:41 [2051] vcsquorum crmd: notice: do_election_count_vote: Election 5 (current: 3, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
- Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.10 -> 0.7.11 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
- Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.11 -> 0.7.12 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
- Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.12 -> 0.7.13 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
- Feb 11 00:37:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.13 -> 0.7.14 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
- Feb 11 00:37:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.14 -> 0.7.15 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
- Feb 11 00:37:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.15 -> 0.8.1 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
- Feb 11 00:37:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.8.1 -> 0.9.1 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
- Feb 11 00:38:23 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 00:38:23 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 00:38:23 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 00:38:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/30, version=0.9.4): OK (rc=0)
- Feb 11 00:38:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/31, version=0.9.5): OK (rc=0)
- Feb 11 00:38:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.9.5
- Feb 11 00:38:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.10.1
- Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="9" num_updates="5" />
- Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: ++ <cluster_property_set id="cib-bootstrap-options" >
- Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.8-1f8858c" />
- Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: ++ </cluster_property_set>
- Feb 11 00:38:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/34, version=0.10.1): OK (rc=0)
- Feb 11 00:38:23 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
- Feb 11 00:38:23 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:38:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.11.1
- Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="10" num_updates="1" />
- Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync" />
- Feb 11 00:38:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/37, version=0.11.1): OK (rc=0)
- Feb 11 00:39:56 [1099] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1425984502
- Feb 11 00:39:56 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63856: quorum retained (2)
- Feb 11 00:39:56 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs1[2868982794] - state is now member
- Feb 11 00:39:56 [2051] vcsquorum crmd: info: peer_update_callback: vcs1 is now member (was (null))
- Feb 11 00:39:56 [2051] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1425984502/vcs1.example.com was not seen in the previous transition
- Feb 11 00:39:56 [2051] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs1.example.com[2868982794] - state is now lost
- Feb 11 00:39:56 [2051] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now lost (was member)
- Feb 11 00:39:56 [2051] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1442761718/vcs0.example.com was not seen in the previous transition
- Feb 11 00:39:56 [2051] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs0.example.com[2852205578] - state is now lost
- Feb 11 00:39:56 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now lost (was member)
- Feb 11 00:39:56 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:39:56 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/42, version=0.11.2): OK (rc=0)
- Feb 11 00:39:56 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:39:56 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63856) was formed.
- Feb 11 00:39:56 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63856
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-4: Waiting on 2 outstanding join acks
- Feb 11 00:39:57 [2051] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs1 (op=noop)
- Feb 11 00:39:57 [2051] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs1 (op=join_offer)
- Feb 11 00:39:57 [2051] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs1 (op=join_offer)
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
- Feb 11 00:39:57 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.9.1 -> 0.9.2 from vcs1 not applied to 0.11.3: current "epoch" is greater than required
- Feb 11 00:39:57 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
- Feb 11 00:39:57 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
- Feb 11 00:39:57 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.9.2 -> 0.9.3 from vcs1 not applied to 0.11.3: current "epoch" is greater than required
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_election_count_vote: Election 6 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_election_count_vote: Election 7 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_election_count_vote: Election 8 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 00:39:57 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/44, version=0.11.4): OK (rc=0)
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/45, version=0.11.5): OK (rc=0)
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs1/vcs1/(null), version=0.11.5): OK (rc=0)
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/47, version=0.11.6): OK (rc=0)
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-5: Waiting on 2 outstanding join acks
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/49, version=0.11.7): OK (rc=0)
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcs1[-1425984502] - expected state is now member
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-5: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/52, version=0.11.8): OK (rc=0)
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/53, version=0.11.9): OK (rc=0)
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/54, version=0.11.10): OK (rc=0)
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_ack: join-5: Updating node state to member for vcsquorum
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_ack: join-5: Updating node state to member for vcs1
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/55, version=0.11.12): OK (rc=0)
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/57, version=0.11.14): OK (rc=0)
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 11 00:39:57 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 11 00:39:57 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/59, version=0.11.16): OK (rc=0)
- Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/61, version=0.11.19): OK (rc=0)
- Feb 11 00:39:57 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:39:57 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:39:57 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:39:57 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:39:57 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-2.bz2
- Feb 11 00:39:57 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1360564797-34) derived from /var/lib/pacemaker/pengine/pe-input-2.bz2
- Feb 11 00:39:57 [2051] vcsquorum crmd: notice: run_graph: Transition 2 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-2.bz2): Complete
- Feb 11 00:39:57 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 00:40:09 [1099] vcsquorum corosync notice [QUORUM] Members[3]: 755053578 -1442761718 -1425984502
- Feb 11 00:40:09 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63864: quorum retained (3)
- Feb 11 00:40:09 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0.example.com[2852205578] - state is now member
- Feb 11 00:40:09 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now member (was lost)
- Feb 11 00:40:09 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:40:09 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63864) was formed.
- Feb 11 00:40:09 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/63, version=0.11.22): OK (rc=0)
- Feb 11 00:40:09 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:40:09 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 11 00:40:14 [2043] vcsquorum pacemakerd: notice: update_node_processes: 0x1d1fd20 Node 2852205578 now known as vcs0, was:
- Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[1.0] stonith-ng.-1442761718
- Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.0] stonith-ng.755053578
- Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 2852205578
- Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.1] stonith-ng.-1442761718
- Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1442761718] - corosync-cpg is now online
- Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.2] stonith-ng.-1425984502
- Feb 11 00:40:14 [2045] vcsquorum cib: info: pcmk_cpg_membership: Joined[1.0] cib.-1442761718
- Feb 11 00:40:14 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[1.0] cib.755053578
- Feb 11 00:40:14 [2045] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 2852205578
- Feb 11 00:40:14 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[1.1] cib.-1442761718
- Feb 11 00:40:14 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1442761718] - corosync-cpg is now online
- Feb 11 00:40:14 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[1.2] cib.-1425984502
- Feb 11 00:40:15 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 2852205578 is now known as vcs0
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Joined[1.0] crmd.-1442761718
- Feb 11 00:40:15 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 2852205578 has uuid 2852205578
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.0] crmd.755053578
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.1] crmd.-1442761718
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0.example.com[-1442761718] - corosync-cpg is now online
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs0.example.com/peer now has status [online] (DC=true)
- Feb 11 00:40:15 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.2] crmd.-1425984502
- Feb 11 00:40:15 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63864
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-6: Waiting on 3 outstanding join acks
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:40:15 [2045] vcsquorum cib: info: crm_get_peer: Node 2852205578 is now known as vcs0
- Feb 11 00:40:15 [2045] vcsquorum cib: info: crm_get_peer: Node 2852205578 has uuid 2852205578
- Feb 11 00:40:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs0/vcs0/(null), version=0.11.24): OK (rc=0)
- Feb 11 00:40:15 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.0.0 -> 0.0.1 from vcs0 not applied to 0.11.24: current "epoch" is greater than required
- Feb 11 00:40:15 [2051] vcsquorum crmd: crit: crm_get_peer: Node vcs0.example.com and vcs0 share the same cluster node id '2852205578'!
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_get_peer: Node vcs0 now has id: 2852205578
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_get_peer: Node 2852205578 is now known as vcs0
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: peer_update_callback: vcs0 is now (null)
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_get_peer: Node 2852205578 has uuid 2852205578
- Feb 11 00:40:15 [2051] vcsquorum crmd: error: crmd_ais_dispatch: Recieving messages from a node we think is dead: vcs0[-1442761718]
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_update_peer_proc: crmd_ais_dispatch: Node vcs0[-1442761718] - corosync-cpg is now online
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [online] (DC=true)
- Feb 11 00:40:15 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
- Feb 11 00:40:15 [2051] vcsquorum crmd: notice: peer_update_callback: Stonith/shutdown of vcs0 not matched
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_update_peer_expected: peer_update_callback: Node vcs0[-1442761718] - expected state is now down
- Feb 11 00:40:15 [2051] vcsquorum crmd: info: abort_transition_graph: peer_update_callback:211 - Triggered transition abort (complete=1) : Node failure
- Feb 11 00:40:16 [2051] vcsquorum crmd: info: do_dc_join_offer_all: A new node joined the cluster
- Feb 11 00:40:16 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-7: Waiting on 3 outstanding join acks
- Feb 11 00:40:16 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:40:36 [2051] vcsquorum crmd: notice: do_election_count_vote: Election 2 (current: 11, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
- Feb 11 00:43:15 [2051] vcsquorum crmd: error: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: crm_timer_popped: Welcomed: 1, Integrated: 2
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 00:43:15 [2051] vcsquorum crmd: warning: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
- Feb 11 00:43:15 [2051] vcsquorum crmd: warning: do_state_transition: 1 cluster nodes failed to respond to the join offer.
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs0.example.com 7
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-7: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/69, version=0.11.29): OK (rc=0)
- Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/70, version=0.11.30): OK (rc=0)
- Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/71, version=0.11.31): OK (rc=0)
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_dc_join_ack: join-7: Updating node state to member for vcsquorum
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_dc_join_ack: join-7: Updating node state to member for vcs1
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
- Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/72, version=0.11.32): OK (rc=0)
- Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/74, version=0.11.34): OK (rc=0)
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:43:15 [2051] vcsquorum crmd: warning: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 11 00:43:15 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.11.32 -> 0.11.33 from vcs0 not applied to 0.11.35: current "num_updates" is greater than required
- Feb 11 00:43:15 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.11.36
- Feb 11 00:43:15 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.12.1
- Feb 11 00:43:15 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcs0.example.com" id="2852205578" />
- Feb 11 00:43:15 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2852205578" uname="vcs0" />
- Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/76, version=0.12.1): OK (rc=0)
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.12.1) : Non-status change
- Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/78, version=0.12.3): OK (rc=0)
- Feb 11 00:43:15 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 11 00:43:15 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 11 00:43:15 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:43:15 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:43:15 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:43:15 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:43:15 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-3.bz2
- Feb 11 00:43:15 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1360564995-47) derived from /var/lib/pacemaker/pengine/pe-input-3.bz2
- Feb 11 00:43:15 [2051] vcsquorum crmd: notice: run_graph: Transition 3 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-3.bz2): Complete
- Feb 11 00:43:15 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 00:43:15 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.11.33 -> 0.11.34 from vcs0 not applied to 0.12.5: current "epoch" is greater than required
- Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs0: ccde8dddbecc04ef5e0f9d36a1a27e9c
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_replace: Replacement 0.11.34 from vcs0 not applied to 0.12.6: current epoch is greater than the replacement
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_diff_notify: Update (client: crmd, call:15): 0.12.6 -> 0.11.34 (Update was older than existing configuration)
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.11.34 -> 0.12.1 from vcs0 not applied to 0.12.6: current "epoch" is greater than required
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.12.1 -> 0.12.2 from vcs0 not applied to 0.12.6: current "num_updates" is greater than required
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.12.2 -> 0.12.3 from vcs0 not applied to 0.12.6: current "num_updates" is greater than required
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.12.3 -> 0.12.4 from vcs0 not applied to 0.12.6: current "num_updates" is greater than required
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.12.4 -> 0.13.1 from vcs0 not applied to 0.12.6: current "num_updates" is greater than required
- Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.1 -> 0.13.2 from vcs0 not applied to 0.12.6: current "epoch" is less than required
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.2 -> 0.13.3 from vcs0 not applied to 0.12.6: current "epoch" is less than required
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs1/vcs1/(null), version=0.12.6): OK (rc=0)
- Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.3 -> 0.13.4 from vcs0 not applied to 0.12.7: current "epoch" is less than required
- Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:45:37 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.4 -> 0.13.5 from vcs0 not applied to 0.12.7: current "epoch" is less than required
- Feb 11 00:45:37 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:45:37 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.5 -> 0.13.6 from vcs0 not applied to 0.12.9: current "epoch" is less than required
- Feb 11 00:45:37 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:45:45 [2051] vcsquorum crmd: info: handle_shutdown_request: Creating shutdown request for vcs0 (state=S_IDLE)
- Feb 11 00:45:45 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:45:45 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.6 -> 0.13.7 from vcs0 not applied to 0.12.9: current "epoch" is less than required
- Feb 11 00:45:45 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:45:45 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:45:45 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.7 -> 0.13.8 from vcs0 not applied to 0.12.9: current "epoch" is less than required
- Feb 11 00:45:45 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:45:45 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Left[2.0] crmd.-1442761718
- Feb 11 00:45:45 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
- Feb 11 00:45:45 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [offline] (DC=true)
- Feb 11 00:45:45 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
- Feb 11 00:45:45 [2051] vcsquorum crmd: notice: peer_update_callback: Stonith/shutdown of vcs0 not matched
- Feb 11 00:45:45 [2051] vcsquorum crmd: info: abort_transition_graph: peer_update_callback:211 - Triggered transition abort (complete=1) : Node failure
- Feb 11 00:45:45 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[2.0] crmd.755053578
- Feb 11 00:45:45 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[2.1] crmd.-1425984502
- Feb 11 00:45:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 00:45:45 [2051] vcsquorum crmd: warning: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
- Feb 11 00:45:45 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:45:45 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:45:45 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:45:45 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:45:45 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-4.bz2
- Feb 11 00:45:45 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:45:45 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:45:45 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1360565145-48) derived from /var/lib/pacemaker/pengine/pe-input-4.bz2
- Feb 11 00:45:45 [2051] vcsquorum crmd: notice: run_graph: Transition 4 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-4.bz2): Complete
- Feb 11 00:45:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 00:45:46 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Left[2.0] stonith-ng.-1442761718
- Feb 11 00:45:46 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
- Feb 11 00:45:46 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[2.0] stonith-ng.755053578
- Feb 11 00:45:46 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[2.1] stonith-ng.-1425984502
- Feb 11 00:45:46 [2045] vcsquorum cib: info: pcmk_cpg_membership: Left[2.0] cib.-1442761718
- Feb 11 00:45:46 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
- Feb 11 00:45:46 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[2.0] cib.755053578
- Feb 11 00:45:46 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[2.1] cib.-1425984502
- Feb 11 00:46:05 [1099] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1425984502
- Feb 11 00:46:05 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63868: quorum retained (2)
- Feb 11 00:46:05 [2051] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1442761718/vcs0.example.com was not seen in the previous transition
- Feb 11 00:46:05 [2051] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs0.example.com[2852205578] - state is now lost
- Feb 11 00:46:05 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now lost (was member)
- Feb 11 00:46:05 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:46:05 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/83, version=0.12.11): OK (rc=0)
- Feb 11 00:46:05 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:46:05 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63868) was formed.
- Feb 11 00:46:05 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: handle_shutdown_request: Creating shutdown request for vcs1 (state=S_IDLE)
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-2868982794-shutdown, name=shutdown, value=1360565809, magic=NA, cib=0.12.13) : Transient attribute: update
- Feb 11 00:56:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 00:56:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:56:49 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:56:49 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:56:49 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:56:49 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:56:49 [2050] vcsquorum pengine: notice: stage6: Scheduling Node vcs1 for shutdown
- Feb 11 00:56:49 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 5: /var/lib/pacemaker/pengine/pe-input-5.bz2
- Feb 11 00:56:49 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1360565809-50) derived from /var/lib/pacemaker/pengine/pe-input-5.bz2
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: te_crm_command: Executing crm-event (7): do_shutdown on vcs1
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: crm_update_peer_expected: te_crm_command: Node vcs1[-1425984502] - expected state is now down
- Feb 11 00:56:49 [2051] vcsquorum crmd: notice: run_graph: Transition 5 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-5.bz2): Complete
- Feb 11 00:56:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Left[3.0] crmd.-1425984502
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now offline
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs1/peer now has status [offline] (DC=true)
- Feb 11 00:56:49 [2051] vcsquorum crmd: notice: peer_update_callback: do_shutdown of vcs1 (op 7) is complete
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[3.0] crmd.755053578
- Feb 11 00:56:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63868
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-8: Waiting on 1 outstanding join acks
- Feb 11 00:56:49 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Left[3.0] stonith-ng.-1425984502
- Feb 11 00:56:49 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now offline
- Feb 11 00:56:49 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[3.0] stonith-ng.755053578
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:56:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Left[3.0] cib.-1425984502
- Feb 11 00:56:49 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now offline
- Feb 11 00:56:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[3.0] cib.755053578
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-8: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/88, version=0.12.14): OK (rc=0)
- Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/89, version=0.12.15): OK (rc=0)
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_dc_join_ack: join-8: Updating node state to member for vcsquorum
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/90, version=0.12.16): OK (rc=0)
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:56:49 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 11 00:56:49 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 11 00:56:49 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/92, version=0.12.18): OK (rc=0)
- Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/94, version=0.12.20): OK (rc=0)
- Feb 11 00:56:50 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:56:50 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:56:50 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:56:50 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:56:50 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 6: /var/lib/pacemaker/pengine/pe-input-6.bz2
- Feb 11 00:56:50 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:56:50 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:56:50 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1360565810-56) derived from /var/lib/pacemaker/pengine/pe-input-6.bz2
- Feb 11 00:56:50 [2051] vcsquorum crmd: notice: run_graph: Transition 6 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-6.bz2): Complete
- Feb 11 00:56:50 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 00:56:56 [1099] vcsquorum corosync notice [QUORUM] This node is within the non-primary component and will NOT provide any services.
- Feb 11 00:56:56 [1099] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
- Feb 11 00:56:56 [2051] vcsquorum crmd: notice: pcmk_quorum_notification: Membership 63872: quorum lost (1)
- Feb 11 00:56:56 [2051] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1425984502/vcs1 was not seen in the previous transition
- Feb 11 00:56:56 [2051] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs1[2868982794] - state is now lost
- Feb 11 00:56:56 [2051] vcsquorum crmd: info: peer_update_callback: vcs1 is now lost (was member)
- Feb 11 00:56:56 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:56:56 [1099] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
- Feb 11 00:56:56 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/97, version=0.12.22): OK (rc=0)
- Feb 11 00:56:56 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63872) was formed.
- Feb 11 00:56:56 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:56:56 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 11 00:56:57 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63872: quorum still lost (1)
- Feb 11 00:56:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/98, version=0.12.23): OK (rc=0)
- Feb 11 00:56:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/100, version=0.12.25): OK (rc=0)
- Feb 11 00:57:02 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63876: quorum still lost (2)
- Feb 11 00:57:02 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs1[2868982794] - state is now member
- Feb 11 00:57:02 [2051] vcsquorum crmd: info: peer_update_callback: vcs1 is now member (was lost)
- Feb 11 00:57:02 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:57:02 [1099] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1425984502
- Feb 11 00:57:02 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/102, version=0.12.27): OK (rc=0)
- Feb 11 00:57:02 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:57:02 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63876) was formed.
- Feb 11 00:57:02 [1099] vcsquorum corosync notice [QUORUM] This node is within the primary component and will provide service.
- Feb 11 00:57:02 [1099] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1425984502
- Feb 11 00:57:02 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 11 00:57:03 [2051] vcsquorum crmd: notice: pcmk_quorum_notification: Membership 63876: quorum acquired (2)
- Feb 11 00:57:03 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/104, version=0.12.29): OK (rc=0)
- Feb 11 00:57:03 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/105, version=0.12.30): OK (rc=0)
- Feb 11 00:57:10 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[4.0] stonith-ng.-1425984502
- Feb 11 00:57:10 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[4.0] stonith-ng.755053578
- Feb 11 00:57:10 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[4.1] stonith-ng.-1425984502
- Feb 11 00:57:10 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now online
- Feb 11 00:57:10 [2045] vcsquorum cib: info: pcmk_cpg_membership: Joined[4.0] cib.-1425984502
- Feb 11 00:57:10 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[4.0] cib.755053578
- Feb 11 00:57:10 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[4.1] cib.-1425984502
- Feb 11 00:57:10 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now online
- Feb 11 00:57:11 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Joined[4.0] crmd.-1425984502
- Feb 11 00:57:11 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[4.0] crmd.755053578
- Feb 11 00:57:11 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[4.1] crmd.-1425984502
- Feb 11 00:57:11 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now online
- Feb 11 00:57:11 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs1/peer now has status [online] (DC=true)
- Feb 11 00:57:11 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2868982794
- Feb 11 00:57:11 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
- Feb 11 00:57:11 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
- Feb 11 00:57:11 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63876
- Feb 11 00:57:11 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-9: Waiting on 2 outstanding join acks
- Feb 11 00:57:11 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:57:11 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.0 -> 0.13.1 from vcs1 not applied to 0.12.32: current "epoch" is less than required
- Feb 11 00:57:11 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
- Feb 11 00:57:12 [2051] vcsquorum crmd: info: do_dc_join_offer_all: A new node joined the cluster
- Feb 11 00:57:12 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-10: Waiting on 2 outstanding join acks
- Feb 11 00:57:12 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:57:13 [2051] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcs1[-1425984502] - expected state is now member
- Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-10: Syncing the CIB from vcs1 to the rest of the cluster
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 40b422e480d7c38434ee5f3cca92f438
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_replace: Replaced 0.12.32 with 0.13.1 from vcs1
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.12.32 -> 0.13.1 from vcs1
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.12.32
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.13.1
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcs1" id="2868982794" />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum" id="755053578" />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="2868982794" uname="vcs1" in_ccm="true" crmd="online" join="down" crm-debug-origin="peer_update_callback" expected="down" >
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <transient_attributes id="2868982794" >
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <instance_attributes id="status-2868982794" >
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-2868982794-probe_complete" name="probe_complete" value="true" />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-2868982794-shutdown" name="shutdown" value="1360565809" />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </instance_attributes>
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </transient_attributes>
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <lrm id="2868982794" >
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <lrm_resources />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </lrm>
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </node_state>
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="755053578" uname="vcsquorum" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="post_cache_update" >
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <transient_attributes id="755053578" >
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <instance_attributes id="status-755053578" >
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-755053578-probe_complete" name="probe_complete" value="true" />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </instance_attributes>
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </transient_attributes>
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <lrm id="755053578" >
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <lrm_resources />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </lrm>
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </node_state>
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="2852205578" uname="vcs0" in_ccm="false" crmd="offline" join="down" crm-debug-origin="post_cache_update" expected="down" />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2868982794" uname="vcs1.example.com" />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum.example.com" />
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=vcs1/vcs1/110, version=0.13.1): OK (rc=0)
- Feb 11 00:57:13 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_dc_join_ack: join-10: Updating node state to member for vcsquorum
- Feb 11 00:57:13 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_dc_join_ack: join-10: Updating node state to member for vcs1
- Feb 11 00:57:13 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.14.1
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/111, version=0.14.1): OK (rc=0)
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.15.1
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcs1.example.com" id="2868982794" />
- Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2868982794" uname="vcs1" />
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/112, version=0.15.1): OK (rc=0)
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/transient_attributes (origin=vcs1/crmd/8, version=0.15.2): OK (rc=0)
- Feb 11 00:57:13 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs1/vcs1/(null), version=0.15.2): OK (rc=0)
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/113, version=0.15.3): OK (rc=0)
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/115, version=0.15.5): OK (rc=0)
- Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:57:13 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/117, version=0.15.7): OK (rc=0)
- Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/119, version=0.15.9): OK (rc=0)
- Feb 11 00:57:13 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 11 00:57:13 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 11 00:57:14 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:57:14 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:57:14 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:57:14 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:57:14 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 7: /var/lib/pacemaker/pengine/pe-input-7.bz2
- Feb 11 00:57:14 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:57:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:57:14 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 7 (ref=pe_calc-dc-1360565834-70) derived from /var/lib/pacemaker/pengine/pe-input-7.bz2
- Feb 11 00:57:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: probe_complete probe_complete on vcs1 - no waiting
- Feb 11 00:57:14 [2051] vcsquorum crmd: notice: run_graph: Transition 7 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-7.bz2): Complete
- Feb 11 00:57:14 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 00:58:17 [1099] vcsquorum corosync notice [QUORUM] Members[3]: 755053578 -1442761718 -1425984502
- Feb 11 00:58:17 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63880: quorum retained (3)
- Feb 11 00:58:17 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0[2852205578] - state is now member
- Feb 11 00:58:17 [2051] vcsquorum crmd: info: peer_update_callback: vcs0 is now member (was (null))
- Feb 11 00:58:17 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:58:17 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/122, version=0.15.14): OK (rc=0)
- Feb 11 00:58:17 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
- Feb 11 00:58:17 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63880) was formed.
- Feb 11 00:58:17 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[5.0] stonith-ng.-1442761718
- Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[5.0] stonith-ng.755053578
- Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[5.1] stonith-ng.-1442761718
- Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
- Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[5.2] stonith-ng.-1425984502
- Feb 11 00:58:28 [2045] vcsquorum cib: info: pcmk_cpg_membership: Joined[5.0] cib.-1442761718
- Feb 11 00:58:28 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[5.0] cib.755053578
- Feb 11 00:58:28 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[5.1] cib.-1442761718
- Feb 11 00:58:28 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
- Feb 11 00:58:28 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[5.2] cib.-1425984502
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Joined[5.0] crmd.-1442761718
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[5.0] crmd.755053578
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[5.1] crmd.-1442761718
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [online] (DC=true)
- Feb 11 00:58:29 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[5.2] crmd.-1425984502
- Feb 11 00:58:29 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63880
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-11: Waiting on 3 outstanding join acks
- Feb 11 00:58:29 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:58:29 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs0/vcs0/(null), version=0.15.16): OK (rc=0)
- Feb 11 00:58:29 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.0.0 -> 0.0.1 from vcs0 not applied to 0.15.16: current "epoch" is greater than required
- Feb 11 00:58:30 [2051] vcsquorum crmd: info: do_dc_join_offer_all: A new node joined the cluster
- Feb 11 00:58:30 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-12: Waiting on 3 outstanding join acks
- Feb 11 00:58:30 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcs0[-1442761718] - expected state is now member
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-12: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/127, version=0.15.16): OK (rc=0)
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_dc_join_ack: join-12: Updating node state to member for vcsquorum
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_dc_join_ack: join-12: Updating node state to member for vcs0
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_dc_join_ack: join-12: Updating node state to member for vcs1
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/128, version=0.15.17): OK (rc=0)
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/129, version=0.15.18): OK (rc=0)
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/130, version=0.15.19): OK (rc=0)
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/transient_attributes (origin=vcs0/crmd/9, version=0.15.20): OK (rc=0)
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/131, version=0.15.22): OK (rc=0)
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/133, version=0.15.24): OK (rc=0)
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/135, version=0.15.26): OK (rc=0)
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 11 00:58:31 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 11 00:58:31 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/137, version=0.15.28): OK (rc=0)
- Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/139, version=0.15.31): OK (rc=0)
- Feb 11 00:58:31 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:58:31 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:58:31 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:58:31 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:58:31 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 8: /var/lib/pacemaker/pengine/pe-input-8.bz2
- Feb 11 00:58:31 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 8 (ref=pe_calc-dc-1360565911-85) derived from /var/lib/pacemaker/pengine/pe-input-8.bz2
- Feb 11 00:58:31 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: probe_complete probe_complete on vcs0 - no waiting
- Feb 11 00:58:31 [2051] vcsquorum crmd: notice: run_graph: Transition 8 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-8.bz2): Complete
- Feb 11 00:58:31 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 00:59:33 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.16.1) : Non-status change
- Feb 11 00:59:33 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 00:59:33 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.15.36
- Feb 11 00:59:33 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.16.1
- Feb 11 00:59:33 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="15" num_updates="36" />
- Feb 11 00:59:33 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="nodes-755053578" >
- Feb 11 00:59:33 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="nodes-755053578-standby" name="standby" value="on" />
- Feb 11 00:59:33 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 00:59:33 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crm_attribute/5, version=0.16.1): OK (rc=0)
- Feb 11 00:59:33 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- Feb 11 00:59:33 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- Feb 11 00:59:33 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Feb 11 00:59:33 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
- Feb 11 00:59:33 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 9: /var/lib/pacemaker/pengine/pe-input-9.bz2
- Feb 11 00:59:33 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Feb 11 00:59:33 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 00:59:33 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 9 (ref=pe_calc-dc-1360565973-87) derived from /var/lib/pacemaker/pengine/pe-input-9.bz2
- Feb 11 00:59:33 [2051] vcsquorum crmd: notice: run_graph: Transition 9 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-9.bz2): Complete
- Feb 11 00:59:33 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:00:45 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.17.1) : Non-status change
- Feb 11 01:00:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 01:00:45 [2045] vcsquorum cib: info: cib_replace_notify: Local-only Replace: 0.17.1 from <null>
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.17.1
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="16" num_updates="1" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="freeze" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="stonith" id="stonithvcs0" type="external/webpowerswitch" >
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="stonithvcs0-instance_attributes" >
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-wps_ipaddr" name="wps_ipaddr" value="192.168.7.100" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-wps_port" name="wps_port" value="2" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-wps_username" name="wps_username" value="xxx" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-wps_password" name="wps_password" value="xxx" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-hostname_to_stonith" name="hostname_to_stonith" value="vcs0" />
- Feb 11 01:00:45 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="stonith" id="stonithvcs1" type="external/webpowerswitch" >
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="stonithvcs1-instance_attributes" >
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-wps_ipaddr" name="wps_ipaddr" value="192.168.7.200" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-wps_port" name="wps_port" value="2" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-wps_username" name="wps_username" value="xxx" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-wps_password" name="wps_password" value="xxx" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-hostname_to_stonith" name="hostname_to_stonith" value="vcs1" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="stonith" id="stonithvcsquorum" type="external/webpowerswitch" >
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="stonithvcsquorum-instance_attributes" >
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-wps_ipaddr" name="wps_ipaddr" value="192.168.7.101" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-wps_port" name="wps_port" value="2" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-wps_username" name="wps_username" value="xxx" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-wps_password" name="wps_password" value="xxx" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-hostname_to_stonith" name="hostname_to_stonith" value="vcsquorum" />
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <group id="g_vcs" >
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_fs_vcs" provider="heartbeat" type="Filesystem" >
- Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_fs_vcs-instance_attributes" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_fs_vcs-instance_attributes-device" name="device" value="/dev/drbd0" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_fs_vcs-instance_attributes-directory" name="directory" value="/mnt/storage" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_fs_vcs-instance_attributes-fstype" name="fstype" value="ext4" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_fs_vcs-instance_attributes-options" name="options" value="noatime" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_fs_vcs-start-0" interval="0" name="start" timeout="60" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_fs_vcs-stop-0" interval="0" name="stop" timeout="60" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_fs_vcs-monitor-20" interval="20" name="monitor" timeout="40" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="lsb" id="p_daemon_svn" type="svn" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_daemon_svn-monitor-30s" interval="30s" name="monitor" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="lsb" id="p_daemon_git-daemon" type="git-daemon" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_daemon_git-daemon-monitor-30s" interval="30s" name="monitor" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_ip_vcs" provider="heartbeat" type="IPaddr2" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_ip_vcs-instance_attributes" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ip_vcs-instance_attributes-ip" name="ip" value="192.168.1.22" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ip_vcs-instance_attributes-cidr_netmask" name="cidr_netmask" value="16" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ip_vcs-instance_attributes-nic" name="nic" value="eth1" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_ip_vcs-monitor-30s" interval="30s" name="monitor" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </group>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <master id="ms_drbd_vcs" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <meta_attributes id="ms_drbd_vcs-meta_attributes" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-master-max" name="master-max" value="1" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-master-node-max" name="master-node-max" value="1" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-clone-max" name="clone-max" value="2" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-clone-node-max" name="clone-node-max" value="1" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-notify" name="notify" value="true" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </meta_attributes>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_drbd_vcs" provider="linbit" type="drbd" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_drbd_vcs-instance_attributes" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_drbd_vcs-instance_attributes-drbd_resource" name="drbd_resource" value="vcs" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_drbd_vcs-start-0" interval="0" name="start" timeout="240" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_drbd_vcs-stop-0" interval="0" name="stop" timeout="100" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_drbd_vcs-monitor-10" interval="10" name="monitor" role="Master" timeout="90" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_drbd_vcs-monitor-20" interval="20" name="monitor" role="Slave" timeout="60" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </master>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <clone id="cl_ping" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <meta_attributes id="cl_ping-meta_attributes" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cl_ping-meta_attributes-interleave" name="interleave" value="true" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </meta_attributes>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_ping" provider="pacemaker" type="ping" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_ping-instance_attributes" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ping-instance_attributes-name" name="name" value="p_ping" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ping-instance_attributes-host_list" name="host_list" value="192.168.0.128 192.168.0.129 192.168.0.33 192.168.0.1 192.168.0.127" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ping-instance_attributes-dampen" name="dampen" value="25s" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ping-instance_attributes-multiplier" name="multiplier" value="1000" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_ping-start-0" interval="0" name="start" timeout="60" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_ping-monitor-10s" interval="10s" name="monitor" timeout="60" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </clone>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <clone id="cl_sysadmin_notify" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_sysadmin_notify" provider="heartbeat" type="MailTo" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_sysadmin_notify-instance_attributes" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_sysadmin_notify-instance_attributes-email" name="email" value="sysadmin-alert@xes-inc.com" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_sysadmin_notify-instance_attributes-0" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_sysadmin_notify-instance_attributes-0-subject" name="subject" value="VCS Pacemaker Change" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_sysadmin_notify-start-0" interval="0" name="start" timeout="30" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_sysadmin_notify-stop-0" interval="0" name="stop" timeout="30" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_sysadmin_notify-monitor-10" interval="10" name="monitor" timeout="30" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </clone>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="loc_run_on_most_connected" rsc="g_vcs" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rule boolean-op="or" id="loc_run_on_most_connected-rule" score="-INFINITY" >
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <expression attribute="p_ping" id="loc_run_on_most_connected-expression" operation="not_defined" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <expression attribute="p_ping" id="loc_run_on_most_connected-expression-0" operation="lte" value="0" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="loc_st_vcs0" node="vcs0" rsc="stonithvcs0" score="-INFINITY" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="loc_st_vcs1" node="vcs1" rsc="stonithvcs1" score="-INFINITY" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="loc_st_vcsquorum" node="vcsquorum" rsc="stonithvcsquorum" score="-INFINITY" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_colocation id="c_drbd_fs_services" rsc="g_vcs" score="INFINITY" with-rsc="ms_drbd_vcs" with-rsc-role="Master" />
- Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_order first="ms_drbd_vcs" first-action="promote" id="o_drbd_fs_services" score="INFINITY" then="g_vcs" then-action="start" />
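(For readability: the CIB XML being loaded above corresponds roughly to the following crm shell configuration, reconstructed from the diff. This is a sketch, not the exact commands that were run; the stonithvcs0/stonithvcs1 wps_ipaddr values and the start of the stonith primitives fall outside this excerpt, so those lines are partial.)

```
# Reconstructed from the cib:diff XML above (sketch; stonithvcs0/vcs1 params truncated in this excerpt)
primitive stonithvcsquorum stonith:external/webpowerswitch \
    params wps_ipaddr="192.168.7.101" wps_port="2" \
           wps_username="xxx" wps_password="xxx" hostname_to_stonith="vcsquorum"
primitive p_fs_vcs ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/mnt/storage" fstype="ext4" options="noatime" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60" \
    op monitor interval="20" timeout="40"
primitive p_daemon_svn lsb:svn \
    op monitor interval="30s"
primitive p_daemon_git-daemon lsb:git-daemon \
    op monitor interval="30s"
primitive p_ip_vcs ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.22" cidr_netmask="16" nic="eth1" \
    op monitor interval="30s"
primitive p_drbd_vcs ocf:linbit:drbd \
    params drbd_resource="vcs" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100" \
    op monitor interval="10" role="Master" timeout="90" \
    op monitor interval="20" role="Slave" timeout="60"
primitive p_ping ocf:pacemaker:ping \
    params name="p_ping" \
           host_list="192.168.0.128 192.168.0.129 192.168.0.33 192.168.0.1 192.168.0.127" \
           dampen="25s" multiplier="1000" \
    op start interval="0" timeout="60" \
    op monitor interval="10s" timeout="60"
primitive p_sysadmin_notify ocf:heartbeat:MailTo \
    params email="sysadmin-alert@xes-inc.com" \
    params subject="VCS Pacemaker Change" \
    op start interval="0" timeout="30" \
    op stop interval="0" timeout="30" \
    op monitor interval="10" timeout="30"
group g_vcs p_fs_vcs p_daemon_svn p_daemon_git-daemon p_ip_vcs
ms ms_drbd_vcs p_drbd_vcs \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
clone cl_ping p_ping \
    meta interleave="true"
clone cl_sysadmin_notify p_sysadmin_notify
location loc_run_on_most_connected g_vcs \
    rule -inf: not_defined p_ping or p_ping lte 0
location loc_st_vcs0 stonithvcs0 -inf: vcs0
location loc_st_vcs1 stonithvcs1 -inf: vcs1
location loc_st_vcsquorum stonithvcsquorum -inf: vcsquorum
colocation c_drbd_fs_services inf: g_vcs ms_drbd_vcs:Master
order o_drbd_fs_services inf: ms_drbd_vcs:promote g_vcs:start
```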
- Feb 11 01:00:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/cibadmin/2, version=0.17.1): OK (rc=0)
- Feb 11 01:00:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/144, version=0.17.2): OK (rc=0)
- Feb 11 01:00:46 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 01:00:46 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 01:01:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 2 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:01:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 3 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:01:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 4 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:02:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 5 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:02:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 6 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:02:45 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:02:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:02:45 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:02:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/146, version=0.17.5): OK (rc=0)
- Feb 11 01:02:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/147, version=0.17.6): OK (rc=0)
- Feb 11 01:02:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/149, version=0.17.7): OK (rc=0)
- Feb 11 01:02:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-13: Waiting on 3 outstanding join acks
- Feb 11 01:02:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:02:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 7 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:02:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Feb 11 01:02:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 8 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:02:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:02:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/151, version=0.17.8): OK (rc=0)
- Feb 11 01:02:46 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-13
- Feb 11 01:02:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:03:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 9 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:03:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 10 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:03:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 11 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:04:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 12 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:04:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 13 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:04:46 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:04:46 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:04:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/154, version=0.17.9): OK (rc=0)
- Feb 11 01:04:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/155, version=0.17.10): OK (rc=0)
- Feb 11 01:04:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/157, version=0.17.11): OK (rc=0)
- Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-14: Waiting on 3 outstanding join acks
- Feb 11 01:04:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 14 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 15 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:04:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:04:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/159, version=0.17.12): OK (rc=0)
- Feb 11 01:04:46 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-14
- Feb 11 01:04:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:05:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 16 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:05:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 17 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:05:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 18 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:06:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 19 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:06:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 20 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:06:46 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:06:46 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:06:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/162, version=0.17.13): OK (rc=0)
- Feb 11 01:06:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/163, version=0.17.14): OK (rc=0)
- Feb 11 01:06:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/165, version=0.17.15): OK (rc=0)
- Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-15: Waiting on 3 outstanding join acks
- Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 21 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:06:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 22 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:06:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/167, version=0.17.16): OK (rc=0)
- Feb 11 01:06:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:06:46 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-15
- Feb 11 01:06:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:07:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 23 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:07:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 24 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:07:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 25 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:08:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 26 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:08:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 27 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:08:46 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:08:46 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:08:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/170, version=0.17.17): OK (rc=0)
- Feb 11 01:08:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/171, version=0.17.18): OK (rc=0)
- Feb 11 01:08:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/173, version=0.17.19): OK (rc=0)
- Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-16: Waiting on 3 outstanding join acks
- Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 28 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:08:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 29 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:08:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/175, version=0.17.20): OK (rc=0)
- Feb 11 01:08:46 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-16
- Feb 11 01:08:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:08:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:09:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 30 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:09:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 31 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:09:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 32 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:10:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 33 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:10:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 34 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:10:46 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:10:46 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/178, version=0.17.21): OK (rc=0)
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/179, version=0.17.22): OK (rc=0)
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/181, version=0.17.23): OK (rc=0)
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-17: Waiting on 3 outstanding join acks
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/183, version=0.17.24): OK (rc=0)
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-17: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/186, version=0.17.24): OK (rc=0)
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_ack: join-17: Updating node state to member for vcsquorum
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_ack: join-17: Updating node state to member for vcs0
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_ack: join-17: Updating node state to member for vcs1
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/187, version=0.17.25): OK (rc=0)
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/188, version=0.17.30): OK (rc=0)
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/189, version=0.17.31): OK (rc=0)
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/190, version=0.17.32): OK (rc=0)
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/192, version=0.17.34): OK (rc=0)
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 01:10:46 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/194, version=0.17.36): OK (rc=0)
- Feb 11 01:10:46 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 11 01:10:46 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/196, version=0.17.39): OK (rc=0)
- Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/198, version=0.17.41): OK (rc=0)
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start stonithvcs0 (vcs1)
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start stonithvcs1 (vcs0)
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start stonithvcsquorum (vcs0)
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_drbd_vcs:0 (vcs1)
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_drbd_vcs:1 (vcs0)
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_ping:0 (vcs1)
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_ping:1 (vcs0)
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_sysadmin_notify:0 (vcs1)
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_sysadmin_notify:1 (vcs0)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:10:47 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 10: /var/lib/pacemaker/pengine/pe-input-10.bz2
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 10 (ref=pe_calc-dc-1360566647-146) derived from /var/lib/pacemaker/pengine/pe-input-10.bz2
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 26: monitor stonithvcs0_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'stonithvcs0' not found (0 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'stonithvcs0' to the rsc list (1 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 15: monitor stonithvcs0_monitor_0 on vcs1
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 4: monitor stonithvcs0_monitor_0 on vcs0
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 27: monitor stonithvcs1_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'stonithvcs1' not found (1 active resources)
- Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_device_action: Device stonithvcs0 not found
- Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_command: Processed st_execute from lrmd.2048: No such device (-19)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'stonithvcs1' to the rsc list (2 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 16: monitor stonithvcs1_monitor_0 on vcs1
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 5: monitor stonithvcs1_monitor_0 on vcs0
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 28: monitor stonithvcsquorum_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_device_action: Device stonithvcs1 not found
- Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_command: Processed st_execute from lrmd.2048: No such device (-19)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'stonithvcsquorum' not found (2 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'stonithvcsquorum' to the rsc list (3 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: monitor stonithvcsquorum_monitor_0 on vcs1
- Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_device_action: Device stonithvcsquorum not found
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 6: monitor stonithvcsquorum_monitor_0 on vcs0
- Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_command: Processed st_execute from lrmd.2048: No such device (-19)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 29: monitor p_fs_vcs_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_fs_vcs' not found (3 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_fs_vcs' to the rsc list (4 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: monitor p_fs_vcs_monitor_0 on vcs1
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 7: monitor p_fs_vcs_monitor_0 on vcs0
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 30: monitor p_daemon_svn_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_daemon_svn' not found (4 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_daemon_svn' to the rsc list (5 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_daemon_svn_monitor_0 on vcs1
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 8: monitor p_daemon_svn_monitor_0 on vcs0
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 31: monitor p_daemon_git-daemon_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_daemon_git-daemon' not found (5 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_daemon_git-daemon' to the rsc list (6 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: monitor p_daemon_git-daemon_monitor_0 on vcs1
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 9: monitor p_daemon_git-daemon_monitor_0 on vcs0
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 32: monitor p_ip_vcs_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_ip_vcs' not found (6 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_ip_vcs' to the rsc list (7 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_ip_vcs_monitor_0 on vcs1
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 10: monitor p_ip_vcs_monitor_0 on vcs0
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 33: monitor p_drbd_vcs:0_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_drbd_vcs' not found (7 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_drbd_vcs:0' not found (7 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_drbd_vcs' to the rsc list (8 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: monitor p_drbd_vcs:0_monitor_0 on vcs1
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 11: monitor p_drbd_vcs:1_monitor_0 on vcs0
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 34: monitor p_ping:0_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_ping' not found (8 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_ping:0' not found (8 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_ping' to the rsc list (9 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: monitor p_ping:0_monitor_0 on vcs1
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 12: monitor p_ping:1_monitor_0 on vcs0
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 35: monitor p_sysadmin_notify:0_monitor_0 on vcsquorum (local)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_sysadmin_notify' not found (9 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_sysadmin_notify:0' not found (9 active resources)
- Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_sysadmin_notify' to the rsc list (10 active resources)
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: monitor p_sysadmin_notify:0_monitor_0 on vcs1
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 13: monitor p_sysadmin_notify:1_monitor_0 on vcs0
- Filesystem[2687]: 2013/02/11_01:10:47 WARNING: Couldn't find device [/dev/drbd0]. Expected /dev/??? to exist
- Filesystem[2687]: 2013/02/11_01:10:47 WARNING: Couldn't find device [/dev/drbd0]. Expected /dev/??? to exist
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: process_lrm_event: LRM operation stonithvcs0_monitor_0 (call=5, rc=7, cib-update=201, confirmed=true) not running
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: process_lrm_event: LRM operation stonithvcs1_monitor_0 (call=9, rc=7, cib-update=202, confirmed=true) not running
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: process_lrm_event: LRM operation stonithvcsquorum_monitor_0 (call=13, rc=7, cib-update=203, confirmed=true) not running
- Feb 11 01:10:47 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_daemon_git-daemon_monitor_0 (call=25, rc=7, cib-update=204, confirmed=true) not running
- Feb 11 01:10:47 [2051] vcsquorum crmd: info: process_lrm_event: Result: * git-daemon is not running
- Feb 11 01:10:48 [2051] vcsquorum crmd: info: services_os_action_execute: Managed MailTo_meta-data_0 process 2755 exited with rc=0
- Feb 11 01:10:48 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_sysadmin_notify_monitor_0 (call=44, rc=7, cib-update=205, confirmed=true) not running
- Feb 11 01:10:48 [2051] vcsquorum crmd: info: process_lrm_event: Result: stopped
- Feb 11 01:10:48 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_daemon_svn_monitor_0 (call=21, rc=7, cib-update=206, confirmed=true) not running
- Feb 11 01:10:48 [2051] vcsquorum crmd: info: process_lrm_event: Result: svnserve is not running.
- Feb 11 01:10:49 [2051] vcsquorum crmd: info: services_os_action_execute: Managed ping_meta-data_0 process 2812 exited with rc=0
- Feb 11 01:10:49 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_ping_monitor_0 (call=39, rc=7, cib-update=207, confirmed=true) not running
- Feb 11 01:10:50 [2051] vcsquorum crmd: info: services_os_action_execute: Managed Filesystem_meta-data_0 process 2818 exited with rc=0
- Feb 11 01:10:50 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_fs_vcs_monitor_0 (call=17, rc=7, cib-update=208, confirmed=true) not running
- Feb 11 01:10:51 [2051] vcsquorum crmd: info: services_os_action_execute: Managed IPaddr2_meta-data_0 process 2823 exited with rc=0
- Feb 11 01:10:51 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_ip_vcs_monitor_0 (call=29, rc=7, cib-update=209, confirmed=true) not running
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: services_os_action_execute: Managed drbd_meta-data_0 process 2827 exited with rc=0
- Feb 11 01:10:52 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_drbd_vcs_monitor_0 (call=34, rc=7, cib-update=210, confirmed=true) not running
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 14: probe_complete probe_complete on vcs1 - no waiting
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on vcs0 - no waiting
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 25: probe_complete probe_complete on vcsquorum (local) - no waiting
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 36: start stonithvcs0_start_0 on vcs1
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 37: start stonithvcs1_start_0 on vcs0
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 38: start stonithvcsquorum_start_0 on vcs0
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 43: start p_drbd_vcs:0_start_0 on vcs1
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 45: start p_drbd_vcs:1_start_0 on vcs0
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 71: start p_ping:0_start_0 on vcs1
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 73: start p_ping:1_start_0 on vcs0
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 79: start p_sysadmin_notify:0_start_0 on vcs1
- Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 81: start p_sysadmin_notify:1_start_0 on vcs0
- Feb 11 01:10:53 [2051] vcsquorum crmd: warning: status_from_rc: Action 79 (p_sysadmin_notify:0_start_0) on vcs1 failed (target: 0 vs. rc: 1): Error
- Feb 11 01:10:53 [2051] vcsquorum crmd: warning: update_failcount: Updating failcount for p_sysadmin_notify on vcs1 after failed start: rc=1 (update=INFINITY, time=1360566653)
- Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_sysadmin_notify_last_failure_0, magic=0:1;79:10:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.17.76) : Event failed
- Feb 11 01:10:53 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2868982794-fail-count-p_sysadmin_notify, name=fail-count-p_sysadmin_notify, value=INFINITY, magic=NA, cib=0.17.77) : Transient attribute: update
- Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2868982794-last-failure-p_sysadmin_notify, name=last-failure-p_sysadmin_notify, value=1360566653, magic=NA, cib=0.17.78) : Transient attribute: update
- Feb 11 01:10:53 [2051] vcsquorum crmd: warning: status_from_rc: Action 81 (p_sysadmin_notify:1_start_0) on vcs0 failed (target: 0 vs. rc: 1): Error
- Feb 11 01:10:53 [2051] vcsquorum crmd: warning: update_failcount: Updating failcount for p_sysadmin_notify on vcs0 after failed start: rc=1 (update=INFINITY, time=1360566653)
- Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_sysadmin_notify_last_failure_0, magic=0:1;81:10:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.17.79) : Event failed
- Feb 11 01:10:53 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2852205578-fail-count-p_sysadmin_notify, name=fail-count-p_sysadmin_notify, value=INFINITY, magic=NA, cib=0.17.80) : Transient attribute: update
- Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2852205578-last-failure-p_sysadmin_notify, name=last-failure-p_sysadmin_notify, value=1360566653, magic=NA, cib=0.17.82) : Transient attribute: update
- Feb 11 01:10:53 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:10:53 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:10:54 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2868982794-master-p_drbd_vcs, name=master-p_drbd_vcs, value=5, magic=NA, cib=0.17.84) : Transient attribute: update
- Feb 11 01:10:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:10:54 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2852205578-master-p_drbd_vcs, name=master-p_drbd_vcs, value=5, magic=NA, cib=0.17.86) : Transient attribute: update
- Feb 11 01:10:54 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 99: notify p_drbd_vcs:0_post_notify_start_0 on vcs1
- Feb 11 01:10:54 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 100: notify p_drbd_vcs:1_post_notify_start_0 on vcs0
- Feb 11 01:10:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:11:04 [2051] vcsquorum crmd: notice: run_graph: Transition 10 (Complete=55, Pending=0, Fired=0, Skipped=6, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-10.bz2): Stopped
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:11:04 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:11:04 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:1 on vcs0: unknown error (1)
- Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:04 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:0 (Slave -> Master vcs1)
- Feb 11 01:11:04 [2050] vcsquorum pengine: notice: LogActions: Stop p_sysadmin_notify:0 (vcs1)
- Feb 11 01:11:04 [2050] vcsquorum pengine: notice: LogActions: Stop p_sysadmin_notify:1 (vcs0)
- Feb 11 01:11:04 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 11: /var/lib/pacemaker/pengine/pe-input-11.bz2
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 11 (ref=pe_calc-dc-1360566664-191) derived from /var/lib/pacemaker/pengine/pe-input-11.bz2
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 51: monitor p_ping_monitor_10000 on vcs1
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 54: monitor p_ping_monitor_10000 on vcs0
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 84: notify p_drbd_vcs_pre_notify_promote_0 on vcs1
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 86: notify p_drbd_vcs_pre_notify_promote_0 on vcs0
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 1: stop p_sysadmin_notify_stop_0 on vcs1
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: stop p_sysadmin_notify_stop_0 on vcs0
- Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: promote p_drbd_vcs_promote_0 on vcs1
- Feb 11 01:11:12 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 85: notify p_drbd_vcs_post_notify_promote_0 on vcs1
- Feb 11 01:11:12 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 87: notify p_drbd_vcs_post_notify_promote_0 on vcs0
- Feb 11 01:11:12 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_drbd_vcs_monitor_10000 on vcs1
- Feb 11 01:11:12 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: monitor p_drbd_vcs_monitor_20000 on vcs0
- Feb 11 01:11:12 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2868982794-master-p_drbd_vcs, name=master-p_drbd_vcs, value=10, magic=NA, cib=0.17.97) : Transient attribute: update
- Feb 11 01:11:12 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2852205578-master-p_drbd_vcs, name=master-p_drbd_vcs, value=10000, magic=NA, cib=0.17.98) : Transient attribute: update
- Feb 11 01:11:12 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:11:12 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:11:14 [2051] vcsquorum crmd: notice: run_graph: Transition 11 (Complete=21, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-11.bz2): Complete
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Slave vcs1)
- Feb 11 01:11:14 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:1 (Slave -> Master vcs0)
- Feb 11 01:11:14 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 12: /var/lib/pacemaker/pengine/pe-input-12.bz2
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 12 (ref=pe_calc-dc-1360566674-203) derived from /var/lib/pacemaker/pengine/pe-input-12.bz2
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: cancel p_drbd_vcs_cancel_10000 on vcs1
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 4: cancel p_drbd_vcs_cancel_20000 on vcs0
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 88: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 90: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_10000, magic=0:8;21:11:8:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.17.105) : Resource op removal
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_20000, magic=0:0;24:11:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.17.106) : Resource op removal
- Feb 11 01:11:14 [2051] vcsquorum crmd: notice: run_graph: Transition 12 (Complete=5, Pending=0, Fired=0, Skipped=12, Incomplete=10, Source=/var/lib/pacemaker/pengine/pe-input-12.bz2): Stopped
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:14 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Slave vcs1)
- Feb 11 01:11:14 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:1 (Slave -> Master vcs0)
- Feb 11 01:11:14 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 13: /var/lib/pacemaker/pengine/pe-input-13.bz2
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 13 (ref=pe_calc-dc-1360566674-208) derived from /var/lib/pacemaker/pengine/pe-input-13.bz2
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 86: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 88: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
- Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: demote p_drbd_vcs_demote_0 on vcs1
- Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 87: notify p_drbd_vcs_post_notify_demote_0 on vcs1
- Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 89: notify p_drbd_vcs_post_notify_demote_0 on vcs0
- Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 82: notify p_drbd_vcs_pre_notify_promote_0 on vcs1
- Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 84: notify p_drbd_vcs_pre_notify_promote_0 on vcs0
- Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: promote p_drbd_vcs_promote_0 on vcs0
- Feb 11 01:11:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 83: notify p_drbd_vcs_post_notify_promote_0 on vcs1
- Feb 11 01:11:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 85: notify p_drbd_vcs_post_notify_promote_0 on vcs0
- Feb 11 01:11:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_drbd_vcs_monitor_20000 on vcs1
- Feb 11 01:11:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 25: monitor p_drbd_vcs_monitor_10000 on vcs0
- Feb 11 01:11:24 [2051] vcsquorum crmd: notice: run_graph: Transition 13 (Complete=25, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-13.bz2): Complete
- Feb 11 01:11:24 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:11:28 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
- Feb 11 01:11:28 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-2868982794-p_ping, name=p_ping, value=5000, magic=NA, cib=0.17.111) : Transient attribute: update
- Feb 11 01:11:28 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 01:11:28 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-2852205578-p_ping, name=p_ping, value=5000, magic=NA, cib=0.17.112) : Transient attribute: update
- Feb 11 01:11:28 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:11:28 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:11:28 [2050] vcsquorum pengine: notice: LogActions: Start p_fs_vcs (vcs0)
- Feb 11 01:11:28 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_svn (vcs0)
- Feb 11 01:11:28 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_git-daemon (vcs0)
- Feb 11 01:11:28 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs0)
- Feb 11 01:11:28 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 14: /var/lib/pacemaker/pengine/pe-input-14.bz2
- Feb 11 01:11:28 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:11:28 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 14 (ref=pe_calc-dc-1360566688-221) derived from /var/lib/pacemaker/pengine/pe-input-14.bz2
- Feb 11 01:11:28 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 16: start p_fs_vcs_start_0 on vcs0
- Feb 11 01:11:29 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: monitor p_fs_vcs_monitor_20000 on vcs0
- Feb 11 01:11:29 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_daemon_svn_start_0 on vcs0
- Feb 11 01:11:30 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_daemon_svn_monitor_30000 on vcs0
- Feb 11 01:11:30 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: start p_daemon_git-daemon_start_0 on vcs0
- Feb 11 01:11:30 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_daemon_git-daemon_monitor_30000 on vcs0
- Feb 11 01:11:30 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: start p_ip_vcs_start_0 on vcs0
- Feb 11 01:11:31 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: monitor p_ip_vcs_monitor_30000 on vcs0
- Feb 11 01:11:31 [2051] vcsquorum crmd: notice: run_graph: Transition 14 (Complete=10, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-14.bz2): Complete
- Feb 11 01:11:31 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:12:22 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs1/crm_resource/3, version=0.17.121): OK (rc=0)
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.18.1) : Non-status change
- Feb 11 01:12:22 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 01:12:22 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.17.121
- Feb 11 01:12:22 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.18.1
- Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="17" num_updates="121" />
- Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
- Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
- Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs0" type="string" />
- Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
- Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
- Feb 11 01:12:22 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs1/crm_resource/4, version=0.18.1): OK (rc=0)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_fs_vcs (Started vcs0 -> vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_svn (Started vcs0 -> vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_git-daemon (Started vcs0 -> vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_ip_vcs (Started vcs0 -> vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:0 (Slave -> Master vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:1 (Master -> Slave vcs0)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 15: /var/lib/pacemaker/pengine/pe-input-15.bz2
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 15 (ref=pe_calc-dc-1360566742-230) derived from /var/lib/pacemaker/pengine/pe-input-15.bz2
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 29: stop p_ip_vcs_stop_0 on vcs0
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: cancel p_drbd_vcs_cancel_20000 on vcs1
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 8: cancel p_drbd_vcs_cancel_10000 on vcs0
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 100: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 102: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_20000, magic=0:0;21:13:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.18.2) : Resource op removal
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_10000, magic=0:8;25:13:8:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.18.3) : Resource op removal
- Feb 11 01:12:22 [2051] vcsquorum crmd: notice: run_graph: Transition 15 (Complete=7, Pending=0, Fired=0, Skipped=26, Incomplete=10, Source=/var/lib/pacemaker/pengine/pe-input-15.bz2): Stopped
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_fs_vcs (Started vcs0 -> vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_svn (Started vcs0 -> vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_git-daemon (Started vcs0 -> vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:0 (Slave -> Master vcs1)
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:1 (Master -> Slave vcs0)
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:12:22 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 16: /var/lib/pacemaker/pengine/pe-input-16.bz2
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 16 (ref=pe_calc-dc-1360566742-236) derived from /var/lib/pacemaker/pengine/pe-input-16.bz2
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: stop p_daemon_git-daemon_stop_0 on vcs0
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 96: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 98: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
- Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: stop p_daemon_svn_stop_0 on vcs0
- Feb 11 01:12:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: stop p_fs_vcs_stop_0 on vcs0
- Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 37: demote p_drbd_vcs_demote_0 on vcs0
- Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 97: notify p_drbd_vcs_post_notify_demote_0 on vcs1
- Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 99: notify p_drbd_vcs_post_notify_demote_0 on vcs0
- Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 92: notify p_drbd_vcs_pre_notify_promote_0 on vcs1
- Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 94: notify p_drbd_vcs_pre_notify_promote_0 on vcs0
- Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 34: promote p_drbd_vcs_promote_0 on vcs1
- Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 93: notify p_drbd_vcs_post_notify_promote_0 on vcs1
- Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 95: notify p_drbd_vcs_post_notify_promote_0 on vcs0
- Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_fs_vcs_start_0 on vcs1
- Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 35: monitor p_drbd_vcs_monitor_10000 on vcs1
- Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 39: monitor p_drbd_vcs_monitor_20000 on vcs0
- Feb 11 01:12:43 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_fs_vcs_monitor_20000 on vcs1
- Feb 11 01:12:43 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: start p_daemon_svn_start_0 on vcs1
- Feb 11 01:12:44 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: monitor p_daemon_svn_monitor_30000 on vcs1
- Feb 11 01:12:44 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: start p_daemon_git-daemon_start_0 on vcs1
- Feb 11 01:12:44 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 25: monitor p_daemon_git-daemon_monitor_30000 on vcs1
- Feb 11 01:12:44 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 26: start p_ip_vcs_start_0 on vcs1
- Feb 11 01:12:45 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 27: monitor p_ip_vcs_monitor_30000 on vcs1
- Feb 11 01:12:45 [2051] vcsquorum crmd: notice: run_graph: Transition 16 (Complete=40, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-16.bz2): Complete
- Feb 11 01:12:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:13:23 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-2868982794-master-p_drbd_vcs, name=master-p_drbd_vcs, value=10000, magic=NA, cib=0.18.20) : Transient attribute: update
- Feb 11 01:13:23 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 01:13:23 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:13:23 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:13:23 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:13:23 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 17: /var/lib/pacemaker/pengine/pe-input-17.bz2
- Feb 11 01:13:23 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:13:23 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 17 (ref=pe_calc-dc-1360566803-260) derived from /var/lib/pacemaker/pengine/pe-input-17.bz2
- Feb 11 01:13:23 [2051] vcsquorum crmd: notice: run_graph: Transition 17 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-17.bz2): Complete
- Feb 11 01:13:23 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:13:49 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.19.1) : Non-status change
- Feb 11 01:13:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 01:13:49 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.18.21 -> 0.19.1 from vcs1
- Feb 11 01:13:49 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.18.21
- Feb 11 01:13:49 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.19.1
- Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
- Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
- Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs0" type="string" />
- Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- </rule>
- Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- </rsc_location>
- Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="19" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcs1" update-client="crm_resource" cib-last-written="Mon Feb 11 01:12:22 2013" have-quorum="1" dc-uuid="755053578" />
- Feb 11 01:13:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=vcs1/cibadmin/2, version=0.19.1): OK (rc=0)
- Feb 11 01:13:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
- Feb 11 01:13:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/220, version=0.19.2): OK (rc=0)
- Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
- Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:14:09 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs1/crm_resource/3, version=0.19.5): OK (rc=0)
- Feb 11 01:14:09 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.19.5
- Feb 11 01:14:09 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.20.1
- Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="19" num_updates="5" />
- Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
- Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
- Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
- Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
- Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
- Feb 11 01:14:09 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs1/crm_resource/4, version=0.20.1): OK (rc=0)
- Feb 11 01:14:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 35 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:14:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 36 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:14:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 37 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:15:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 38 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:15:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 39 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:15:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:15:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:15:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/222, version=0.20.2): OK (rc=0)
- Feb 11 01:15:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/223, version=0.20.3): OK (rc=0)
- Feb 11 01:15:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/225, version=0.20.4): OK (rc=0)
- Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-18: Waiting on 3 outstanding join acks
- Feb 11 01:15:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 40 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 41 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:15:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:15:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/227, version=0.20.5): OK (rc=0)
- Feb 11 01:15:49 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-18
- Feb 11 01:15:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:16:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 42 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:16:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 43 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:16:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 44 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:17:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 45 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:17:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 46 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:17:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:17:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:17:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/230, version=0.20.6): OK (rc=0)
- Feb 11 01:17:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/231, version=0.20.7): OK (rc=0)
- Feb 11 01:17:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/233, version=0.20.8): OK (rc=0)
- Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-19: Waiting on 3 outstanding join acks
- Feb 11 01:17:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 47 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 48 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:17:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/235, version=0.20.9): OK (rc=0)
- Feb 11 01:17:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:17:49 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-19
- Feb 11 01:17:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:18:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 49 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:18:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 50 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:18:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 51 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:19:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 52 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:19:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 53 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:19:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:19:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:19:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/238, version=0.20.10): OK (rc=0)
- Feb 11 01:19:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/239, version=0.20.11): OK (rc=0)
- Feb 11 01:19:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/241, version=0.20.12): OK (rc=0)
- Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-20: Waiting on 3 outstanding join acks
- Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 54 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:19:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 55 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:19:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:19:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/243, version=0.20.13): OK (rc=0)
- Feb 11 01:19:49 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-20
- Feb 11 01:19:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:20:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 56 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:20:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 57 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:20:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 58 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:21:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 59 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:21:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 60 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:21:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:21:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:21:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/246, version=0.20.14): OK (rc=0)
- Feb 11 01:21:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/247, version=0.20.15): OK (rc=0)
- Feb 11 01:21:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/249, version=0.20.16): OK (rc=0)
- Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-21: Waiting on 3 outstanding join acks
- Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 61 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Feb 11 01:21:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
- Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 62 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:21:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/251, version=0.20.17): OK (rc=0)
- Feb 11 01:21:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
- Feb 11 01:22:08 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs1/crm_resource/3, version=0.20.18): OK (rc=0)
- Feb 11 01:22:08 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs1/crm_resource/4, version=0.20.19): OK (rc=0)
- Feb 11 01:22:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 63 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:22:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 64 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:22:47 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.20.19 -> 0.21.1 from vcs0
- Feb 11 01:22:47 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.20.19
- Feb 11 01:22:47 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.21.1
- Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
- Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
- Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
- Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- </rule>
- Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- </rsc_location>
- Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="21" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcs1" update-client="crm_resource" cib-last-written="Mon Feb 11 01:14:09 2013" have-quorum="1" dc-uuid="755053578" />
- Feb 11 01:22:47 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=vcs0/cibadmin/2, version=0.21.1): OK (rc=0)
- Feb 11 01:22:47 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/253, version=0.21.2): OK (rc=0)
- Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
- Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:22:55 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.21.4
- Feb 11 01:22:55 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.22.1
- Feb 11 01:22:55 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="21" num_updates="4" />
- Feb 11 01:22:55 [2045] vcsquorum cib: notice: cib:diff: ++ <meta_attributes id="g_vcs-meta_attributes" >
- Feb 11 01:22:55 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Stopped" />
- Feb 11 01:22:55 [2045] vcsquorum cib: notice: cib:diff: ++ </meta_attributes>
- Feb 11 01:22:55 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=vcs0/cibadmin/2, version=0.22.1): OK (rc=0)
- Feb 11 01:23:07 [2051] vcsquorum crmd: info: do_election_count_vote: Election 65 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:23:27 [2051] vcsquorum crmd: info: do_election_count_vote: Election 66 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:23:47 [2051] vcsquorum crmd: info: do_election_count_vote: Election 67 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:23:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:23:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:23:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:23:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/255, version=0.22.2): OK (rc=0)
- Feb 11 01:23:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/256, version=0.22.3): OK (rc=0)
- Feb 11 01:23:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/258, version=0.22.4): OK (rc=0)
- Feb 11 01:23:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-22: Waiting on 3 outstanding join acks
- Feb 11 01:23:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/260, version=0.22.5): OK (rc=0)
- Feb 11 01:23:50 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 01:23:50 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-22: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/263, version=0.22.5): OK (rc=0)
- Feb 11 01:23:50 [2051] vcsquorum crmd: info: do_dc_join_ack: join-22: Updating node state to member for vcsquorum
- Feb 11 01:23:50 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/264, version=0.22.6): OK (rc=0)
- Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/265, version=0.22.7): OK (rc=0)
- Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/266, version=0.22.8): OK (rc=0)
- Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/267, version=0.22.9): OK (rc=0)
- Feb 11 01:23:52 [2051] vcsquorum crmd: info: do_dc_join_ack: join-22: Updating node state to member for vcs0
- Feb 11 01:23:52 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
- Feb 11 01:23:52 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/269, version=0.22.21): OK (rc=0)
- Feb 11 01:23:52 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
- Feb 11 01:23:52 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:23:52 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:23:52 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:23:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
- Feb 11 01:23:54 [2051] vcsquorum crmd: info: do_dc_join_ack: join-22: Updating node state to member for vcs1
- Feb 11 01:23:54 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
- Feb 11 01:23:54 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/271, version=0.22.34): OK (rc=0)
- Feb 11 01:23:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:23:54 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 01:23:54 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 11 01:23:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:23:54 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/273, version=0.22.36): OK (rc=0)
- Feb 11 01:23:54 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/275, version=0.22.38): OK (rc=0)
- Feb 11 01:23:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:23:54 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 11 01:23:54 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 11 01:23:55 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:23:55 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:23:55 [2050] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs1)
- Feb 11 01:23:55 [2050] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs1)
- Feb 11 01:23:55 [2050] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs1)
- Feb 11 01:23:55 [2050] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs1)
- Feb 11 01:23:55 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 18: /var/lib/pacemaker/pengine/pe-input-18.bz2
- Feb 11 01:23:55 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:23:55 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 18 (ref=pe_calc-dc-1360567435-320) derived from /var/lib/pacemaker/pengine/pe-input-18.bz2
- Feb 11 01:23:55 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: stop p_ip_vcs_stop_0 on vcs1
- Feb 11 01:23:55 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: stop p_daemon_git-daemon_stop_0 on vcs1
- Feb 11 01:23:55 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: stop p_daemon_svn_stop_0 on vcs1
- Feb 11 01:23:57 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: stop p_fs_vcs_stop_0 on vcs1
- Feb 11 01:23:57 [2051] vcsquorum crmd: notice: run_graph: Transition 18 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-18.bz2): Complete
- Feb 11 01:23:57 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:24:05 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.23.1) : Non-status change
- Feb 11 01:24:05 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 01:24:05 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.22.46
- Feb 11 01:24:05 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.23.1
- Feb 11 01:24:05 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair value="Stopped" id="g_vcs-meta_attributes-target-role" />
- Feb 11 01:24:05 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Started" />
- Feb 11 01:24:05 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=vcs0/cibadmin/2, version=0.23.1): OK (rc=0)
- Feb 11 01:24:05 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:24:05 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:24:05 [2050] vcsquorum pengine: notice: LogActions: Start p_fs_vcs (vcs1)
- Feb 11 01:24:05 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_svn (vcs1)
- Feb 11 01:24:05 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_git-daemon (vcs1)
- Feb 11 01:24:05 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs1)
- Feb 11 01:24:05 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 19: /var/lib/pacemaker/pengine/pe-input-19.bz2
- Feb 11 01:24:05 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:24:05 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 19 (ref=pe_calc-dc-1360567445-325) derived from /var/lib/pacemaker/pengine/pe-input-19.bz2
- Feb 11 01:24:05 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 16: start p_fs_vcs_start_0 on vcs1
- Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: monitor p_fs_vcs_monitor_20000 on vcs1
- Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_daemon_svn_start_0 on vcs1
- Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_daemon_svn_monitor_30000 on vcs1
- Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: start p_daemon_git-daemon_start_0 on vcs1
- Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_daemon_git-daemon_monitor_30000 on vcs1
- Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: start p_ip_vcs_start_0 on vcs1
- Feb 11 01:24:07 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: monitor p_ip_vcs_monitor_30000 on vcs1
- Feb 11 01:24:07 [2051] vcsquorum crmd: notice: run_graph: Transition 19 (Complete=10, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-19.bz2): Complete
- Feb 11 01:24:07 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:24:34 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.24.1) : Non-status change
- Feb 11 01:24:34 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 01:24:34 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.23.9 -> 0.24.1 from vcs0
- Feb 11 01:24:34 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.23.9
- Feb 11 01:24:34 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.24.1
- Feb 11 01:24:34 [2045] vcsquorum cib: notice: cib:diff: -- <meta_attributes id="g_vcs-meta_attributes" >
- Feb 11 01:24:34 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Started" />
- Feb 11 01:24:34 [2045] vcsquorum cib: notice: cib:diff: -- </meta_attributes>
- Feb 11 01:24:34 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="24" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcs0" update-client="cibadmin" cib-last-written="Mon Feb 11 01:24:05 2013" have-quorum="1" dc-uuid="755053578" />
- Feb 11 01:24:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=vcs0/cibadmin/2, version=0.24.1): OK (rc=0)
- Feb 11 01:24:34 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
- Feb 11 01:24:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/280, version=0.24.2): OK (rc=0)
- Feb 11 01:24:34 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 01:24:34 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
- Feb 11 01:24:35 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 01:24:35 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:24:35 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:24:35 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:24:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs0/crm_resource/3, version=0.24.5): OK (rc=0)
- Feb 11 01:24:46 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.24.5
- Feb 11 01:24:46 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.25.1
- Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="24" num_updates="5" />
- Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
- Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
- Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
- Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
- Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
- Feb 11 01:24:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs0/crm_resource/4, version=0.25.1): OK (rc=0)
- Feb 11 01:24:54 [2051] vcsquorum crmd: info: do_election_count_vote: Election 68 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:25:14 [2051] vcsquorum crmd: info: do_election_count_vote: Election 69 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:25:34 [2051] vcsquorum crmd: info: do_election_count_vote: Election 70 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:25:36 [2045] vcsquorum cib: info: cib_replace_notify: Local-only Replace: 0.26.1 from vcs0
- Feb 11 01:25:36 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.26.1
- Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
- Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
- Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
- Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- </rule>
- Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- </rsc_location>
- Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="26" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcs0" update-client="crm_resource" cib-last-written="Mon Feb 11 01:24:46 2013" have-quorum="1" dc-uuid="755053578" />
- Feb 11 01:25:36 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=vcs0/cibadmin/2, version=0.26.1): OK (rc=0)
- Feb 11 01:25:36 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/282, version=0.26.2): OK (rc=0)
- Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
- Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:25:49 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.26.4
- Feb 11 01:25:49 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.27.1
- Feb 11 01:25:49 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="26" num_updates="4" />
- Feb 11 01:25:49 [2045] vcsquorum cib: notice: cib:diff: ++ <meta_attributes id="g_vcs-meta_attributes" >
- Feb 11 01:25:49 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Stopped" />
- Feb 11 01:25:49 [2045] vcsquorum cib: notice: cib:diff: ++ </meta_attributes>
- Feb 11 01:25:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=vcs0/cibadmin/2, version=0.27.1): OK (rc=0)
- Feb 11 01:25:56 [2051] vcsquorum crmd: info: do_election_count_vote: Election 71 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:26:16 [2051] vcsquorum crmd: info: do_election_count_vote: Election 72 (owner: 2868982794) pass: vote from vcs1 (Uptime)
- Feb 11 01:26:34 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
- Feb 11 01:26:34 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
- Feb 11 01:26:34 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
- Feb 11 01:26:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/284, version=0.27.2): OK (rc=0)
- Feb 11 01:26:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/285, version=0.27.3): OK (rc=0)
- Feb 11 01:26:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/287, version=0.27.4): OK (rc=0)
- Feb 11 01:26:34 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-23: Waiting on 3 outstanding join acks
- Feb 11 01:26:34 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
- Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/289, version=0.27.5): OK (rc=0)
- Feb 11 01:26:35 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 01:26:35 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-23: Syncing the CIB from vcsquorum to the rest of the cluster
- Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/292, version=0.27.5): OK (rc=0)
- Feb 11 01:26:35 [2051] vcsquorum crmd: info: do_dc_join_ack: join-23: Updating node state to member for vcsquorum
- Feb 11 01:26:35 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
- Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/293, version=0.27.6): OK (rc=0)
- Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/294, version=0.27.7): OK (rc=0)
- Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/295, version=0.27.8): OK (rc=0)
- Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/296, version=0.27.9): OK (rc=0)
- Feb 11 01:26:37 [2051] vcsquorum crmd: info: do_dc_join_ack: join-23: Updating node state to member for vcs0
- Feb 11 01:26:37 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
- Feb 11 01:26:37 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
- Feb 11 01:26:37 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/298, version=0.27.21): OK (rc=0)
- Feb 11 01:26:37 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:26:37 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:26:37 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:26:39 [2051] vcsquorum crmd: info: do_dc_join_ack: join-23: Updating node state to member for vcs1
- Feb 11 01:26:39 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
- Feb 11 01:26:39 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
- Feb 11 01:26:39 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/300, version=0.27.34): OK (rc=0)
- Feb 11 01:26:39 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:26:39 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Feb 11 01:26:39 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
- Feb 11 01:26:39 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
- Feb 11 01:26:39 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/302, version=0.27.36): OK (rc=0)
- Feb 11 01:26:39 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/304, version=0.27.38): OK (rc=0)
- Feb 11 01:26:39 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
- Feb 11 01:26:39 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Feb 11 01:26:39 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Feb 11 01:26:40 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:26:40 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:26:40 [2050] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs1)
- Feb 11 01:26:40 [2050] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs1)
- Feb 11 01:26:40 [2050] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs1)
- Feb 11 01:26:40 [2050] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs1)
- Feb 11 01:26:40 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 20: /var/lib/pacemaker/pengine/pe-input-20.bz2
- Feb 11 01:26:40 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:26:40 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 20 (ref=pe_calc-dc-1360567600-349) derived from /var/lib/pacemaker/pengine/pe-input-20.bz2
- Feb 11 01:26:40 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: stop p_ip_vcs_stop_0 on vcs1
- Feb 11 01:26:40 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: stop p_daemon_git-daemon_stop_0 on vcs1
- Feb 11 01:26:40 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: stop p_daemon_svn_stop_0 on vcs1
- Feb 11 01:26:42 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: stop p_fs_vcs_stop_0 on vcs1
- Feb 11 01:26:42 [2051] vcsquorum crmd: notice: run_graph: Transition 20 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-20.bz2): Complete
- Feb 11 01:26:42 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:27:00 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.28.1) : Non-status change
- Feb 11 01:27:00 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 01:27:00 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.27.46
- Feb 11 01:27:00 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.28.1
- Feb 11 01:27:00 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair value="Stopped" id="g_vcs-meta_attributes-target-role" />
- Feb 11 01:27:00 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Started" />
- Feb 11 01:27:00 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=vcs0/cibadmin/2, version=0.28.1): OK (rc=0)
- Feb 11 01:27:00 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:27:00 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:27:00 [2050] vcsquorum pengine: notice: LogActions: Start p_fs_vcs (vcs1)
- Feb 11 01:27:00 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_svn (vcs1)
- Feb 11 01:27:00 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_git-daemon (vcs1)
- Feb 11 01:27:00 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs1)
- Feb 11 01:27:00 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 21: /var/lib/pacemaker/pengine/pe-input-21.bz2
- Feb 11 01:27:00 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:27:00 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 21 (ref=pe_calc-dc-1360567620-354) derived from /var/lib/pacemaker/pengine/pe-input-21.bz2
- Feb 11 01:27:00 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 16: start p_fs_vcs_start_0 on vcs1
- Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: monitor p_fs_vcs_monitor_20000 on vcs1
- Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_daemon_svn_start_0 on vcs1
- Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_daemon_svn_monitor_30000 on vcs1
- Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: start p_daemon_git-daemon_start_0 on vcs1
- Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_daemon_git-daemon_monitor_30000 on vcs1
- Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: start p_ip_vcs_start_0 on vcs1
- Feb 11 01:27:03 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: monitor p_ip_vcs_monitor_30000 on vcs1
- Feb 11 01:27:03 [2051] vcsquorum crmd: notice: run_graph: Transition 21 (Complete=10, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-21.bz2): Complete
- Feb 11 01:27:03 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:27:10 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs0/crm_resource/3, version=0.28.10): OK (rc=0)
- Feb 11 01:27:10 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.29.1) : Non-status change
- Feb 11 01:27:10 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Feb 11 01:27:10 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.28.10
- Feb 11 01:27:10 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.29.1
- Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="28" num_updates="10" />
- Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
- Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
- Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
- Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
- Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
- Feb 11 01:27:10 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs0/crm_resource/4, version=0.29.1): OK (rc=0)
- Feb 11 01:27:10 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_fs_vcs (Started vcs1 -> vcs0)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_svn (Started vcs1 -> vcs0)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_git-daemon (Started vcs1 -> vcs0)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_ip_vcs (Started vcs1 -> vcs0)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Slave vcs1)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:1 (Slave -> Master vcs0)
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 22: /var/lib/pacemaker/pengine/pe-input-22.bz2
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 22 (ref=pe_calc-dc-1360567630-363) derived from /var/lib/pacemaker/pengine/pe-input-22.bz2
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 29: stop p_ip_vcs_stop_0 on vcs1
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 1: cancel p_drbd_vcs_cancel_10000 on vcs1
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 7: cancel p_drbd_vcs_cancel_20000 on vcs0
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 100: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 102: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_20000, magic=0:0;39:16:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.29.2) : Resource op removal
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_10000, magic=0:8;35:16:8:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.29.3) : Resource op removal
- Feb 11 01:27:11 [2051] vcsquorum crmd: notice: run_graph: Transition 22 (Complete=7, Pending=0, Fired=0, Skipped=26, Incomplete=10, Source=/var/lib/pacemaker/pengine/pe-input-22.bz2): Stopped
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_fs_vcs (Started vcs1 -> vcs0)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_svn (Started vcs1 -> vcs0)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_git-daemon (Started vcs1 -> vcs0)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs0)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Slave vcs1)
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:1 (Slave -> Master vcs0)
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Feb 11 01:27:11 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 23: /var/lib/pacemaker/pengine/pe-input-23.bz2
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 23 (ref=pe_calc-dc-1360567631-369) derived from /var/lib/pacemaker/pengine/pe-input-23.bz2
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: stop p_daemon_git-daemon_stop_0 on vcs1
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 96: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 98: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
- Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: stop p_daemon_svn_stop_0 on vcs1
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: stop p_fs_vcs_stop_0 on vcs1
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 33: demote p_drbd_vcs_demote_0 on vcs1
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 97: notify p_drbd_vcs_post_notify_demote_0 on vcs1
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 99: notify p_drbd_vcs_post_notify_demote_0 on vcs0
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 92: notify p_drbd_vcs_pre_notify_promote_0 on vcs1
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 94: notify p_drbd_vcs_pre_notify_promote_0 on vcs0
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 38: promote p_drbd_vcs_promote_0 on vcs0
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 93: notify p_drbd_vcs_post_notify_promote_0 on vcs1
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 95: notify p_drbd_vcs_post_notify_promote_0 on vcs0
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_fs_vcs_start_0 on vcs0
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 35: monitor p_drbd_vcs_monitor_20000 on vcs1
- Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 39: monitor p_drbd_vcs_monitor_10000 on vcs0
- Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_fs_vcs_monitor_20000 on vcs0
- Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: start p_daemon_svn_start_0 on vcs0
- Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: monitor p_daemon_svn_monitor_30000 on vcs0
- Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: start p_daemon_git-daemon_start_0 on vcs0
- Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 25: monitor p_daemon_git-daemon_monitor_30000 on vcs0
- Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 26: start p_ip_vcs_start_0 on vcs0
- Feb 11 01:27:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 27: monitor p_ip_vcs_monitor_30000 on vcs0
- Feb 11 01:27:16 [2051] vcsquorum crmd: notice: run_graph: Transition 23 (Complete=40, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-23.bz2): Complete
- Feb 11 01:27:16 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]