- This is what a shutdown initiated by upstart looks like — note that every Pacemaker daemon (crmd, pengine, attrd, lrmd, cib, stonith-ng) receives SIGTERM at the same instant, and crmd ends up escalating the shutdown with errors:
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: crm_shutdown: Requesting shutdown
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_shutdown_req: Sending shutdown request to DC: lucidcluster1
- Nov 23 17:52:47 lucidcluster1 pengine: [17177]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 17:52:47 lucidcluster1 attrd: [17176]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 17:52:47 lucidcluster1 attrd: [17176]: info: attrd_shutdown: Exiting
- Nov 23 17:52:47 lucidcluster1 attrd: [17176]: info: main: Exiting...
- Nov 23 17:52:47 lucidcluster1 attrd: [17176]: info: attrd_cib_connection_destroy: Connection to the CIB terminated...
- Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: info: lrmd is shutting down
- Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource clvm:0 is left in RUNNING status.(last op monitor finished with rc 0.)
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 17:52:47 lucidcluster1 stonith-ng: [17173]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] info: pcmk_ipc_exit: Client attrd (conn=0x97dca0, async-conn=0x97dca0) left
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: pe_msg_dispatch: Received HUP from pengine:[17177]
- Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource unmbd is left in RUNNING status.(last op monitor finished with rc 0.)
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: WARN: disconnect_cib_client: Disconnecting 17178/cib_callback...
- Nov 23 17:52:47 lucidcluster1 stonith-ng: [17173]: info: stonith_shutdown: Terminating with 2 clients
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [TOTEM ] mcasted message added to pending queue
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: CRIT: pe_connection_destroy: Connection to the Policy Engine failed (pid=17177, uuid=f19e5a04-130d-4a93-9494-38251a1cbdf3)
- Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource dlm:0 is left in RUNNING status.(last op monitor finished with rc 0.)
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: WARN: disconnect_cib_client: Disconnecting crmd/cib_rw...
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [TOTEM ] Delivering 46 to 47
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource uvsftpd is left in RUNNING status.(last op monitor finished with rc 0.)
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: WARN: disconnect_cib_client: Disconnecting cluster-dlm/cib_rw...
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [TOTEM ] Delivering MCAST message with seq 47 to pending delivery queue
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: crm_shutdown: Escalating the shutdown
- Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource usmbd is left in RUNNING status.(last op monitor finished with rc 0.)
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: WARN: disconnect_cib_client: Disconnecting <null>/cib_callback...
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [SERV ] Unloading all Corosync service engines.
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: do_log: FSA: Input I_ERROR from crm_shutdown() received in state S_POLICY_ENGINE
- Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: debug: [lrmd] stopped
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: cib_shutdown: Disconnected 6 clients
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: Shuting down Pacemaker
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_RECOVERY [ input=I_ERROR cause=C_SHUTDOWN origin=crm_shutdown ]
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: cib_process_disconnect: All clients disconnected...
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] debug: stop_child: Stopping CRM child "crmd"
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: do_recover: Action A_RECOVER (0000000001000000) not supported
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: cib_ha_connection_destroy: Heartbeat disconnection complete... exiting
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] notice: stop_child: Sent -15 to crmd: [17178]
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: WARN: do_election_vote: Not voting in election, we're in state S_RECOVERY
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: cib_ha_connection_destroy: Exiting...
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [TOTEM ] releasing messages up to and including 47
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_dc_release: DC role released
- Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: main: Done
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] info: pcmk_ipc_exit: Client stonith-ng (conn=0x979940, async-conn=0x979940) left
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: send_ipc_message: IPC Channel to 17174 is not connected
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] info: pcmk_ipc_exit: Client cib (conn=0x982000, async-conn=0x982000) left
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_te_control: Transitioner is now inactive
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: do_log: FSA: Input I_TERMINATE from do_recover() received in state S_RECOVERY
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_state_transition: State transition S_RECOVERY -> S_TERMINATE [ input=I_TERMINATE cause=C_FSA_INTERNAL origin=do_recover ]
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_shutdown: Disconnecting STONITH...
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: lrm_get_all_rscs(621): failed to receive a reply message of getall.
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_lrm_control: Disconnected from the LRM
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_ha_control: Disconnected from OpenAIS
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_cib_control: Disconnecting CIB
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: send_ipc_message: IPC Channel to 17174 is not connected
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: cib_native_perform_op: Sending message to CIB service FAILED
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: do_exit: Could not recover from internal error
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: free_mem: Dropping I_PENDING: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_election_vote ]
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_dc_release ]
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: free_mem: Dropping I_TERMINATE: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_stop ]
- Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_exit: [crmd] stopped (2)
- Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] info: pcmk_ipc_exit: Client crmd (conn=0x986360, async-conn=0x986360) left
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CKPT ] checkpoint exit conn 0x98ea20
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] exit_fn for conn=0x992d80
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] exit_fn for conn=0x99f7a0
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] mcasted message added to pending queue
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering 47 to 48
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering MCAST message with seq 48 to pending delivery queue
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] got procleave message from cluster node 1115334848
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] got procleave message from cluster node 1115334848
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: crmd confirmed stopped
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: stop_child: Stopping CRM child "pengine"
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: stop_child: Sent -15 to pengine: [17177]
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: pengine confirmed stopped
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: stop_child: Stopping CRM child "attrd"
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: stop_child: Sent -15 to attrd: [17176]
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: attrd confirmed stopped
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: stop_child: Stopping CRM child "lrmd"
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: stop_child: Sent -15 to lrmd: [17175]
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: lrmd confirmed stopped
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: cib confirmed stopped
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: stonith-ng confirmed stopped
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: send_cluster_id: Born-on set to: 532 (age)
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: send_cluster_id: Local update: id=1115334848, born=532, seq=532
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] info: update_member: 0x965ac0 Node 1115334848 ((null)) born on: 532
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] info: update_member: Node lucidcluster1 now has process list: 00000000000000000000000000000002 (2)
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: Shutdown complete
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] mcasted message added to pending queue
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering 48 to 49
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering MCAST message with seq 49 to pending delivery queue
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: pcmk_cluster_id_callback: Node update: lucidcluster1 (1.1.2)
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: Pacemaker Cluster Manager 1.1.2
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] releasing messages up to and including 48
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] releasing messages up to and including 49
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync extended virtual synchrony service
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync configuration service
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] exit_fn for conn=0x9970e0
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] mcasted message added to pending queue
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering 49 to 4a
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering MCAST message with seq 4a to pending delivery queue
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] got procleave message from cluster node 1115334848
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] releasing messages up to and including 4a
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync cluster config database access v1.01
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync profile loading service
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: openais checkpoint service B.01.01
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [QUORUM] lib_exit_fn: conn=0x99b440
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] sending join/leave message
- Nov 23 17:52:48 lucidcluster1 corosync[17168]: [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:170.
- And this is a shutdown via 'pkill -TERM corosync' — only the corosync parent is signalled, and it then stops each Pacemaker child in order, cleanly:
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [SERV ] Unloading all Corosync service engines.
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: Shuting down Pacemaker
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "crmd"
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to crmd: [802]
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: crm_shutdown: Requesting shutdown
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_shutdown_req: Sending shutdown request to DC: lucidcluster1
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: handle_shutdown_request: Creating shutdown request for lucidcluster1 (state=S_POLICY_ENGINE)
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 42 to 43
- Nov 23 18:03:32 lucidcluster1 attrd: [800]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (1290535412)
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 43 to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 attrd: [800]: info: attrd_perform_update: Sent update 11: shutdown=1290535412
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=lucidcluster1, magic=NA, cib=0.244.17) : Transient attribute: update
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 43
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_pe_invoke: Query 44: Requesting the current CIB: S_POLICY_ENGINE
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_pe_invoke_callback: Invoking the PE: query=44, ref=pe_calc-dc-1290535412-26, seq=540, quorate=0
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: unpack_config: Startup probes: enabled
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 43 to 45
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 44 to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: unpack_domains: Unpacking domains
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 45 to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: determine_online_status: Node lucidcluster1 is shutting down
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 45
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: unpack_rsc_op: Operation usmbd_monitor_0 found resource usmbd active on lucidcluster1
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: unpack_rsc_op: Operation unmbd_monitor_0 found resource unmbd active on lucidcluster1
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: ERROR: unpack_operation: Specifying on_fail=fence and stonith-enabled=false makes no sense
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 45 to 47
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: ERROR: unpack_operation: Specifying on_fail=fence and stonith-enabled=false makes no sense
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 46 to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: native_print: uvsftpd#011(upstart:vsftpd):#011Started lucidcluster1
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 47 to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: group_print: Resource Group: samba
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 47
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: native_print: usmbd#011(upstart:smbd):#011Started lucidcluster1
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: native_print: unmbd#011(upstart:nmbd):#011Started lucidcluster1
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 47 to 48
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: clone_print: Clone Set: dlm-clone [dlm]
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 48 to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: short_print: Started: [ lucidcluster1 ]
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 48
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: short_print: Stopped: [ dlm:1 dlm:2 ]
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: clone_print: Clone Set: clvm-clone [clvm]
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: short_print: Started: [ lucidcluster1 ]
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: short_print: Stopped: [ clvm:1 clvm:2 ]
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource uvsftpd cannot run anywhere
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: rsc_merge_weights: usmbd: Rolling back scores from unmbd
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource usmbd cannot run anywhere
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource unmbd cannot run anywhere
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: rsc_merge_weights: dlm-clone: Rolling back scores from clvm-clone
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource dlm:0 cannot run anywhere
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource dlm:1 cannot run anywhere
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource dlm:2 cannot run anywhere
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource clvm:0 cannot run anywhere
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource clvm:1 cannot run anywhere
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource clvm:2 cannot run anywhere
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: stage6: Scheduling Node lucidcluster1 for shutdown
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource uvsftpd#011(lucidcluster1)
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource usmbd#011(lucidcluster1)
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource unmbd#011(lucidcluster1)
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource dlm:0#011(lucidcluster1)
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Leave resource dlm:1#011(Stopped)
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Leave resource dlm:2#011(Stopped)
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource clvm:0#011(lucidcluster1)
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Leave resource clvm:1#011(Stopped)
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Leave resource clvm:2#011(Stopped)
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: unpack_graph: Unpacked transition 4: 13 actions in 13 synapses
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1290535412-26) derived from /var/lib/pengine/pe-input-25734.bz2
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 9: stop uvsftpd_stop_0 on lucidcluster1 (local)
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[11] on upstart::vsftpd::uvsftpd for client 802, its parameters: CRM_meta_name=[monitor] crm_feature_set=[3.0.2] CRM_meta_timeout=[20000] CRM_meta_interval=[20000] cancelled
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 11 cancelled
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=9:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=uvsftpd_stop_0 )
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[15] on upstart::vsftpd::uvsftpd for client 802, its parameters: to the operation list.
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: rsc:uvsftpd:15: stop
- Nov 23 18:03:32 lucidcluster1 lrmd: [1517]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 11: stop unmbd_stop_0 on lucidcluster1 (local)
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[9] on upstart::nmbd::unmbd for client 802, its parameters: CRM_meta_name=[monitor] crm_feature_set=[3.0.2] CRM_meta_timeout=[20000] CRM_meta_interval=[20000] cancelled
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 9 cancelled
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=11:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=unmbd_stop_0 )
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[16] on upstart::nmbd::unmbd for client 802, its parameters: to the operation list.
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: rsc:unmbd:16: stop
- Nov 23 18:03:32 lucidcluster1 lrmd: [1518]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 24 fired and confirmed
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation uvsftpd_monitor_20000 (call=11, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation unmbd_monitor_20000 (call=9, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 21: stop clvm:0_stop_0 on lucidcluster1 (local)
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[14] on ocf::clvmd::clvm:0 for client 802, its parameters: CRM_meta_clone=[0] daemon_timeout=[20] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[3] CRM_meta_notify=[false] crm_feature_set=[3.0.2] CRM_meta_globally_unique=[false] CRM_meta_on_fail=[fence] CRM_meta_name=[monitor] CRM_meta_interval=[10000] CRM_meta_timeout=[20000] cancelled
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 14 cancelled
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=21:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=clvm:0_stop_0 )
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[17] on ocf::clvmd::clvm:0 for client 802, its parameters: to the operation list.
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: rsc:clvm:0:17: stop
- Nov 23 18:03:32 lucidcluster1 lrmd: [1519]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation clvm:0_monitor_10000 (call=14, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 23 18:03:32 lucidcluster1 init: vsftpd main process (1184) killed by TERM signal
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: Managed unmbd:stop process 1518 exited with return code 0.
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation unmbd_stop_0 (call=16, rc=0, cib-update=45, confirmed=true) ok
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: match_graph_event: Action unmbd_stop_0 (11) confirmed on lucidcluster1 (rc=0)
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 10: stop usmbd_stop_0 on lucidcluster1 (local)
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 48 to 4a
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 49 to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4a to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 4a
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[8] on upstart::smbd::usmbd for client 802, its parameters: CRM_meta_name=[monitor] crm_feature_set=[3.0.2] CRM_meta_timeout=[20000] CRM_meta_interval=[20000] cancelled
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 8 cancelled
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=10:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=usmbd_stop_0 )
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[18] on upstart::smbd::usmbd for client 802, its parameters: to the operation list.
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: rsc:usmbd:18: stop
- Nov 23 18:03:32 lucidcluster1 lrmd: [1521]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation usmbd_monitor_20000 (call=8, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: Managed uvsftpd:stop process 1517 exited with return code 0.
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation uvsftpd_stop_0 (call=15, rc=0, cib-update=46, confirmed=true) ok
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: match_graph_event: Action uvsftpd_stop_0 (9) confirmed on lucidcluster1 (rc=0)
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 4a to 4c
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4b to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4c to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 4c
- Nov 23 18:03:32 lucidcluster1 clvmd[1519]: INFO: Stopping clvm:0
- Nov 23 18:03:32 lucidcluster1 clvmd[1519]: INFO: Stopping clvmd
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] got leave request on 0x1f96e20
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 4c to 4d
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4d to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] got procleave message from cluster node 1115334848
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 4d
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] cpg finalize for conn=0x1f96e20
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] exit_fn for conn=0x1f96e20
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] cpg finalize for conn=0x1f8e760
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] exit_fn for conn=0x1f8e760
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 4d to 4e
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4e to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] got procleave message from cluster node 1115334848
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 4e
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [QUORUM] lib_exit_fn: conn=0x1f92ac0
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: RA output: (usmbd:stop:stderr) process 1521:
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: RA output: (usmbd:stop:stderr) The last reference on a connection was dropped without closing the connection. This is a bug in an application. See dbus_connection_unref() documentation for details.#012Most likely, the application was supposed to call dbus_connection_close(), since this is a private connection.
- Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: Managed usmbd:stop process 1521 exited with return code 0.
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation usmbd_stop_0 (call=18, rc=0, cib-update=47, confirmed=true) ok
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: match_graph_event: Action usmbd_stop_0 (10) confirmed on lucidcluster1 (rc=0)
- Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 15 fired and confirmed
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 4e to 50
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4f to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 50 to pending delivery queue
- Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 50
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: process_pe_message: Transition 4: PEngine Input stored in: /var/lib/pengine/pe-input-25734.bz2
- Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
- Nov 23 18:03:33 lucidcluster1 lrmd: [799]: info: Managed clvm:0:stop process 1519 exited with return code 0.
- Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation clvm:0_stop_0 (call=17, rc=0, cib-update=48, confirmed=true) ok
- Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: match_graph_event: Action clvm:0_stop_0 (21) confirmed on lucidcluster1 (rc=0)
- Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
- Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 19 fired and confirmed
- Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 16: stop dlm:0_stop_0 on lucidcluster1 (local)
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering 50 to 52
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 51 to pending delivery queue
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 52 to pending delivery queue
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 52
- Nov 23 18:03:33 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[12] on ocf::controld::dlm:0 for client 802, its parameters: CRM_meta_clone=[0] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[3] CRM_meta_notify=[false] crm_feature_set=[3.0.2] CRM_meta_globally_unique=[false] args=[-q 0] CRM_meta_on_fail=[fence] CRM_meta_name=[monitor] CRM_meta_interval=[20000] CRM_meta_timeout=[20000] cancelled
- Nov 23 18:03:33 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 12 cancelled
- Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=16:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=dlm:0_stop_0 )
- Nov 23 18:03:33 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[19] on ocf::controld::dlm:0 for client 802, its parameters: to the operation list.
- Nov 23 18:03:33 lucidcluster1 lrmd: [799]: info: rsc:dlm:0:19: stop
- Nov 23 18:03:33 lucidcluster1 lrmd: [1537]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
- Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation dlm:0_monitor_20000 (call=12, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [CPG ] got leave request on 0x1f8a400
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering 52 to 53
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 53 to pending delivery queue
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [CPG ] got procleave message from cluster node 1115334848
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 53
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [CPG ] cpg finalize for conn=0x1f8a400
- Nov 23 18:03:33 lucidcluster1 dlm_controld.pcmk: [1198]: notice: terminate_ais_connection: Disconnecting from AIS
- Nov 23 18:03:33 lucidcluster1 lrmd: [799]: info: RA output: (dlm:0:stop:stderr) dlm_controld.pcmk: no process found
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [CKPT ] checkpoint exit conn 0x1f860a0
- Nov 23 18:03:33 lucidcluster1 corosync[788]: [CPG ] exit_fn for conn=0x1f8a400
- Nov 23 18:03:34 lucidcluster1 lrmd: [799]: info: Managed dlm:0:stop process 1537 exited with return code 0.
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation dlm:0_stop_0 (call=19, rc=0, cib-update=49, confirmed=true) ok
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: match_graph_event: Action dlm:0_stop_0 (16) confirmed on lucidcluster1 (rc=0)
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 20 fired and confirmed
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 6 fired and confirmed
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_crm_command: Executing crm-event (28): do_shutdown on lucidcluster1
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_crm_command: crm-event (28) is a local shutdown
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: run_graph: ====================================================
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: notice: run_graph: Transition 4 (Complete=13, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-25734.bz2): Complete
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_graph_trigger: Transition 4 is now complete
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_STOPPING [ input=I_STOP cause=C_FSA_INTERNAL origin=notify_crmd ]
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_dc_release: DC role released
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: pe_connection_destroy: Connection to the Policy Engine released
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_te_control: Transitioner is now inactive
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] Delivering 53 to 55
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_shutdown: Disconnecting STONITH...
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 54 to pending delivery queue
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 55 to pending delivery queue
- Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc clvm:0 is LRM_RSC_IDLE
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 55
- Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc unmbd is LRM_RSC_IDLE
- Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc dlm:0 is LRM_RSC_IDLE
- Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc uvsftpd is LRM_RSC_IDLE
- Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc usmbd is LRM_RSC_IDLE
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_lrm_control: Disconnected from the LRM
- Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_receive_cmd: the IPC to client [pid:802] disconnected.
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_ha_control: Disconnected from OpenAIS
- Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: unregister_client: client crmd [pid:802] is unregistered
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_cib_control: Disconnecting CIB
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: crmd_cib_connection_destroy: Connection to the CIB terminated...
- Nov 23 18:03:34 lucidcluster1 cib: [798]: info: cib_process_readwrite: We are now in R/O mode
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
- Nov 23 18:03:34 lucidcluster1 cib: [798]: WARN: send_ipc_message: IPC Channel to 802 is not connected
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_dc_release ]
- Nov 23 18:03:34 lucidcluster1 cib: [798]: WARN: send_via_callback_channel: Delivery of reply to client 802/3b536b63-580c-4934-a18b-66c21c31d557 failed
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: free_mem: Dropping I_TERMINATE: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_stop ]
- Nov 23 18:03:34 lucidcluster1 cib: [798]: WARN: do_local_notify: A-Sync reply to crmd failed: reply failed
- Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_exit: [crmd] stopped (0)
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] info: pcmk_ipc_exit: Client crmd (conn=0x1f7d9e0, async-conn=0x1f7d9e0) left
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: crmd confirmed stopped
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "pengine"
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to pengine: [801]
- Nov 23 18:03:34 lucidcluster1 pengine: [801]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: pengine confirmed stopped
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "attrd"
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to attrd: [800]
- Nov 23 18:03:34 lucidcluster1 attrd: [800]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 18:03:34 lucidcluster1 attrd: [800]: info: attrd_shutdown: Exiting
- Nov 23 18:03:34 lucidcluster1 attrd: [800]: info: main: Exiting...
- Nov 23 18:03:34 lucidcluster1 attrd: [800]: info: attrd_cib_connection_destroy: Connection to the CIB terminated...
- Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] info: pcmk_ipc_exit: Client attrd (conn=0x1f6b860, async-conn=0x1f6b860) left
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: attrd confirmed stopped
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "lrmd"
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to lrmd: [799]
- Nov 23 18:03:35 lucidcluster1 lrmd: [799]: info: lrmd is shutting down
- Nov 23 18:03:35 lucidcluster1 lrmd: [799]: debug: [lrmd] stopped
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: lrmd confirmed stopped
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "cib"
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to cib: [798]
- Nov 23 18:03:35 lucidcluster1 cib: [798]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 18:03:35 lucidcluster1 cib: [798]: info: cib_shutdown: Disconnected 0 clients
- Nov 23 18:03:35 lucidcluster1 cib: [798]: info: cib_process_disconnect: All clients disconnected...
- Nov 23 18:03:35 lucidcluster1 cib: [798]: info: cib_ha_connection_destroy: Heartbeat disconnection complete... exiting
- Nov 23 18:03:35 lucidcluster1 cib: [798]: info: cib_ha_connection_destroy: Exiting...
- Nov 23 18:03:35 lucidcluster1 cib: [798]: info: main: Done
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] info: pcmk_ipc_exit: Client cib (conn=0x1f79680, async-conn=0x1f79680) left
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: cib confirmed stopped
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "stonith-ng"
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to stonith-ng: [797]
- Nov 23 18:03:35 lucidcluster1 stonith-ng: [797]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 23 18:03:35 lucidcluster1 stonith-ng: [797]: info: stonith_shutdown: Terminating with 0 clients
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] info: pcmk_ipc_exit: Client stonith-ng (conn=0x1f67340, async-conn=0x1f67340) left
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: stonith-ng confirmed stopped
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: send_cluster_id: Born-on set to: 540 (age)
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: send_cluster_id: Local update: id=1115334848, born=540, seq=540
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] info: update_member: 0x1f5d8c0 Node 1115334848 ((null)) born on: 540
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] info: update_member: Node lucidcluster1 now has process list: 00000000000000000000000000000002 (2)
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: Shutdown complete
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] Delivering 55 to 56
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 56 to pending delivery queue
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: pcmk_cluster_id_callback: Node update: lucidcluster1 (1.1.2)
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: Pacemaker Cluster Manager 1.1.2
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 56
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync extended virtual synchrony service
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync configuration service
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync cluster config database access v1.01
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync profile loading service
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: openais checkpoint service B.01.01
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] sending join/leave message
- Nov 23 18:03:35 lucidcluster1 corosync[788]: [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:170.