  1. This is how upstart does it:
  2.  
  3. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  4. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: crm_shutdown: Requesting shutdown
  5. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
  6. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
  7. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_shutdown_req: Sending shutdown request to DC: lucidcluster1
  8. Nov 23 17:52:47 lucidcluster1 pengine: [17177]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  9. Nov 23 17:52:47 lucidcluster1 attrd: [17176]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  10. Nov 23 17:52:47 lucidcluster1 attrd: [17176]: info: attrd_shutdown: Exiting
  11. Nov 23 17:52:47 lucidcluster1 attrd: [17176]: info: main: Exiting...
  12. Nov 23 17:52:47 lucidcluster1 attrd: [17176]: info: attrd_cib_connection_destroy: Connection to the CIB terminated...
  13. Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: info: lrmd is shutting down
  14. Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource clvm:0 is left in RUNNING status.(last op monitor finished with rc 0.)
  15. Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  16. Nov 23 17:52:47 lucidcluster1 stonith-ng: [17173]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  17. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] info: pcmk_ipc_exit: Client attrd (conn=0x97dca0, async-conn=0x97dca0) left
  18. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: pe_msg_dispatch: Received HUP from pengine:[17177]
  19. Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource unmbd is left in RUNNING status.(last op monitor finished with rc 0.)
  20. Nov 23 17:52:47 lucidcluster1 cib: [17174]: WARN: disconnect_cib_client: Disconnecting 17178/cib_callback...
  21. Nov 23 17:52:47 lucidcluster1 stonith-ng: [17173]: info: stonith_shutdown: Terminating with 2 clients
  22. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [TOTEM ] mcasted message added to pending queue
  23. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: CRIT: pe_connection_destroy: Connection to the Policy Engine failed (pid=17177, uuid=f19e5a04-130d-4a93-9494-38251a1cbdf3)
  24. Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource dlm:0 is left in RUNNING status.(last op monitor finished with rc 0.)
  25. Nov 23 17:52:47 lucidcluster1 cib: [17174]: WARN: disconnect_cib_client: Disconnecting crmd/cib_rw...
  26. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [TOTEM ] Delivering 46 to 47
  27. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  28. Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource uvsftpd is left in RUNNING status.(last op monitor finished with rc 0.)
  29. Nov 23 17:52:47 lucidcluster1 cib: [17174]: WARN: disconnect_cib_client: Disconnecting cluster-dlm/cib_rw...
  30. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [TOTEM ] Delivering MCAST message with seq 47 to pending delivery queue
  31. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: crm_shutdown: Escalating the shutdown
  32. Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: WARN: resource usmbd is left in RUNNING status.(last op monitor finished with rc 0.)
  33. Nov 23 17:52:47 lucidcluster1 cib: [17174]: WARN: disconnect_cib_client: Disconnecting <null>/cib_callback...
  34. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [SERV ] Unloading all Corosync service engines.
  35. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: do_log: FSA: Input I_ERROR from crm_shutdown() received in state S_POLICY_ENGINE
  36. Nov 23 17:52:47 lucidcluster1 lrmd: [17175]: debug: [lrmd] stopped
  37. Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: cib_shutdown: Disconnected 6 clients
  38. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: Shuting down Pacemaker
  39. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_RECOVERY [ input=I_ERROR cause=C_SHUTDOWN origin=crm_shutdown ]
  40. Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: cib_process_disconnect: All clients disconnected...
  41. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] debug: stop_child: Stopping CRM child "crmd"
  42. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: do_recover: Action A_RECOVER (0000000001000000) not supported
  43. Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: cib_ha_connection_destroy: Heartbeat disconnection complete... exiting
  44. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] notice: stop_child: Sent -15 to crmd: [17178]
  45. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: WARN: do_election_vote: Not voting in election, we're in state S_RECOVERY
  46. Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: cib_ha_connection_destroy: Exiting...
  47. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [TOTEM ] releasing messages up to and including 47
  48. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_dc_release: DC role released
  49. Nov 23 17:52:47 lucidcluster1 cib: [17174]: info: main: Done
  50. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] info: pcmk_ipc_exit: Client stonith-ng (conn=0x979940, async-conn=0x979940) left
  51. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: send_ipc_message: IPC Channel to 17174 is not connected
  52. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] info: pcmk_ipc_exit: Client cib (conn=0x982000, async-conn=0x982000) left
  53. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_te_control: Transitioner is now inactive
  54. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: do_log: FSA: Input I_TERMINATE from do_recover() received in state S_RECOVERY
  55. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_state_transition: State transition S_RECOVERY -> S_TERMINATE [ input=I_TERMINATE cause=C_FSA_INTERNAL origin=do_recover ]
  56. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_shutdown: Disconnecting STONITH...
  57. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: lrm_get_all_rscs(621): failed to receive a reply message of getall.
  58. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_lrm_control: Disconnected from the LRM
  59. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_ha_control: Disconnected from OpenAIS
  60. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_cib_control: Disconnecting CIB
  61. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: send_ipc_message: IPC Channel to 17174 is not connected
  62. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: cib_native_perform_op: Sending message to CIB service FAILED
  63. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
  64. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: ERROR: do_exit: Could not recover from internal error
  65. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: free_mem: Dropping I_PENDING: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_election_vote ]
  66. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_dc_release ]
  67. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: free_mem: Dropping I_TERMINATE: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_stop ]
  68. Nov 23 17:52:47 lucidcluster1 crmd: [17178]: info: do_exit: [crmd] stopped (2)
  69. Nov 23 17:52:47 lucidcluster1 corosync[17168]: [pcmk ] info: pcmk_ipc_exit: Client crmd (conn=0x986360, async-conn=0x986360) left
  70. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CKPT ] checkpoint exit conn 0x98ea20
  71. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] exit_fn for conn=0x992d80
  72. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] exit_fn for conn=0x99f7a0
  73. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] mcasted message added to pending queue
  74. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering 47 to 48
  75. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering MCAST message with seq 48 to pending delivery queue
  76. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] got procleave message from cluster node 1115334848
  77. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] got procleave message from cluster node 1115334848
  78. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: crmd confirmed stopped
  79. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: stop_child: Stopping CRM child "pengine"
  80. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: stop_child: Sent -15 to pengine: [17177]
  81. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: pengine confirmed stopped
  82. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: stop_child: Stopping CRM child "attrd"
  83. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: stop_child: Sent -15 to attrd: [17176]
  84. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: attrd confirmed stopped
  85. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: stop_child: Stopping CRM child "lrmd"
  86. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: stop_child: Sent -15 to lrmd: [17175]
  87. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: lrmd confirmed stopped
  88. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: cib confirmed stopped
  89. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: stonith-ng confirmed stopped
  90. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: send_cluster_id: Born-on set to: 532 (age)
  91. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: send_cluster_id: Local update: id=1115334848, born=532, seq=532
  92. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] info: update_member: 0x965ac0 Node 1115334848 ((null)) born on: 532
  93. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] info: update_member: Node lucidcluster1 now has process list: 00000000000000000000000000000002 (2)
  94. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] notice: pcmk_shutdown: Shutdown complete
  95. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] mcasted message added to pending queue
  96. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering 48 to 49
  97. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering MCAST message with seq 49 to pending delivery queue
  98. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [pcmk ] debug: pcmk_cluster_id_callback: Node update: lucidcluster1 (1.1.2)
  99. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: Pacemaker Cluster Manager 1.1.2
  100. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] releasing messages up to and including 48
  101. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] releasing messages up to and including 49
  102. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync extended virtual synchrony service
  103. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync configuration service
  104. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] exit_fn for conn=0x9970e0
  105. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] mcasted message added to pending queue
  106. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering 49 to 4a
  107. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] Delivering MCAST message with seq 4a to pending delivery queue
  108. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [CPG ] got procleave message from cluster node 1115334848
  109. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
  110. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] releasing messages up to and including 4a
  111. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync cluster config database access v1.01
  112. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync profile loading service
  113. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: openais checkpoint service B.01.01
  114. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [QUORUM] lib_exit_fn: conn=0x99b440
  115. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
  116. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [TOTEM ] sending join/leave message
  117. Nov 23 17:52:48 lucidcluster1 corosync[17168]: [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:170.
  118.  
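A minimal sketch of the command behind the log above (whether it was 'stop corosync' or a full system shutdown is not stated in the paste, so the job name and command here are assumptions). The key difference is visible in the simultaneous "Invoking handler for signal 15: Terminated" lines: crmd, pengine, attrd, lrmd, cib and stonith-ng all receive TERM in the same second as corosync, so crmd escalates the shutdown and lrmd reports clvm:0, dlm:0, usmbd, unmbd and uvsftpd left in RUNNING state.

    # upstart path (assumed): stopping the job appears to deliver TERM to the
    # Pacemaker children as well as to the corosync daemon itself
    stop corosync
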
  119. And this is 'pkill -TERM corosync':
  120.  
  121. Nov 23 18:03:32 lucidcluster1 corosync[788]: [SERV ] Unloading all Corosync service engines.
  122. Nov 23 18:03:32 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: Shuting down Pacemaker
  123. Nov 23 18:03:32 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "crmd"
  124. Nov 23 18:03:32 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to crmd: [802]
  125. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  126. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: crm_shutdown: Requesting shutdown
  127. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
  128. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
  129. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_shutdown_req: Sending shutdown request to DC: lucidcluster1
  130. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: handle_shutdown_request: Creating shutdown request for lucidcluster1 (state=S_POLICY_ENGINE)
  131. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  132. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 42 to 43
  133. Nov 23 18:03:32 lucidcluster1 attrd: [800]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (1290535412)
  134. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 43 to pending delivery queue
  135. Nov 23 18:03:32 lucidcluster1 attrd: [800]: info: attrd_perform_update: Sent update 11: shutdown=1290535412
  136. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=lucidcluster1, magic=NA, cib=0.244.17) : Transient attribute: update
  137. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 43
  138. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_pe_invoke: Query 44: Requesting the current CIB: S_POLICY_ENGINE
  139. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  140. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_pe_invoke_callback: Invoking the PE: query=44, ref=pe_calc-dc-1290535412-26, seq=540, quorate=0
  141. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: unpack_config: Startup probes: enabled
  142. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  143. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: unpack_config: On loss of CCM Quorum: Ignore
  144. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 43 to 45
  145. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
  146. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 44 to pending delivery queue
  147. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: unpack_domains: Unpacking domains
  148. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 45 to pending delivery queue
  149. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: determine_online_status: Node lucidcluster1 is shutting down
  150. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 45
  151. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: unpack_rsc_op: Operation usmbd_monitor_0 found resource usmbd active on lucidcluster1
  152. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  153. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: unpack_rsc_op: Operation unmbd_monitor_0 found resource unmbd active on lucidcluster1
  154. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  155. Nov 23 18:03:32 lucidcluster1 pengine: [801]: ERROR: unpack_operation: Specifying on_fail=fence and stonith-enabled=false makes no sense
  156. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 45 to 47
  157. Nov 23 18:03:32 lucidcluster1 pengine: [801]: ERROR: unpack_operation: Specifying on_fail=fence and stonith-enabled=false makes no sense
  158. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 46 to pending delivery queue
  159. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: native_print: uvsftpd#011(upstart:vsftpd):#011Started lucidcluster1
  160. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 47 to pending delivery queue
  161. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: group_print: Resource Group: samba
  162. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 47
  163. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: native_print: usmbd#011(upstart:smbd):#011Started lucidcluster1
  164. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  165. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: native_print: unmbd#011(upstart:nmbd):#011Started lucidcluster1
  166. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 47 to 48
  167. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: clone_print: Clone Set: dlm-clone [dlm]
  168. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 48 to pending delivery queue
  169. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: short_print: Started: [ lucidcluster1 ]
  170. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 48
  171. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: short_print: Stopped: [ dlm:1 dlm:2 ]
  172. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: clone_print: Clone Set: clvm-clone [clvm]
  173. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: short_print: Started: [ lucidcluster1 ]
  174. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: short_print: Stopped: [ clvm:1 clvm:2 ]
  175. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource uvsftpd cannot run anywhere
  176. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: rsc_merge_weights: usmbd: Rolling back scores from unmbd
  177. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource usmbd cannot run anywhere
  178. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource unmbd cannot run anywhere
  179. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: rsc_merge_weights: dlm-clone: Rolling back scores from clvm-clone
  180. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource dlm:0 cannot run anywhere
  181. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource dlm:1 cannot run anywhere
  182. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource dlm:2 cannot run anywhere
  183. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource clvm:0 cannot run anywhere
  184. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource clvm:1 cannot run anywhere
  185. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: native_color: Resource clvm:2 cannot run anywhere
  186. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: stage6: Scheduling Node lucidcluster1 for shutdown
  187. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource uvsftpd#011(lucidcluster1)
  188. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource usmbd#011(lucidcluster1)
  189. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource unmbd#011(lucidcluster1)
  190. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource dlm:0#011(lucidcluster1)
  191. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Leave resource dlm:1#011(Stopped)
  192. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Leave resource dlm:2#011(Stopped)
  193. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Stop resource clvm:0#011(lucidcluster1)
  194. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Leave resource clvm:1#011(Stopped)
  195. Nov 23 18:03:32 lucidcluster1 pengine: [801]: notice: LogActions: Leave resource clvm:2#011(Stopped)
  196. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  197. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: unpack_graph: Unpacked transition 4: 13 actions in 13 synapses
  198. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1290535412-26) derived from /var/lib/pengine/pe-input-25734.bz2
  199. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 9: stop uvsftpd_stop_0 on lucidcluster1 (local)
  200. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[11] on upstart::vsftpd::uvsftpd for client 802, its parameters: CRM_meta_name=[monitor] crm_feature_set=[3.0.2] CRM_meta_timeout=[20000] CRM_meta_interval=[20000] cancelled
  201. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 11 cancelled
  202. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=9:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=uvsftpd_stop_0 )
  203. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[15] on upstart::vsftpd::uvsftpd for client 802, its parameters: to the operation list.
  204. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: rsc:uvsftpd:15: stop
  205. Nov 23 18:03:32 lucidcluster1 lrmd: [1517]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
  206. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
  207. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 11: stop unmbd_stop_0 on lucidcluster1 (local)
  208. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[9] on upstart::nmbd::unmbd for client 802, its parameters: CRM_meta_name=[monitor] crm_feature_set=[3.0.2] CRM_meta_timeout=[20000] CRM_meta_interval=[20000] cancelled
  209. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 9 cancelled
  210. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=11:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=unmbd_stop_0 )
  211. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[16] on upstart::nmbd::unmbd for client 802, its parameters: to the operation list.
  212. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: rsc:unmbd:16: stop
  213. Nov 23 18:03:32 lucidcluster1 lrmd: [1518]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
  214. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 24 fired and confirmed
  215. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation uvsftpd_monitor_20000 (call=11, status=1, cib-update=0, confirmed=true) Cancelled
  216. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation unmbd_monitor_20000 (call=9, status=1, cib-update=0, confirmed=true) Cancelled
  217. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 21: stop clvm:0_stop_0 on lucidcluster1 (local)
  218. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[14] on ocf::clvmd::clvm:0 for client 802, its parameters: CRM_meta_clone=[0] daemon_timeout=[20] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[3] CRM_meta_notify=[false] crm_feature_set=[3.0.2] CRM_meta_globally_unique=[false] CRM_meta_on_fail=[fence] CRM_meta_name=[monitor] CRM_meta_interval=[10000] CRM_meta_timeout=[20000] cancelled
  219. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 14 cancelled
  220. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=21:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=clvm:0_stop_0 )
  221. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[17] on ocf::clvmd::clvm:0 for client 802, its parameters: to the operation list.
  222. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: rsc:clvm:0:17: stop
  223. Nov 23 18:03:32 lucidcluster1 lrmd: [1519]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
  224. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation clvm:0_monitor_10000 (call=14, status=1, cib-update=0, confirmed=true) Cancelled
  225. Nov 23 18:03:32 lucidcluster1 init: vsftpd main process (1184) killed by TERM signal
  226. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: Managed unmbd:stop process 1518 exited with return code 0.
  227. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation unmbd_stop_0 (call=16, rc=0, cib-update=45, confirmed=true) ok
  228. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: match_graph_event: Action unmbd_stop_0 (11) confirmed on lucidcluster1 (rc=0)
  229. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 10: stop usmbd_stop_0 on lucidcluster1 (local)
  230. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  231. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  232. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 48 to 4a
  233. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 49 to pending delivery queue
  234. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4a to pending delivery queue
  235. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 4a
  236. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[8] on upstart::smbd::usmbd for client 802, its parameters: CRM_meta_name=[monitor] crm_feature_set=[3.0.2] CRM_meta_timeout=[20000] CRM_meta_interval=[20000] cancelled
  237. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 8 cancelled
  238. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=10:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=usmbd_stop_0 )
  239. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[18] on upstart::smbd::usmbd for client 802, its parameters: to the operation list.
  240. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: rsc:usmbd:18: stop
  241. Nov 23 18:03:32 lucidcluster1 lrmd: [1521]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
  242. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation usmbd_monitor_20000 (call=8, status=1, cib-update=0, confirmed=true) Cancelled
  243. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: Managed uvsftpd:stop process 1517 exited with return code 0.
  244. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation uvsftpd_stop_0 (call=15, rc=0, cib-update=46, confirmed=true) ok
  245. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: match_graph_event: Action uvsftpd_stop_0 (9) confirmed on lucidcluster1 (rc=0)
  246. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  247. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  248. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 4a to 4c
  249. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4b to pending delivery queue
  250. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4c to pending delivery queue
  251. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 4c
  252. Nov 23 18:03:32 lucidcluster1 clvmd[1519]: INFO: Stopping clvm:0
  253. Nov 23 18:03:32 lucidcluster1 clvmd[1519]: INFO: Stopping clvmd
  254. Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] got leave request on 0x1f96e20
  255. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  256. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 4c to 4d
  257. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4d to pending delivery queue
  258. Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] got procleave message from cluster node 1115334848
  259. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 4d
  260. Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] cpg finalize for conn=0x1f96e20
  261. Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] exit_fn for conn=0x1f96e20
  262. Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] cpg finalize for conn=0x1f8e760
  263. Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] exit_fn for conn=0x1f8e760
  264. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  265. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 4d to 4e
  266. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4e to pending delivery queue
  267. Nov 23 18:03:32 lucidcluster1 corosync[788]: [CPG ] got procleave message from cluster node 1115334848
  268. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 4e
  269. Nov 23 18:03:32 lucidcluster1 corosync[788]: [QUORUM] lib_exit_fn: conn=0x1f92ac0
  270. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: RA output: (usmbd:stop:stderr) process 1521:
  271. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: RA output: (usmbd:stop:stderr) The last reference on a connection was dropped without closing the connection. This is a bug in an application. See dbus_connection_unref() documentation for details.#012Most likely, the application was supposed to call dbus_connection_close(), since this is a private connection.
  272. Nov 23 18:03:32 lucidcluster1 lrmd: [799]: info: Managed usmbd:stop process 1521 exited with return code 0.
  273. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation usmbd_stop_0 (call=18, rc=0, cib-update=47, confirmed=true) ok
  274. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: match_graph_event: Action usmbd_stop_0 (10) confirmed on lucidcluster1 (rc=0)
  275. Nov 23 18:03:32 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 15 fired and confirmed
  276. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  277. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  278. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering 4e to 50
  279. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 4f to pending delivery queue
  280. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 50 to pending delivery queue
  281. Nov 23 18:03:32 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 50
  282. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: process_pe_message: Transition 4: PEngine Input stored in: /var/lib/pengine/pe-input-25734.bz2
  283. Nov 23 18:03:32 lucidcluster1 pengine: [801]: info: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  284. Nov 23 18:03:33 lucidcluster1 lrmd: [799]: info: Managed clvm:0:stop process 1519 exited with return code 0.
  285. Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation clvm:0_stop_0 (call=17, rc=0, cib-update=48, confirmed=true) ok
  286. Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: match_graph_event: Action clvm:0_stop_0 (21) confirmed on lucidcluster1 (rc=0)
  287. Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
  288. Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 19 fired and confirmed
  289. Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: te_rsc_command: Initiating action 16: stop dlm:0_stop_0 on lucidcluster1 (local)
  290. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  291. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  292. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering 50 to 52
  293. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 51 to pending delivery queue
  294. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 52 to pending delivery queue
  295. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 52
  296. Nov 23 18:03:33 lucidcluster1 lrmd: [799]: info: cancel_op: operation monitor[12] on ocf::controld::dlm:0 for client 802, its parameters: CRM_meta_clone=[0] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[3] CRM_meta_notify=[false] crm_feature_set=[3.0.2] CRM_meta_globally_unique=[false] args=[-q 0] CRM_meta_on_fail=[fence] CRM_meta_name=[monitor] CRM_meta_interval=[20000] CRM_meta_timeout=[20000] cancelled
  297. Nov 23 18:03:33 lucidcluster1 lrmd: [799]: debug: on_msg_cancel_op: operation 12 cancelled
  298. Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: do_lrm_rsc_op: Performing key=16:4:0:6ee25bb4-d326-40fd-aa9f-74bb3c0fd895 op=dlm:0_stop_0 )
  299. Nov 23 18:03:33 lucidcluster1 lrmd: [799]: debug: on_msg_perform_op: add an operation operation stop[19] on ocf::controld::dlm:0 for client 802, its parameters: to the operation list.
  300. Nov 23 18:03:33 lucidcluster1 lrmd: [799]: info: rsc:dlm:0:19: stop
  301. Nov 23 18:03:33 lucidcluster1 lrmd: [1537]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
  302. Nov 23 18:03:33 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation dlm:0_monitor_20000 (call=12, status=1, cib-update=0, confirmed=true) Cancelled
  303. Nov 23 18:03:33 lucidcluster1 corosync[788]: [CPG ] got leave request on 0x1f8a400
  304. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  305. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering 52 to 53
  306. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 53 to pending delivery queue
  307. Nov 23 18:03:33 lucidcluster1 corosync[788]: [CPG ] got procleave message from cluster node 1115334848
  308. Nov 23 18:03:33 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 53
  309. Nov 23 18:03:33 lucidcluster1 corosync[788]: [CPG ] cpg finalize for conn=0x1f8a400
  310. Nov 23 18:03:33 lucidcluster1 dlm_controld.pcmk: [1198]: notice: terminate_ais_connection: Disconnecting from AIS
  311. Nov 23 18:03:33 lucidcluster1 lrmd: [799]: info: RA output: (dlm:0:stop:stderr) dlm_controld.pcmk: no process found
  312. Nov 23 18:03:33 lucidcluster1 corosync[788]: [CKPT ] checkpoint exit conn 0x1f860a0
  313. Nov 23 18:03:33 lucidcluster1 corosync[788]: [CPG ] exit_fn for conn=0x1f8a400
  314. Nov 23 18:03:34 lucidcluster1 lrmd: [799]: info: Managed dlm:0:stop process 1537 exited with return code 0.
  315. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: process_lrm_event: LRM operation dlm:0_stop_0 (call=19, rc=0, cib-update=49, confirmed=true) ok
  316. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: match_graph_event: Action dlm:0_stop_0 (16) confirmed on lucidcluster1 (rc=0)
  317. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 20 fired and confirmed
  318. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_pseudo_action: Pseudo action 6 fired and confirmed
  319. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_crm_command: Executing crm-event (28): do_shutdown on lucidcluster1
  320. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_crm_command: crm-event (28) is a local shutdown
  321. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: run_graph: ====================================================
  322. Nov 23 18:03:34 lucidcluster1 crmd: [802]: notice: run_graph: Transition 4 (Complete=13, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-25734.bz2): Complete
  323. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: te_graph_trigger: Transition 4 is now complete
  324. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_STOPPING [ input=I_STOP cause=C_FSA_INTERNAL origin=notify_crmd ]
  325. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_dc_release: DC role released
  326. Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  327. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: pe_connection_destroy: Connection to the Policy Engine released
  328. Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  329. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_te_control: Transitioner is now inactive
  330. Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] Delivering 53 to 55
  331. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_shutdown: Disconnecting STONITH...
  332. Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 54 to pending delivery queue
  333. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
  334. Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 55 to pending delivery queue
  335. Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc clvm:0 is LRM_RSC_IDLE
  336. Nov 23 18:03:34 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 55
  337. Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc unmbd is LRM_RSC_IDLE
  338. Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc dlm:0 is LRM_RSC_IDLE
  339. Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc uvsftpd is LRM_RSC_IDLE
  340. Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_msg_get_state:state of rsc usmbd is LRM_RSC_IDLE
  341. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_lrm_control: Disconnected from the LRM
  342. Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: on_receive_cmd: the IPC to client [pid:802] disconnected.
  343. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_ha_control: Disconnected from OpenAIS
  344. Nov 23 18:03:34 lucidcluster1 lrmd: [799]: debug: unregister_client: client crmd [pid:802] is unregistered
  345. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_cib_control: Disconnecting CIB
  346. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: crmd_cib_connection_destroy: Connection to the CIB terminated...
  347. Nov 23 18:03:34 lucidcluster1 cib: [798]: info: cib_process_readwrite: We are now in R/O mode
  348. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
  349. Nov 23 18:03:34 lucidcluster1 cib: [798]: WARN: send_ipc_message: IPC Channel to 802 is not connected
  350. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_dc_release ]
  351. Nov 23 18:03:34 lucidcluster1 cib: [798]: WARN: send_via_callback_channel: Delivery of reply to client 802/3b536b63-580c-4934-a18b-66c21c31d557 failed
  352. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: free_mem: Dropping I_TERMINATE: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_stop ]
  353. Nov 23 18:03:34 lucidcluster1 cib: [798]: WARN: do_local_notify: A-Sync reply to crmd failed: reply failed
  354. Nov 23 18:03:34 lucidcluster1 crmd: [802]: info: do_exit: [crmd] stopped (0)
  355. Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] info: pcmk_ipc_exit: Client crmd (conn=0x1f7d9e0, async-conn=0x1f7d9e0) left
  356. Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: crmd confirmed stopped
  357. Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "pengine"
  358. Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to pengine: [801]
  359. Nov 23 18:03:34 lucidcluster1 pengine: [801]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  360. Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: pengine confirmed stopped
  361. Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "attrd"
  362. Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to attrd: [800]
  363. Nov 23 18:03:34 lucidcluster1 attrd: [800]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  364. Nov 23 18:03:34 lucidcluster1 attrd: [800]: info: attrd_shutdown: Exiting
  365. Nov 23 18:03:34 lucidcluster1 attrd: [800]: info: main: Exiting...
  366. Nov 23 18:03:34 lucidcluster1 attrd: [800]: info: attrd_cib_connection_destroy: Connection to the CIB terminated...
  367. Nov 23 18:03:34 lucidcluster1 corosync[788]: [pcmk ] info: pcmk_ipc_exit: Client attrd (conn=0x1f6b860, async-conn=0x1f6b860) left
  368. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: attrd confirmed stopped
  369. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "lrmd"
  370. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to lrmd: [799]
  371. Nov 23 18:03:35 lucidcluster1 lrmd: [799]: info: lrmd is shutting down
  372. Nov 23 18:03:35 lucidcluster1 lrmd: [799]: debug: [lrmd] stopped
  373. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: lrmd confirmed stopped
  374. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "cib"
  375. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to cib: [798]
  376. Nov 23 18:03:35 lucidcluster1 cib: [798]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  377. Nov 23 18:03:35 lucidcluster1 cib: [798]: info: cib_shutdown: Disconnected 0 clients
  378. Nov 23 18:03:35 lucidcluster1 cib: [798]: info: cib_process_disconnect: All clients disconnected...
  379. Nov 23 18:03:35 lucidcluster1 cib: [798]: info: cib_ha_connection_destroy: Heartbeat disconnection complete... exiting
  380. Nov 23 18:03:35 lucidcluster1 cib: [798]: info: cib_ha_connection_destroy: Exiting...
  381. Nov 23 18:03:35 lucidcluster1 cib: [798]: info: main: Done
  382. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] info: pcmk_ipc_exit: Client cib (conn=0x1f79680, async-conn=0x1f79680) left
  383. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: cib confirmed stopped
  384. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: stop_child: Stopping CRM child "stonith-ng"
  385. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: stop_child: Sent -15 to stonith-ng: [797]
  386. Nov 23 18:03:35 lucidcluster1 stonith-ng: [797]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  387. Nov 23 18:03:35 lucidcluster1 stonith-ng: [797]: info: stonith_shutdown: Terminating with 0 clients
  388. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] info: pcmk_ipc_exit: Client stonith-ng (conn=0x1f67340, async-conn=0x1f67340) left
  389. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: stonith-ng confirmed stopped
  390. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: send_cluster_id: Born-on set to: 540 (age)
  391. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: send_cluster_id: Local update: id=1115334848, born=540, seq=540
  392. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] info: update_member: 0x1f5d8c0 Node 1115334848 ((null)) born on: 540
  393. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] info: update_member: Node lucidcluster1 now has process list: 00000000000000000000000000000002 (2)
  394. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] notice: pcmk_shutdown: Shutdown complete
  395. Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] mcasted message added to pending queue
  396. Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] Delivering 55 to 56
  397. Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] Delivering MCAST message with seq 56 to pending delivery queue
  398. Nov 23 18:03:35 lucidcluster1 corosync[788]: [pcmk ] debug: pcmk_cluster_id_callback: Node update: lucidcluster1 (1.1.2)
  399. Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: Pacemaker Cluster Manager 1.1.2
  400. Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] releasing messages up to and including 56
  401. Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync extended virtual synchrony service
  402. Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync configuration service
  403. Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
  404. Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync cluster config database access v1.01
  405. Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync profile loading service
  406. Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: openais checkpoint service B.01.01
  407. Nov 23 18:03:35 lucidcluster1 corosync[788]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
  408. Nov 23 18:03:35 lucidcluster1 corosync[788]: [TOTEM ] sending join/leave message
  409. Nov 23 18:03:35 lucidcluster1 corosync[788]: [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:170.
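
For comparison, the second log comes from a single TERM sent only to the corosync daemon; corosync's pcmk_shutdown then stops crmd, pengine, attrd, lrmd, cib and stonith-ng in order, and crmd gets the chance to stop the managed resources (uvsftpd, the samba group, dlm:0, clvm:0) before it exits.

    # manual path: signal only corosync and let pcmk_shutdown stop the
    # Pacemaker children one at a time
    pkill -TERM corosync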