Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_trigger_update: Sending flush op to all hosts for: standby (true)
Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_perform_update: Sent update 31: standby=true
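
NOTE: The two attrd lines above record node02 being placed into standby. Because attrd, which manages transient status-section attributes, performs the update, the standby was most likely set with a reboot lifetime; that would also explain why node02 hosts resources again after it rejoins at 10:28:42 below. A sketch of the kind of command that produces this, assuming the crm_attribute tool shipped with this Pacemaker 1.1.10 stack:

    # Put node02 into standby until its next restart; resources migrate away.
    # attrd handling the update implies a transient (reboot-lifetime) attribute.
    crm_attribute --node node02 --name standby --update true --lifetime reboot
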
Oct 27 10:28:10 node02 pacemaker: Waiting for shutdown of managed resources
Oct 27 10:28:10 node02 crmd[6866]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 27 10:28:10 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Move ClusterIP#011(Started node02 -> node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Demote WebData:0#011(Master -> Stopped node02)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Promote WebData:1#011(Slave -> Master node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Move WebFS#011(Started node02 -> node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 11: /var/lib/pacemaker/pengine/pe-input-119.bz2
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 7: stop ClusterIP_stop_0 on node02 (local)
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 2: cancel WebData_cancel_60000 on node01
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 39: stop WebFS_stop_0 on node02 (local)
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 52: notify WebData_pre_notify_demote_0 on node02 (local)
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 54: notify WebData_pre_notify_demote_0 on node01
Oct 27 10:28:10 node02 Filesystem(WebFS)[9165]: INFO: Running stop for /dev/drbd/by-res/wwwdata on /var/www/html
Oct 27 10:28:10 node02 Filesystem(WebFS)[9165]: INFO: Trying to unmount /var/www/html
Oct 27 10:28:10 node02 IPaddr2(ClusterIP)[9164]: INFO: IP status = ok, IP_CIP=
Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation ClusterIP_stop_0 (call=76, rc=0, cib-update=51, confirmed=true) ok
Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=81, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:10 node02 Filesystem(WebFS)[9165]: INFO: unmounted /var/www/html successfully
Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebFS_stop_0 (call=78, rc=0, cib-update=52, confirmed=true) ok
Oct 27 10:28:10 node02 crmd[6866]: notice: run_graph: Transition 11 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=13, Source=/var/lib/pacemaker/pengine/pe-input-119.bz2): Stopped
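
NOTE: The repeated "Forcing WebSite away from ... after 1000000 failures (max=1)" warnings in every policy-engine run mean WebSite's fail-count has reached INFINITY (logged as 1000000) on both nodes while migration-threshold is 1, so the resource is banned everywhere until the failure is cleared. A sketch of how to inspect and reset that state with tools known to ship with Pacemaker 1.1.10 (the attribute name is taken from the fail-count-WebSite updates later in this log):

    # Query the transient fail-count attribute for WebSite on node02
    crm_attribute -N node02 -n fail-count-WebSite -l reboot -G

    # Clear WebSite's failure history so the policy engine may place it again
    crm_resource --resource WebSite --cleanup
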
Oct 27 10:28:10 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Start ClusterIP#011(node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Demote WebData:0#011(Master -> Stopped node02)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Promote WebData:1#011(Slave -> Master node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Start WebFS#011(node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 12: /var/lib/pacemaker/pengine/pe-input-120.bz2
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 5: start ClusterIP_start_0 on node01
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 48: notify WebData_pre_notify_demote_0 on node02 (local)
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 50: notify WebData_pre_notify_demote_0 on node01
Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=86, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 7: demote WebData_demote_0 on node02 (local)
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 6: monitor ClusterIP_monitor_30000 on node01
Oct 27 10:28:10 node02 kernel: block drbd1: role( Primary -> Secondary )
Oct 27 10:28:10 node02 kernel: block drbd1: bitmap WRITE of 0 pages took 0 jiffies
Oct 27 10:28:10 node02 kernel: block drbd1: 4 KB (1 bits) marked out-of-sync by on disk bit-map.
Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_demote_0 (call=89, rc=0, cib-update=54, confirmed=true) ok
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 49: notify WebData_post_notify_demote_0 on node02 (local)
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 51: notify WebData_post_notify_demote_0 on node01
Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (1000)
Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_perform_update: Sent update 33: master-WebData=1000
Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=92, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:10 node02 crmd[6866]: notice: run_graph: Transition 12 (Complete=13, Pending=0, Fired=0, Skipped=13, Incomplete=8, Source=/var/lib/pacemaker/pengine/pe-input-120.bz2): Stopped
Oct 27 10:28:10 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Stop WebData:0#011(node02)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Promote WebData:1#011(Slave -> Master node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Start WebFS#011(node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 13: /var/lib/pacemaker/pengine/pe-input-121.bz2
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 44: notify WebData_pre_notify_stop_0 on node02 (local)
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 45: notify WebData_pre_notify_stop_0 on node01
Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=95, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 8: stop WebData_stop_0 on node02 (local)
Oct 27 10:28:10 node02 kernel: drbd wwwdata: conn( WFConnection -> Disconnecting )
Oct 27 10:28:10 node02 kernel: drbd wwwdata: Discarding network configuration.
Oct 27 10:28:10 node02 kernel: drbd wwwdata: Connection closed
Oct 27 10:28:10 node02 kernel: drbd wwwdata: conn( Disconnecting -> StandAlone )
Oct 27 10:28:10 node02 kernel: drbd wwwdata: receiver terminated
Oct 27 10:28:10 node02 kernel: drbd wwwdata: Terminating drbd_r_wwwdata
Oct 27 10:28:10 node02 kernel: block drbd1: disk( UpToDate -> Failed )
Oct 27 10:28:10 node02 kernel: block drbd1: bitmap WRITE of 0 pages took 0 jiffies
Oct 27 10:28:10 node02 kernel: block drbd1: 4 KB (1 bits) marked out-of-sync by on disk bit-map.
Oct 27 10:28:10 node02 kernel: block drbd1: disk( Failed -> Diskless )
Oct 27 10:28:10 node02 kernel: block drbd1: drbd_bm_resize called with capacity == 0
Oct 27 10:28:10 node02 kernel: drbd wwwdata: Terminating drbd_w_wwwdata
Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (<null>)
Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_perform_update: Sent delete 35: node=node02, attr=master-WebData, id=<n/a>, set=(null), section=status
Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_stop_0 (call=98, rc=0, cib-update=56, confirmed=true) ok
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 46: notify WebData_post_notify_stop_0 on node01
Oct 27 10:28:10 node02 crmd[6866]: notice: run_graph: Transition 13 (Complete=10, Pending=0, Fired=0, Skipped=7, Incomplete=4, Source=/var/lib/pacemaker/pengine/pe-input-121.bz2): Stopped
Oct 27 10:28:10 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Promote WebData:0#011(Slave -> Master node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Start WebFS#011(node01)
Oct 27 10:28:10 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 14: /var/lib/pacemaker/pengine/pe-input-122.bz2
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 45: notify WebData_pre_notify_promote_0 on node01
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 10: promote WebData_promote_0 on node01
Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 46: notify WebData_post_notify_promote_0 on node01
Oct 27 10:28:11 node02 crmd[6866]: notice: run_graph: Transition 14 (Complete=9, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-122.bz2): Stopped
Oct 27 10:28:11 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 27 10:28:11 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
Oct 27 10:28:11 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
Oct 27 10:28:11 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
Oct 27 10:28:11 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
Oct 27 10:28:11 node02 pengine[6865]: notice: LogActions: Start WebFS#011(node01)
Oct 27 10:28:11 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 15: /var/lib/pacemaker/pengine/pe-input-123.bz2
Oct 27 10:28:11 node02 crmd[6866]: notice: te_rsc_command: Initiating action 36: start WebFS_start_0 on node01
Oct 27 10:28:11 node02 crmd[6866]: notice: run_graph: Transition 15 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-123.bz2): Complete
Oct 27 10:28:11 node02 crmd[6866]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 27 10:28:12 node02 pacemaker: Leaving fence domain
Oct 27 10:28:13 node02 pacemaker: Stopping fenced 6697
Oct 27 10:28:13 node02 pacemaker: Signaling Pacemaker Cluster Manager to terminate
Oct 27 10:28:13 node02 pacemakerd[6855]: notice: pcmk_shutdown_worker: Shuting down Pacemaker
Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping crmd: Sent -15 to process 6866
Oct 27 10:28:13 node02 crmd[6866]: notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
Oct 27 10:28:13 node02 crmd[6866]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
Oct 27 10:28:13 node02 attrd[6864]: notice: attrd_trigger_update: Sending flush op to all hosts for: shutdown (1414376893)
Oct 27 10:28:13 node02 attrd[6864]: notice: attrd_perform_update: Sent update 40: shutdown=1414376893
Oct 27 10:28:13 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 27 10:28:13 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
Oct 27 10:28:13 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
Oct 27 10:28:13 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
Oct 27 10:28:13 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
Oct 27 10:28:13 node02 pengine[6865]: notice: stage6: Scheduling Node node02 for shutdown
Oct 27 10:28:13 node02 pacemaker: Waiting for cluster services to unload
Oct 27 10:28:13 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 16: /var/lib/pacemaker/pengine/pe-input-124.bz2
Oct 27 10:28:13 node02 crmd[6866]: notice: run_graph: Transition 16 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-124.bz2): Complete
Oct 27 10:28:13 node02 crmd[6866]: notice: lrm_state_verify_stopped: Stopped 0 recurring operations at shutdown... waiting (0 ops remaining)
Oct 27 10:28:13 node02 crmd[6866]: notice: do_lrm_control: Disconnected from the LRM
Oct 27 10:28:13 node02 crmd[6866]: notice: terminate_cs_connection: Disconnecting from Corosync
Oct 27 10:28:13 node02 cib[6861]: warning: qb_ipcs_event_sendv: new_event_notification (6861-6866-11): Broken pipe (32)
Oct 27 10:28:13 node02 cib[6861]: warning: do_local_notify: A-Sync reply to crmd failed: No message of desired type
Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping pengine: Sent -15 to process 6865
Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping attrd: Sent -15 to process 6864
Oct 27 10:28:13 node02 attrd[6864]: notice: main: Exiting...
Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping lrmd: Sent -15 to process 6863
Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping stonith-ng: Sent -15 to process 6862
Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping cib: Sent -15 to process 6861
Oct 27 10:28:13 node02 cib[6861]: warning: qb_ipcs_event_sendv: new_event_notification (6861-6862-12): Broken pipe (32)
Oct 27 10:28:13 node02 cib[6861]: warning: cib_notify_send_one: Notification of client crmd/ae320fe4-fd6f-45f2-b9d2-b47b146e1143 failed
Oct 27 10:28:13 node02 cib[6861]: notice: terminate_cs_connection: Disconnecting from Corosync
Oct 27 10:28:13 node02 cib[6861]: notice: terminate_cs_connection: Disconnecting from Corosync
Oct 27 10:28:13 node02 pacemakerd[6855]: notice: pcmk_shutdown_worker: Shutdown complete
Oct 27 10:28:15 node02 kernel: dlm: closing connection to node 1
Oct 27 10:28:15 node02 kernel: dlm: closing connection to node 2
Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Unloading all Corosync service engines.
Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync extended virtual synchrony service
Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync configuration service
Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync cluster config database access v1.01
Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync profile loading service
Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: openais checkpoint service B.01.01
Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync CMAN membership service 2.90
Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
Oct 27 10:28:15 node02 corosync[6644]: [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:1947.
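
NOTE: Corosync exits at 10:28:15 and the stack reappears at 10:28:32 under new PIDs, so the cluster services were restarted in between. The log does not show how; on a CMAN-based RHEL 6 installation like this one it would typically be the init scripts, roughly:

    # Restart the cluster stack on node02 (CMAN/corosync first, then Pacemaker)
    service cman start
    service pacemaker start
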
Oct 27 10:28:32 node02 kernel: DLM (built Sep 9 2014 21:37:32) installed
Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Corosync built-in features: nss dbus rdma snmp
Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Successfully read config from /etc/cluster/cluster.conf
Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Successfully parsed cman config
Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] The network interface [192.168.1.112] is now up.
Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Using quorum provider quorum_cman
Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
Oct 27 10:28:32 node02 corosync[9648]: [CMAN ] CMAN 3.0.12.1 (built Sep 25 2014 15:07:47) started
Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync CMAN membership service 2.90
Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: openais checkpoint service B.01.01
Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync extended virtual synchrony service
Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync configuration service
Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01
Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync cluster config database access v1.01
Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync profile loading service
Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Using quorum provider quorum_cman
Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.
Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Members[1]: 2
Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Members[1]: 2
Oct 27 10:28:32 node02 corosync[9648]: [CPG ] chosen downlist: sender r(0) ip(192.168.1.112) ; members(old:0 left:0)
Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Completed service synchronization, ready to provide service.
Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 27 10:28:32 node02 corosync[9648]: [CMAN ] quorum regained, resuming activity
Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] This node is within the primary component and will provide service.
Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Members[2]: 1 2
Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Members[2]: 1 2
Oct 27 10:28:32 node02 corosync[9648]: [CPG ] chosen downlist: sender r(0) ip(192.168.1.111) ; members(old:1 left:0)
Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Completed service synchronization, ready to provide service.
Oct 27 10:28:36 node02 fenced[9701]: fenced 3.0.12.1 started
Oct 27 10:28:36 node02 dlm_controld[9720]: dlm_controld 3.0.12.1 started
Oct 27 10:28:37 node02 gfs_controld[9776]: gfs_controld 3.0.12.1 started
Oct 27 10:28:38 node02 pacemaker: Starting Pacemaker Cluster Manager
Oct 27 10:28:38 node02 pacemakerd[9859]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Oct 27 10:28:38 node02 pacemakerd[9859]: notice: main: Starting Pacemaker 1.1.10-14.el6_5.3 (Build: 368c726): generated-manpages agent-manpages ascii-docs publican-docs ncurses libqb-logging libqb-ipc nagios corosync-plugin cman
Oct 27 10:28:38 node02 lrmd[9867]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Oct 27 10:28:38 node02 cib[9865]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Oct 27 10:28:38 node02 stonith-ng[9866]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Oct 27 10:28:38 node02 attrd[9868]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Oct 27 10:28:38 node02 pengine[9869]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Oct 27 10:28:38 node02 attrd[9868]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Oct 27 10:28:38 node02 stonith-ng[9866]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Oct 27 10:28:38 node02 crmd[9870]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Oct 27 10:28:38 node02 crmd[9870]: notice: main: CRM Git Version: 368c726
Oct 27 10:28:38 node02 attrd[9868]: notice: main: Starting mainloop...
Oct 27 10:28:38 node02 cib[9865]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Oct 27 10:28:39 node02 stonith-ng[9866]: notice: setup_cib: Watching for stonith topology changes
Oct 27 10:28:39 node02 crmd[9870]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Oct 27 10:28:39 node02 stonith-ng[9866]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 27 10:28:39 node02 crmd[9870]: notice: cman_event_callback: Membership 68: quorum acquired
Oct 27 10:28:39 node02 crmd[9870]: notice: crm_update_peer_state: cman_event_callback: Node node01[1] - state is now member (was (null))
Oct 27 10:28:39 node02 crmd[9870]: notice: crm_update_peer_state: cman_event_callback: Node node02[2] - state is now member (was (null))
Oct 27 10:28:39 node02 crmd[9870]: notice: do_started: The local CRM is operational
Oct 27 10:28:39 node02 crmd[9870]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Oct 27 10:28:41 node02 crmd[9870]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Oct 27 10:28:41 node02 attrd[9868]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Oct 27 10:28:42 node02 Filesystem(WebFS)[9881]: WARNING: Couldn't find device [/dev/drbd/by-res/wwwdata]. Expected /dev/??? to exist
Oct 27 10:28:42 node02 apache(WebSite)[9879]: INFO: apache not running
Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation ClusterIP_monitor_0 (call=5, rc=7, cib-update=8, confirmed=true) not running
Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebFS_monitor_0 (call=18, rc=7, cib-update=9, confirmed=true) not running
Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebSite_monitor_0 (call=9, rc=7, cib-update=10, confirmed=true) not running
Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_monitor_0 (call=14, rc=7, cib-update=11, confirmed=true) not running
Oct 27 10:28:42 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Oct 27 10:28:42 node02 IPaddr2(ClusterIP)[10089]: INFO: Adding inet address 192.168.1.110/24 with broadcast address 192.168.1.255 to device eth0
Oct 27 10:28:42 node02 IPaddr2(ClusterIP)[10089]: INFO: Bringing device eth0 up
Oct 27 10:28:42 node02 IPaddr2(ClusterIP)[10089]: INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.1.110 eth0 192.168.1.110 auto not_used not_used
Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation ClusterIP_start_0 (call=26, rc=0, cib-update=12, confirmed=true) ok
Oct 27 10:28:42 node02 kernel: drbd wwwdata: Starting worker thread (from drbdsetup-84 [10178])
Oct 27 10:28:42 node02 kernel: block drbd1: disk( Diskless -> Attaching )
Oct 27 10:28:42 node02 kernel: drbd wwwdata: Method to ensure write ordering: flush
Oct 27 10:28:42 node02 kernel: block drbd1: max BIO size = 1048576
Oct 27 10:28:42 node02 kernel: block drbd1: drbd_bm_resize called with capacity == 2097016
Oct 27 10:28:42 node02 kernel: block drbd1: resync bitmap: bits=262127 words=4096 pages=8
Oct 27 10:28:42 node02 kernel: block drbd1: size = 1024 MB (1048508 KB)
Oct 27 10:28:42 node02 kernel: block drbd1: recounting of set bits took additional 0 jiffies
Oct 27 10:28:42 node02 kernel: block drbd1: 4 KB (1 bits) marked out-of-sync by on disk bit-map.
Oct 27 10:28:42 node02 kernel: block drbd1: disk( Attaching -> UpToDate )
Oct 27 10:28:42 node02 kernel: block drbd1: attached to UUIDs 57BBF4225313C9D1:C3FC762020E707F1:A741491FB4A536A4:0000000000000004
Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation ClusterIP_monitor_30000 (call=29, rc=0, cib-update=13, confirmed=false) ok
Oct 27 10:28:42 node02 kernel: drbd wwwdata: conn( StandAlone -> Unconnected )
Oct 27 10:28:42 node02 kernel: drbd wwwdata: Starting receiver thread (from drbd_w_wwwdata [10180])
Oct 27 10:28:42 node02 kernel: drbd wwwdata: receiver (re)started
Oct 27 10:28:42 node02 kernel: drbd wwwdata: conn( Unconnected -> WFConnection )
Oct 27 10:28:42 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (1000)
Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_start_0 (call=24, rc=0, cib-update=14, confirmed=true) ok
Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=33, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:42 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (1000)
Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_monitor_60000 (call=36, rc=0, cib-update=15, confirmed=false) ok
Oct 27 10:28:43 node02 attrd[9868]: notice: attrd_perform_update: Sent update 7: master-WebData=1000
Oct 27 10:28:43 node02 attrd[9868]: notice: attrd_perform_update: Sent update 10: probe_complete=true
Oct 27 10:28:43 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=42, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=45, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=48, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=51, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:44 node02 kernel: block drbd1: role( Secondary -> Primary )
Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_promote_0 (call=54, rc=0, cib-update=17, confirmed=true) ok
Oct 27 10:28:44 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (10000)
Oct 27 10:28:44 node02 attrd[9868]: notice: attrd_perform_update: Sent update 15: master-WebData=10000
Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=57, rc=0, cib-update=0, confirmed=true) ok
Oct 27 10:28:44 node02 Filesystem(WebFS)[10457]: INFO: Running start for /dev/drbd/by-res/wwwdata on /var/www/html
Oct 27 10:28:44 node02 kernel: EXT4-fs (drbd1): mounted filesystem with ordered data mode. Opts:
Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebFS_start_0 (call=60, rc=0, cib-update=18, confirmed=true) ok
Oct 27 10:28:44 node02 apache(WebSite)[10515]: ERROR: Syntax error on line 292 of /etc/httpd/conf/httpd.conf: DocumentRoot must be a directory
Oct 27 10:28:44 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:44 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:45 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:45 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:46 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:46 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:47 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:47 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:48 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:48 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:49 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:49 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:50 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:50 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:51 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:51 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:52 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:52 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:53 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:53 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:54 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:54 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:55 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:55 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:56 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:56 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:57 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:57 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:58 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:58 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:28:59 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:28:59 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:29:00 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:29:00 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:29:01 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:29:01 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:29:02 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:29:02 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:29:03 node02 apache(WebSite)[10515]: INFO: apache not running
Oct 27 10:29:03 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Oct 27 10:29:04 node02 lrmd[9867]: warning: child_timeout_callback: WebSite_start_0 process (PID 10515) timed out
Oct 27 10:29:04 node02 lrmd[9867]: warning: operation_finished: WebSite_start_0:10515 - timed out after 20000ms
Oct 27 10:29:04 node02 crmd[9870]: error: process_lrm_event: LRM operation WebSite_start_0 (63) Timed Out (timeout=20000ms)
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_cs_dispatch: Update relayed from node01
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-WebSite (INFINITY)
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_perform_update: Sent update 18: fail-count-WebSite=INFINITY
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_cs_dispatch: Update relayed from node01
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-WebSite (1414376955)
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_perform_update: Sent update 21: last-failure-WebSite=1414376955
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_cs_dispatch: Update relayed from node01
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-WebSite (INFINITY)
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_perform_update: Sent update 24: fail-count-WebSite=INFINITY
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_cs_dispatch: Update relayed from node01
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-WebSite (1414376955)
Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_perform_update: Sent update 27: last-failure-WebSite=1414376955
Oct 27 10:29:04 node02 apache(WebSite)[10780]: INFO: apache is not running.
Oct 27 10:29:04 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebSite_stop_0 (call=66, rc=0, cib-update=20, confirmed=true) ok
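
NOTE: The failure mode is visible in the log itself: WebFS mounts /dev/drbd/by-res/wwwdata on /var/www/html at 10:28:44, but apache's start then fails config validation ("Syntax error on line 292 of /etc/httpd/conf/httpd.conf: DocumentRoot must be a directory"), the resource agent polls for 20 seconds, and lrmd times the start out after 20000ms, driving fail-count-WebSite to INFINITY. This points at a DocumentRoot that does not exist on the mounted DRBD filesystem. A sketch of how to confirm, using standard httpd tooling (paths taken from the log):

    # Validate the config on node02 while the DRBD filesystem is mounted
    apachectl configtest

    # See what line 292 declares, and whether that directory exists
    sed -n '292p' /etc/httpd/conf/httpd.conf
    ls -ld /var/www/html

Once the DocumentRoot problem is fixed, clear the INFINITY fail-count (see the cleanup sketch after Transition 11 above) before expecting WebSite to start.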