- Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_trigger_update: Sending flush op to all hosts for: standby (true)
- Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_perform_update: Sent update 31: standby=true
- Oct 27 10:28:10 node02 pacemaker: Waiting for shutdown of managed resources
- Oct 27 10:28:10 node02 crmd[6866]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
- Oct 27 10:28:10 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
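The two common_apply_stickiness warnings above are Pacemaker's migration-threshold logic at work: a fail-count of 1000000 is Pacemaker's INFINITY score, so with max=1 the single permitted failure has been exceeded and WebSite is banned from both nodes until its fail-count is cleared. A minimal sketch for pulling these fields out of such lines (function and regex names are my own, not Pacemaker's):

```python
import re

# Pacemaker reports an unbounded fail-count as the score 1000000 (INFINITY);
# "after 1000000 failures (max=1)" means the one allowed failure was exceeded
# and the resource is banned from that node until the fail-count is cleared.
STICKINESS_RE = re.compile(
    r"Forcing (?P<rsc>\S+) away from (?P<node>\S+) "
    r"after (?P<count>\d+) failures \(max=(?P<max>\d+)\)"
)

def parse_stickiness_warning(line):
    """Return (resource, node, fail_count, migration_threshold) or None."""
    m = STICKINESS_RE.search(line)
    if not m:
        return None
    return (m.group("rsc"), m.group("node"),
            int(m.group("count")), int(m.group("max")))
```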
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Move ClusterIP#011(Started node02 -> node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Demote WebData:0#011(Master -> Stopped node02)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Promote WebData:1#011(Slave -> Master node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Move WebFS#011(Started node02 -> node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 11: /var/lib/pacemaker/pengine/pe-input-119.bz2
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 7: stop ClusterIP_stop_0 on node02 (local)
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 2: cancel WebData_cancel_60000 on node01
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 39: stop WebFS_stop_0 on node02 (local)
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 52: notify WebData_pre_notify_demote_0 on node02 (local)
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 54: notify WebData_pre_notify_demote_0 on node01
- Oct 27 10:28:10 node02 Filesystem(WebFS)[9165]: INFO: Running stop for /dev/drbd/by-res/wwwdata on /var/www/html
- Oct 27 10:28:10 node02 Filesystem(WebFS)[9165]: INFO: Trying to unmount /var/www/html
- Oct 27 10:28:10 node02 IPaddr2(ClusterIP)[9164]: INFO: IP status = ok, IP_CIP=
- Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation ClusterIP_stop_0 (call=76, rc=0, cib-update=51, confirmed=true) ok
- Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=81, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:10 node02 Filesystem(WebFS)[9165]: INFO: unmounted /var/www/html successfully
- Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebFS_stop_0 (call=78, rc=0, cib-update=52, confirmed=true) ok
- Oct 27 10:28:10 node02 crmd[6866]: notice: run_graph: Transition 11 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=13, Source=/var/lib/pacemaker/pengine/pe-input-119.bz2): Stopped
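Each run_graph line above summarizes one transition graph: how many actions completed, how many were skipped or left incomplete (here the graph was aborted and recalculated, hence "Stopped" rather than "Complete"). A small parser sketch for these summaries (names are my own):

```python
import re

# Matches e.g. "Transition 11 (Complete=6, ..., Source=...): Stopped"
RUN_GRAPH_RE = re.compile(
    r"Transition (?P<num>\d+) \((?P<fields>[^)]*)\): (?P<status>\w+)"
)

def parse_run_graph(line):
    """Parse a crmd run_graph summary into (transition, counters, status)."""
    m = RUN_GRAPH_RE.search(line)
    if not m:
        return None
    counters = {}
    for pair in m.group("fields").split(", "):
        key, _, val = pair.partition("=")
        counters[key] = int(val) if val.isdigit() else val
    return int(m.group("num")), counters, m.group("status")
```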
- Oct 27 10:28:10 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Start ClusterIP#011(node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Demote WebData:0#011(Master -> Stopped node02)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Promote WebData:1#011(Slave -> Master node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Start WebFS#011(node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 12: /var/lib/pacemaker/pengine/pe-input-120.bz2
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 5: start ClusterIP_start_0 on node01
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 48: notify WebData_pre_notify_demote_0 on node02 (local)
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 50: notify WebData_pre_notify_demote_0 on node01
- Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=86, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 7: demote WebData_demote_0 on node02 (local)
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 6: monitor ClusterIP_monitor_30000 on node01
- Oct 27 10:28:10 node02 kernel: block drbd1: role( Primary -> Secondary )
- Oct 27 10:28:10 node02 kernel: block drbd1: bitmap WRITE of 0 pages took 0 jiffies
- Oct 27 10:28:10 node02 kernel: block drbd1: 4 KB (1 bits) marked out-of-sync by on disk bit-map.
- Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_demote_0 (call=89, rc=0, cib-update=54, confirmed=true) ok
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 49: notify WebData_post_notify_demote_0 on node02 (local)
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 51: notify WebData_post_notify_demote_0 on node01
- Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (1000)
- Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_perform_update: Sent update 33: master-WebData=1000
- Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=92, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:10 node02 crmd[6866]: notice: run_graph: Transition 12 (Complete=13, Pending=0, Fired=0, Skipped=13, Incomplete=8, Source=/var/lib/pacemaker/pengine/pe-input-120.bz2): Stopped
- Oct 27 10:28:10 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Stop WebData:0#011(node02)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Promote WebData:1#011(Slave -> Master node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Start WebFS#011(node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 13: /var/lib/pacemaker/pengine/pe-input-121.bz2
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 44: notify WebData_pre_notify_stop_0 on node02 (local)
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 45: notify WebData_pre_notify_stop_0 on node01
- Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=95, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 8: stop WebData_stop_0 on node02 (local)
- Oct 27 10:28:10 node02 kernel: drbd wwwdata: conn( WFConnection -> Disconnecting )
- Oct 27 10:28:10 node02 kernel: drbd wwwdata: Discarding network configuration.
- Oct 27 10:28:10 node02 kernel: drbd wwwdata: Connection closed
- Oct 27 10:28:10 node02 kernel: drbd wwwdata: conn( Disconnecting -> StandAlone )
- Oct 27 10:28:10 node02 kernel: drbd wwwdata: receiver terminated
- Oct 27 10:28:10 node02 kernel: drbd wwwdata: Terminating drbd_r_wwwdata
- Oct 27 10:28:10 node02 kernel: block drbd1: disk( UpToDate -> Failed )
- Oct 27 10:28:10 node02 kernel: block drbd1: bitmap WRITE of 0 pages took 0 jiffies
- Oct 27 10:28:10 node02 kernel: block drbd1: 4 KB (1 bits) marked out-of-sync by on disk bit-map.
- Oct 27 10:28:10 node02 kernel: block drbd1: disk( Failed -> Diskless )
- Oct 27 10:28:10 node02 kernel: block drbd1: drbd_bm_resize called with capacity == 0
- Oct 27 10:28:10 node02 kernel: drbd wwwdata: Terminating drbd_w_wwwdata
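The kernel lines above trace the DRBD teardown on node02: the connection goes WFConnection -> Disconnecting -> StandAlone and the backing disk goes UpToDate -> Failed -> Diskless. DRBD logs every such change in a uniform "kind( Old -> New )" form, which makes them easy to extract; a sketch (helper name is my own):

```python
import re

# DRBD kernel messages record each state change as "kind( Old -> New )",
# where kind is one of role, disk, or conn.
DRBD_STATE_RE = re.compile(
    r"(?P<kind>role|disk|conn)\( (?P<old>\w+) -> (?P<new>\w+) \)"
)

def drbd_transitions(lines):
    """Yield (kind, old_state, new_state) tuples from DRBD kernel log lines."""
    for line in lines:
        m = DRBD_STATE_RE.search(line)
        if m:
            yield m.group("kind"), m.group("old"), m.group("new")
```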
- Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (<null>)
- Oct 27 10:28:10 node02 attrd[6864]: notice: attrd_perform_update: Sent delete 35: node=node02, attr=master-WebData, id=<n/a>, set=(null), section=status
- Oct 27 10:28:10 node02 crmd[6866]: notice: process_lrm_event: LRM operation WebData_stop_0 (call=98, rc=0, cib-update=56, confirmed=true) ok
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 46: notify WebData_post_notify_stop_0 on node01
- Oct 27 10:28:10 node02 crmd[6866]: notice: run_graph: Transition 13 (Complete=10, Pending=0, Fired=0, Skipped=7, Incomplete=4, Source=/var/lib/pacemaker/pengine/pe-input-121.bz2): Stopped
- Oct 27 10:28:10 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
- Oct 27 10:28:10 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Promote WebData:0#011(Slave -> Master node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: LogActions: Start WebFS#011(node01)
- Oct 27 10:28:10 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 14: /var/lib/pacemaker/pengine/pe-input-122.bz2
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 45: notify WebData_pre_notify_promote_0 on node01
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 10: promote WebData_promote_0 on node01
- Oct 27 10:28:10 node02 crmd[6866]: notice: te_rsc_command: Initiating action 46: notify WebData_post_notify_promote_0 on node01
- Oct 27 10:28:11 node02 crmd[6866]: notice: run_graph: Transition 14 (Complete=9, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-122.bz2): Stopped
- Oct 27 10:28:11 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Oct 27 10:28:11 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
- Oct 27 10:28:11 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
- Oct 27 10:28:11 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
- Oct 27 10:28:11 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
- Oct 27 10:28:11 node02 pengine[6865]: notice: LogActions: Start WebFS#011(node01)
- Oct 27 10:28:11 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 15: /var/lib/pacemaker/pengine/pe-input-123.bz2
- Oct 27 10:28:11 node02 crmd[6866]: notice: te_rsc_command: Initiating action 36: start WebFS_start_0 on node01
- Oct 27 10:28:11 node02 crmd[6866]: notice: run_graph: Transition 15 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-123.bz2): Complete
- Oct 27 10:28:11 node02 crmd[6866]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
- Oct 27 10:28:12 node02 pacemaker: Leaving fence domain
- Oct 27 10:28:13 node02 pacemaker: Stopping fenced 6697
- Oct 27 10:28:13 node02 pacemaker: Signaling Pacemaker Cluster Manager to terminate
- Oct 27 10:28:13 node02 pacemakerd[6855]: notice: pcmk_shutdown_worker: Shuting down Pacemaker
- Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping crmd: Sent -15 to process 6866
- Oct 27 10:28:13 node02 crmd[6866]: notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
- Oct 27 10:28:13 node02 crmd[6866]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
- Oct 27 10:28:13 node02 attrd[6864]: notice: attrd_trigger_update: Sending flush op to all hosts for: shutdown (1414376893)
- Oct 27 10:28:13 node02 attrd[6864]: notice: attrd_perform_update: Sent update 40: shutdown=1414376893
- Oct 27 10:28:13 node02 pengine[6865]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Oct 27 10:28:13 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node02: unknown error (1)
- Oct 27 10:28:13 node02 pengine[6865]: warning: unpack_rsc_op: Processing failed op start for WebSite on node01: unknown error (1)
- Oct 27 10:28:13 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node01 after 1000000 failures (max=1)
- Oct 27 10:28:13 node02 pengine[6865]: warning: common_apply_stickiness: Forcing WebSite away from node02 after 1000000 failures (max=1)
- Oct 27 10:28:13 node02 pengine[6865]: notice: stage6: Scheduling Node node02 for shutdown
- Oct 27 10:28:13 node02 pacemaker: Waiting for cluster services to unload
- Oct 27 10:28:13 node02 pengine[6865]: notice: process_pe_message: Calculated Transition 16: /var/lib/pacemaker/pengine/pe-input-124.bz2
- Oct 27 10:28:13 node02 crmd[6866]: notice: run_graph: Transition 16 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-124.bz2): Complete
- Oct 27 10:28:13 node02 crmd[6866]: notice: lrm_state_verify_stopped: Stopped 0 recurring operations at shutdown... waiting (0 ops remaining)
- Oct 27 10:28:13 node02 crmd[6866]: notice: do_lrm_control: Disconnected from the LRM
- Oct 27 10:28:13 node02 crmd[6866]: notice: terminate_cs_connection: Disconnecting from Corosync
- Oct 27 10:28:13 node02 cib[6861]: warning: qb_ipcs_event_sendv: new_event_notification (6861-6866-11): Broken pipe (32)
- Oct 27 10:28:13 node02 cib[6861]: warning: do_local_notify: A-Sync reply to crmd failed: No message of desired type
- Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping pengine: Sent -15 to process 6865
- Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping attrd: Sent -15 to process 6864
- Oct 27 10:28:13 node02 attrd[6864]: notice: main: Exiting...
- Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping lrmd: Sent -15 to process 6863
- Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping stonith-ng: Sent -15 to process 6862
- Oct 27 10:28:13 node02 pacemakerd[6855]: notice: stop_child: Stopping cib: Sent -15 to process 6861
- Oct 27 10:28:13 node02 cib[6861]: warning: qb_ipcs_event_sendv: new_event_notification (6861-6862-12): Broken pipe (32)
- Oct 27 10:28:13 node02 cib[6861]: warning: cib_notify_send_one: Notification of client crmd/ae320fe4-fd6f-45f2-b9d2-b47b146e1143 failed
- Oct 27 10:28:13 node02 cib[6861]: notice: terminate_cs_connection: Disconnecting from Corosync
- Oct 27 10:28:13 node02 cib[6861]: notice: terminate_cs_connection: Disconnecting from Corosync
- Oct 27 10:28:13 node02 pacemakerd[6855]: notice: pcmk_shutdown_worker: Shutdown complete
- Oct 27 10:28:15 node02 kernel: dlm: closing connection to node 1
- Oct 27 10:28:15 node02 kernel: dlm: closing connection to node 2
- Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Unloading all Corosync service engines.
- Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync extended virtual synchrony service
- Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync configuration service
- Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
- Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync cluster config database access v1.01
- Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync profile loading service
- Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: openais checkpoint service B.01.01
- Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync CMAN membership service 2.90
- Oct 27 10:28:15 node02 corosync[6644]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
- Oct 27 10:28:15 node02 corosync[6644]: [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:1947.
- Oct 27 10:28:32 node02 kernel: DLM (built Sep 9 2014 21:37:32) installed
- Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
- Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Corosync built-in features: nss dbus rdma snmp
- Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Successfully read config from /etc/cluster/cluster.conf
- Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Successfully parsed cman config
- Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] Initializing transport (UDP/IP Multicast).
- Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
- Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] The network interface [192.168.1.112] is now up.
- Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Using quorum provider quorum_cman
- Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
- Oct 27 10:28:32 node02 corosync[9648]: [CMAN ] CMAN 3.0.12.1 (built Sep 25 2014 15:07:47) started
- Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync CMAN membership service 2.90
- Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: openais checkpoint service B.01.01
- Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync extended virtual synchrony service
- Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync configuration service
- Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01
- Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync cluster config database access v1.01
- Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync profile loading service
- Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Using quorum provider quorum_cman
- Oct 27 10:28:32 node02 corosync[9648]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
- Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.
- Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
- Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Members[1]: 2
- Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Members[1]: 2
- Oct 27 10:28:32 node02 corosync[9648]: [CPG ] chosen downlist: sender r(0) ip(192.168.1.112) ; members(old:0 left:0)
- Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Completed service synchronization, ready to provide service.
- Oct 27 10:28:32 node02 corosync[9648]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
- Oct 27 10:28:32 node02 corosync[9648]: [CMAN ] quorum regained, resuming activity
- Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] This node is within the primary component and will provide service.
- Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Members[2]: 1 2
- Oct 27 10:28:32 node02 corosync[9648]: [QUORUM] Members[2]: 1 2
- Oct 27 10:28:32 node02 corosync[9648]: [CPG ] chosen downlist: sender r(0) ip(192.168.1.111) ; members(old:1 left:0)
- Oct 27 10:28:32 node02 corosync[9648]: [MAIN ] Completed service synchronization, ready to provide service.
- Oct 27 10:28:36 node02 fenced[9701]: fenced 3.0.12.1 started
- Oct 27 10:28:36 node02 dlm_controld[9720]: dlm_controld 3.0.12.1 started
- Oct 27 10:28:37 node02 gfs_controld[9776]: gfs_controld 3.0.12.1 started
- Oct 27 10:28:38 node02 pacemaker: Starting Pacemaker Cluster Manager
- Oct 27 10:28:38 node02 pacemakerd[9859]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Oct 27 10:28:38 node02 pacemakerd[9859]: notice: main: Starting Pacemaker 1.1.10-14.el6_5.3 (Build: 368c726): generated-manpages agent-manpages ascii-docs publican-docs ncurses libqb-logging libqb-ipc nagios corosync-plugin cman
- Oct 27 10:28:38 node02 lrmd[9867]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Oct 27 10:28:38 node02 cib[9865]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Oct 27 10:28:38 node02 stonith-ng[9866]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Oct 27 10:28:38 node02 attrd[9868]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Oct 27 10:28:38 node02 pengine[9869]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Oct 27 10:28:38 node02 attrd[9868]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
- Oct 27 10:28:38 node02 stonith-ng[9866]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
- Oct 27 10:28:38 node02 crmd[9870]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Oct 27 10:28:38 node02 crmd[9870]: notice: main: CRM Git Version: 368c726
- Oct 27 10:28:38 node02 attrd[9868]: notice: main: Starting mainloop...
- Oct 27 10:28:38 node02 cib[9865]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
- Oct 27 10:28:39 node02 stonith-ng[9866]: notice: setup_cib: Watching for stonith topology changes
- Oct 27 10:28:39 node02 crmd[9870]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
- Oct 27 10:28:39 node02 stonith-ng[9866]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Oct 27 10:28:39 node02 crmd[9870]: notice: cman_event_callback: Membership 68: quorum acquired
- Oct 27 10:28:39 node02 crmd[9870]: notice: crm_update_peer_state: cman_event_callback: Node node01[1] - state is now member (was (null))
- Oct 27 10:28:39 node02 crmd[9870]: notice: crm_update_peer_state: cman_event_callback: Node node02[2] - state is now member (was (null))
- Oct 27 10:28:39 node02 crmd[9870]: notice: do_started: The local CRM is operational
- Oct 27 10:28:39 node02 crmd[9870]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
- Oct 27 10:28:41 node02 crmd[9870]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
- Oct 27 10:28:41 node02 attrd[9868]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Oct 27 10:28:42 node02 Filesystem(WebFS)[9881]: WARNING: Couldn't find device [/dev/drbd/by-res/wwwdata]. Expected /dev/??? to exist
- Oct 27 10:28:42 node02 apache(WebSite)[9879]: INFO: apache not running
- Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation ClusterIP_monitor_0 (call=5, rc=7, cib-update=8, confirmed=true) not running
- Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebFS_monitor_0 (call=18, rc=7, cib-update=9, confirmed=true) not running
- Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebSite_monitor_0 (call=9, rc=7, cib-update=10, confirmed=true) not running
- Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_monitor_0 (call=14, rc=7, cib-update=11, confirmed=true) not running
- Oct 27 10:28:42 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Oct 27 10:28:42 node02 IPaddr2(ClusterIP)[10089]: INFO: Adding inet address 192.168.1.110/24 with broadcast address 192.168.1.255 to device eth0
- Oct 27 10:28:42 node02 IPaddr2(ClusterIP)[10089]: INFO: Bringing device eth0 up
- Oct 27 10:28:42 node02 IPaddr2(ClusterIP)[10089]: INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.1.110 eth0 192.168.1.110 auto not_used not_used
- Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation ClusterIP_start_0 (call=26, rc=0, cib-update=12, confirmed=true) ok
- Oct 27 10:28:42 node02 kernel: drbd wwwdata: Starting worker thread (from drbdsetup-84 [10178])
- Oct 27 10:28:42 node02 kernel: block drbd1: disk( Diskless -> Attaching )
- Oct 27 10:28:42 node02 kernel: drbd wwwdata: Method to ensure write ordering: flush
- Oct 27 10:28:42 node02 kernel: block drbd1: max BIO size = 1048576
- Oct 27 10:28:42 node02 kernel: block drbd1: drbd_bm_resize called with capacity == 2097016
- Oct 27 10:28:42 node02 kernel: block drbd1: resync bitmap: bits=262127 words=4096 pages=8
- Oct 27 10:28:42 node02 kernel: block drbd1: size = 1024 MB (1048508 KB)
- Oct 27 10:28:42 node02 kernel: block drbd1: recounting of set bits took additional 0 jiffies
- Oct 27 10:28:42 node02 kernel: block drbd1: 4 KB (1 bits) marked out-of-sync by on disk bit-map.
- Oct 27 10:28:42 node02 kernel: block drbd1: disk( Attaching -> UpToDate )
- Oct 27 10:28:42 node02 kernel: block drbd1: attached to UUIDs 57BBF4225313C9D1:C3FC762020E707F1:A741491FB4A536A4:0000000000000004
- Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation ClusterIP_monitor_30000 (call=29, rc=0, cib-update=13, confirmed=false) ok
- Oct 27 10:28:42 node02 kernel: drbd wwwdata: conn( StandAlone -> Unconnected )
- Oct 27 10:28:42 node02 kernel: drbd wwwdata: Starting receiver thread (from drbd_w_wwwdata [10180])
- Oct 27 10:28:42 node02 kernel: drbd wwwdata: receiver (re)started
- Oct 27 10:28:42 node02 kernel: drbd wwwdata: conn( Unconnected -> WFConnection )
- Oct 27 10:28:42 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (1000)
- Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_start_0 (call=24, rc=0, cib-update=14, confirmed=true) ok
- Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=33, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:42 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (1000)
- Oct 27 10:28:42 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_monitor_60000 (call=36, rc=0, cib-update=15, confirmed=false) ok
- Oct 27 10:28:43 node02 attrd[9868]: notice: attrd_perform_update: Sent update 7: master-WebData=1000
- Oct 27 10:28:43 node02 attrd[9868]: notice: attrd_perform_update: Sent update 10: probe_complete=true
- Oct 27 10:28:43 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=42, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=45, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=48, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=51, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:44 node02 kernel: block drbd1: role( Secondary -> Primary )
- Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_promote_0 (call=54, rc=0, cib-update=17, confirmed=true) ok
- Oct 27 10:28:44 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-WebData (10000)
- Oct 27 10:28:44 node02 attrd[9868]: notice: attrd_perform_update: Sent update 15: master-WebData=10000
- Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebData_notify_0 (call=57, rc=0, cib-update=0, confirmed=true) ok
- Oct 27 10:28:44 node02 Filesystem(WebFS)[10457]: INFO: Running start for /dev/drbd/by-res/wwwdata on /var/www/html
- Oct 27 10:28:44 node02 kernel: EXT4-fs (drbd1): mounted filesystem with ordered data mode. Opts:
- Oct 27 10:28:44 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebFS_start_0 (call=60, rc=0, cib-update=18, confirmed=true) ok
- Oct 27 10:28:44 node02 apache(WebSite)[10515]: ERROR: Syntax error on line 292 of /etc/httpd/conf/httpd.conf: DocumentRoot must be a directory
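This ERROR line is the root cause behind every "Forcing WebSite away" warning in the log: the apache OCF agent's config check rejects httpd.conf because DocumentRoot is not a valid directory, so httpd never starts and the agent keeps polling until the operation timeout. A sketch for extracting the file, line number, and message from such agent errors (names are my own):

```python
import re

# The apache OCF resource agent logs failed config checks as
# "apache(RSC)[PID]: ERROR: Syntax error on line N of FILE: MESSAGE".
APACHE_ERR_RE = re.compile(
    r"apache\((?P<rsc>\w+)\)\[\d+\]: ERROR: Syntax error on line "
    r"(?P<lineno>\d+) of (?P<conf>\S+): (?P<msg>.+)"
)

def parse_apache_error(line):
    """Return (resource, config_file, line_number, message) or None."""
    m = APACHE_ERR_RE.search(line)
    if not m:
        return None
    return m.group("rsc"), m.group("conf"), int(m.group("lineno")), m.group("msg")
```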
- Oct 27 10:28:44 node02 apache(WebSite)[10515]: INFO: apache not running
- Oct 27 10:28:44 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
- [ ...the same two INFO lines repeat once per second from 10:28:45 through 10:29:02... ]
- Oct 27 10:29:03 node02 apache(WebSite)[10515]: INFO: apache not running
- Oct 27 10:29:03 node02 apache(WebSite)[10515]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
- Oct 27 10:29:04 node02 lrmd[9867]: warning: child_timeout_callback: WebSite_start_0 process (PID 10515) timed out
- Oct 27 10:29:04 node02 lrmd[9867]: warning: operation_finished: WebSite_start_0:10515 - timed out after 20000ms
- Oct 27 10:29:04 node02 crmd[9870]: error: process_lrm_event: LRM operation WebSite_start_0 (63) Timed Out (timeout=20000ms)
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_cs_dispatch: Update relayed from node01
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-WebSite (INFINITY)
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_perform_update: Sent update 18: fail-count-WebSite=INFINITY
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_cs_dispatch: Update relayed from node01
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-WebSite (1414376955)
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_perform_update: Sent update 21: last-failure-WebSite=1414376955
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_cs_dispatch: Update relayed from node01
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-WebSite (INFINITY)
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_perform_update: Sent update 24: fail-count-WebSite=INFINITY
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_cs_dispatch: Update relayed from node01
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-WebSite (1414376955)
- Oct 27 10:29:04 node02 attrd[9868]: notice: attrd_perform_update: Sent update 27: last-failure-WebSite=1414376955
- Oct 27 10:29:04 node02 apache(WebSite)[10780]: INFO: apache is not running.
- Oct 27 10:29:04 node02 crmd[9870]: notice: process_lrm_event: LRM operation WebSite_stop_0 (call=66, rc=0, cib-update=20, confirmed=true) ok
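The lrmd lines report the start operation timing out after 20000ms, and the wall-clock timestamps agree: the agent began its retry loop at 10:28:44 and child_timeout_callback killed it at 10:29:04. A quick check of that arithmetic (the year is assumed, since syslog timestamps omit it; 2014 is consistent with the epoch value last-failure-WebSite=1414376955 above):

```python
from datetime import datetime

def syslog_time(stamp, year=2014):
    """Parse a 'Mon DD HH:MM:SS' syslog timestamp into a datetime.

    Syslog omits the year, so it must be supplied; 2014 is assumed here.
    """
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S")

start = syslog_time("Oct 27 10:28:44")   # apache start op begins its retry loop
killed = syslog_time("Oct 27 10:29:04")  # lrmd child_timeout_callback fires
elapsed_ms = (killed - start).total_seconds() * 1000  # matches the 20000ms limit
```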