- Nov 01 05:48:59 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[30242]: 2012/11/01_05:48:59 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:48:59 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-30272-17)
- Nov 01 05:48:59 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-30272-17)
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:48:59 [30272] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:48:59 [30272] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:48:59 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30272-12)
- Nov 01 05:48:59 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30272-12)
- Nov 01 05:48:59 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:48:59 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:48:59 [30272] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:48:59 [30272] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:48:59 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-30272-17)
- Nov 01 05:48:59 [30272] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:48:59 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-30272-17) state:2
- Nov 01 05:48:59 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30272-12)
- Nov 01 05:48:59 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30272-12) state:2
- drbd[30242]: 2012/11/01_05:48:59 DEBUG: drives: Exit code 0
- drbd[30242]: 2012/11/01_05:48:59 DEBUG: drives: Command output:
- Nov 01 05:48:59 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30242 - exited with rc=0
- Nov 01 05:48:59 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30242 [ ]
- Nov 01 05:49:01 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-30305-17)
- Nov 01 05:49:01 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-30305-17)
- Nov 01 05:49:01 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-30305-17)
- Nov 01 05:49:01 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-30305-17) state:2
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] IPC credentials authenticated (9108-30322-31)
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] connecting to client [30322]
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:49:01 [9089] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] lib_init_fn: conn=0x7f76db4a3680
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] HUP conn (9108-30322-31)
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] qb_ipcs_disconnect(9108-30322-31) state:2
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:49:01 [9089] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] exit_fn for conn=0x7f76db4a3680
- Nov 01 05:49:01 [9089] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-9108-30322-31-header
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-9108-30322-31-header
- Nov 01 05:49:01 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-9108-30322-31-header
- Nov 01 05:49:05 [30353] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:49:05 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30353-12)
- Nov 01 05:49:05 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30353-12)
- Nov 01 05:49:05 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:49:05 [30353] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:49:05 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:49:05 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30353-12)
- Nov 01 05:49:05 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30353-12) state:2
- Nov 01 05:49:05 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:30207 - exited with rc=0
- Nov 01 05:49:15 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:49:19 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[30415]: 2012/11/01_05:49:19 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:49:19 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-30445-17)
- Nov 01 05:49:19 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-30445-17)
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:49:19 [30445] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:49:19 [30445] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:49:19 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30445-12)
- Nov 01 05:49:19 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30445-12)
- Nov 01 05:49:19 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:49:19 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:49:19 [30445] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:49:19 [30445] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:49:19 [30445] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:49:19 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-30445-17)
- Nov 01 05:49:19 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-30445-17) state:2
- Nov 01 05:49:19 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30445-12)
- Nov 01 05:49:19 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30445-12) state:2
- drbd[30415]: 2012/11/01_05:49:19 DEBUG: drives: Exit code 0
- drbd[30415]: 2012/11/01_05:49:19 DEBUG: drives: Command output:
- Nov 01 05:49:19 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30415 - exited with rc=0
- Nov 01 05:49:19 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30415 [ ]
- Nov 01 05:49:22 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_daemon_nfs-kernel-server_status_30000
- Nov 01 05:49:22 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:30468 - exited with rc=0
- Nov 01 05:49:22 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:30468 [ nfsd running ]
- Nov 01 05:49:25 [30475] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:49:25 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30475-12)
- Nov 01 05:49:25 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30475-12)
- Nov 01 05:49:25 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:49:25 [30475] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:49:25 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:49:25 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30475-12)
- Nov 01 05:49:25 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30475-12) state:2
- Nov 01 05:49:25 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:30380 - exited with rc=0
- Nov 01 05:49:35 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:49:39 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[30540]: 2012/11/01_05:49:39 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:49:39 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-30570-17)
- Nov 01 05:49:39 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-30570-17)
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:49:39 [30570] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:49:39 [30570] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:49:39 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30570-12)
- Nov 01 05:49:39 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30570-12)
- Nov 01 05:49:39 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:49:39 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:49:39 [30570] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:49:39 [30570] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:49:39 [30570] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:49:39 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-30570-17)
- Nov 01 05:49:39 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-30570-17) state:2
- Nov 01 05:49:39 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30570-12)
- Nov 01 05:49:39 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30570-12) state:2
- drbd[30540]: 2012/11/01_05:49:39 DEBUG: drives: Exit code 0
- drbd[30540]: 2012/11/01_05:49:39 DEBUG: drives: Command output:
- Nov 01 05:49:39 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30540 - exited with rc=0
- Nov 01 05:49:39 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30540 [ ]
- Nov 01 05:49:45 [30598] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:49:45 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30598-12)
- Nov 01 05:49:45 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30598-12)
- Nov 01 05:49:45 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:49:45 [30598] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:49:45 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:49:45 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30598-12)
- Nov 01 05:49:45 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30598-12) state:2
- Nov 01 05:49:45 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:30505 - exited with rc=0
- Nov 01 05:49:52 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_daemon_nfs-kernel-server_status_30000
- Nov 01 05:49:52 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:30625 - exited with rc=0
- Nov 01 05:49:52 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:30625 [ nfsd running ]
- Nov 01 05:49:55 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:49:59 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[30662]: 2012/11/01_05:49:59 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:49:59 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-30692-17)
- Nov 01 05:49:59 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-30692-17)
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:49:59 [30692] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:49:59 [30692] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:49:59 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30692-12)
- Nov 01 05:49:59 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30692-12)
- Nov 01 05:49:59 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:49:59 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:49:59 [30692] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:49:59 [30692] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:49:59 [30692] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:49:59 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-30692-17)
- Nov 01 05:49:59 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-30692-17) state:2
- Nov 01 05:49:59 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30692-12)
- Nov 01 05:49:59 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30692-12) state:2
- drbd[30662]: 2012/11/01_05:49:59 DEBUG: drives: Exit code 0
- drbd[30662]: 2012/11/01_05:49:59 DEBUG: drives: Command output:
- Nov 01 05:49:59 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30662 - exited with rc=0
- Nov 01 05:49:59 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30662 [ ]
- Nov 01 05:50:01 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-30712-17)
- Nov 01 05:50:01 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-30712-17)
- Nov 01 05:50:01 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-30712-17)
- Nov 01 05:50:01 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-30712-17) state:2
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] IPC credentials authenticated (9108-30735-31)
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] connecting to client [30735]
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:50:01 [9089] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] lib_init_fn: conn=0x7f76db4897e0
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] HUP conn (9108-30735-31)
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] qb_ipcs_disconnect(9108-30735-31) state:2
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:50:01 [9089] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] exit_fn for conn=0x7f76db4897e0
- Nov 01 05:50:01 [9089] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-9108-30735-31-header
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-9108-30735-31-header
- Nov 01 05:50:01 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-9108-30735-31-header
- Nov 01 05:50:05 [30773] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:50:05 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30773-12)
- Nov 01 05:50:05 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30773-12)
- Nov 01 05:50:05 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:50:05 [30773] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:50:05 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:50:05 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30773-12)
- Nov 01 05:50:05 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30773-12) state:2
- Nov 01 05:50:05 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:30627 - exited with rc=0
- Nov 01 05:50:15 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:50:19 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[30835]: 2012/11/01_05:50:19 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:50:19 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-30865-17)
- Nov 01 05:50:19 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-30865-17)
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:50:19 [30865] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:50:19 [30865] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:50:19 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30865-12)
- Nov 01 05:50:19 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30865-12)
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:50:19 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:50:19 [30865] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:50:19 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:50:19 [30865] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:50:19 [30865] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:50:19 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-30865-17)
- Nov 01 05:50:19 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-30865-17) state:2
- Nov 01 05:50:19 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30865-12)
- Nov 01 05:50:19 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30865-12) state:2
- drbd[30835]: 2012/11/01_05:50:19 DEBUG: drives: Exit code 0
- drbd[30835]: 2012/11/01_05:50:19 DEBUG: drives: Command output:
- Nov 01 05:50:19 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30835 - exited with rc=0
- Nov 01 05:50:19 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30835 [ ]
- Nov 01 05:50:22 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_daemon_nfs-kernel-server_status_30000
- Nov 01 05:50:22 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:30888 - exited with rc=0
- Nov 01 05:50:22 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:30888 [ nfsd running ]
- Nov 01 05:50:25 [30895] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:50:25 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30895-12)
- Nov 01 05:50:25 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30895-12)
- Nov 01 05:50:25 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:50:25 [30895] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:50:25 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:50:25 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30895-12)
- Nov 01 05:50:25 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30895-12) state:2
- Nov 01 05:50:25 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:30800 - exited with rc=0
- Nov 01 05:50:35 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:50:39 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[30961]: 2012/11/01_05:50:39 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:50:39 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-30991-17)
- Nov 01 05:50:39 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-30991-17)
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:50:39 [30991] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:50:39 [30991] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:50:39 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-30991-12)
- Nov 01 05:50:39 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-30991-12)
- Nov 01 05:50:39 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:50:39 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:50:39 [30991] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:50:39 [30991] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:50:39 [30991] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:50:39 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-30991-17)
- Nov 01 05:50:39 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-30991-17) state:2
- Nov 01 05:50:39 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-30991-12)
- Nov 01 05:50:39 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-30991-12) state:2
- drbd[30961]: 2012/11/01_05:50:39 DEBUG: drives: Exit code 0
- drbd[30961]: 2012/11/01_05:50:39 DEBUG: drives: Command output:
- Nov 01 05:50:39 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30961 - exited with rc=0
- Nov 01 05:50:39 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:30961 [ ]
- Nov 01 05:50:45 [31019] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:50:45 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31019-12)
- Nov 01 05:50:45 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31019-12)
- Nov 01 05:50:45 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:50:45 [31019] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:50:45 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:50:45 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31019-12)
- Nov 01 05:50:45 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31019-12) state:2
- Nov 01 05:50:45 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:30925 - exited with rc=0
- Nov 01 05:50:52 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_daemon_nfs-kernel-server_status_30000
- Nov 01 05:50:52 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31046 - exited with rc=0
- Nov 01 05:50:52 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31046 [ nfsd running ]
- Nov 01 05:50:55 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:50:59 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[31083]: 2012/11/01_05:50:59 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:50:59 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31113-17)
- Nov 01 05:50:59 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31113-17)
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:50:59 [31113] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:50:59 [31113] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:50:59 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31113-12)
- Nov 01 05:50:59 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31113-12)
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:50:59 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:50:59 [31113] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:50:59 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:50:59 [31113] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:50:59 [31113] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:50:59 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31113-17)
- Nov 01 05:50:59 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31113-17) state:2
- Nov 01 05:50:59 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31113-12)
- Nov 01 05:50:59 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31113-12) state:2
- drbd[31083]: 2012/11/01_05:50:59 DEBUG: drives: Exit code 0
- drbd[31083]: 2012/11/01_05:50:59 DEBUG: drives: Command output:
- Nov 01 05:50:59 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31083 - exited with rc=0
- Nov 01 05:50:59 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31083 [ ]
- Nov 01 05:51:01 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31133-17)
- Nov 01 05:51:01 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31133-17)
- Nov 01 05:51:01 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31133-17)
- Nov 01 05:51:01 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31133-17) state:2
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] IPC credentials authenticated (9108-31150-31)
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] connecting to client [31150]
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:51:01 [9089] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] lib_init_fn: conn=0x7f76db4897e0
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] HUP conn (9108-31150-31)
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] qb_ipcs_disconnect(9108-31150-31) state:2
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:51:01 [9089] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] exit_fn for conn=0x7f76db4897e0
- Nov 01 05:51:01 [9089] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-9108-31150-31-header
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-9108-31150-31-header
- Nov 01 05:51:01 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-9108-31150-31-header
- Nov 01 05:51:05 [31194] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:51:05 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31194-12)
- Nov 01 05:51:05 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31194-12)
- Nov 01 05:51:05 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:51:05 [31194] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:51:05 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:51:05 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31194-12)
- Nov 01 05:51:05 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31194-12) state:2
- Nov 01 05:51:05 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:31048 - exited with rc=0
- Nov 01 05:51:15 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:51:19 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[31256]: 2012/11/01_05:51:19 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:51:19 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31286-17)
- Nov 01 05:51:19 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31286-17)
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:51:19 [31286] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:51:19 [31286] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:51:19 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31286-12)
- Nov 01 05:51:19 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31286-12)
- Nov 01 05:51:19 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:51:19 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:51:19 [31286] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:51:19 [31286] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:51:19 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31286-17)
- Nov 01 05:51:19 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31286-17) state:2
- Nov 01 05:51:19 [31286] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:51:19 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31286-12)
- Nov 01 05:51:19 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31286-12) state:2
- drbd[31256]: 2012/11/01_05:51:19 DEBUG: drives: Exit code 0
- drbd[31256]: 2012/11/01_05:51:19 DEBUG: drives: Command output:
- Nov 01 05:51:19 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31256 - exited with rc=0
- Nov 01 05:51:19 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31256 [ ]
- Nov 01 05:51:22 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_daemon_nfs-kernel-server_status_30000
- Nov 01 05:51:22 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31309 - exited with rc=0
- Nov 01 05:51:22 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31309 [ nfsd running ]
- Nov 01 05:51:25 [31316] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:51:25 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31316-12)
- Nov 01 05:51:25 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31316-12)
- Nov 01 05:51:25 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:51:25 [31316] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:51:25 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:51:25 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31316-12)
- Nov 01 05:51:25 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31316-12) state:2
- Nov 01 05:51:25 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:31221 - exited with rc=0
- Nov 01 05:51:35 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:51:39 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[31381]: 2012/11/01_05:51:39 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:51:39 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31411-17)
- Nov 01 05:51:39 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31411-17)
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:51:39 [31411] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:51:39 [31411] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:51:39 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31411-12)
- Nov 01 05:51:39 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31411-12)
- Nov 01 05:51:39 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:51:39 [31411] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:51:39 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:51:39 [31411] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:51:39 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31411-17)
- Nov 01 05:51:39 [31411] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:51:39 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31411-17) state:2
- Nov 01 05:51:39 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31411-12)
- Nov 01 05:51:39 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31411-12) state:2
- drbd[31381]: 2012/11/01_05:51:39 DEBUG: drives: Exit code 0
- drbd[31381]: 2012/11/01_05:51:39 DEBUG: drives: Command output:
- Nov 01 05:51:39 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31381 - exited with rc=0
- Nov 01 05:51:39 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31381 [ ]
- Nov 01 05:51:45 [31439] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:51:45 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31439-12)
- Nov 01 05:51:45 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31439-12)
- Nov 01 05:51:45 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:51:45 [31439] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:51:45 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:51:45 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31439-12)
- Nov 01 05:51:45 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31439-12) state:2
- Nov 01 05:51:45 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:31346 - exited with rc=0
- Nov 01 05:51:52 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_daemon_nfs-kernel-server_status_30000
- Nov 01 05:51:52 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31453 - exited with rc=0
- Nov 01 05:51:52 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31453 [ nfsd running ]
- Nov 01 05:51:55 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:51:59 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[31503]: 2012/11/01_05:51:59 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:51:59 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31533-17)
- Nov 01 05:51:59 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31533-17)
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:51:59 [31533] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:51:59 [31533] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:51:59 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31533-12)
- Nov 01 05:51:59 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31533-12)
- Nov 01 05:51:59 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:51:59 [31533] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:51:59 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:51:59 [31533] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:51:59 [31533] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:51:59 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31533-17)
- Nov 01 05:51:59 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31533-17) state:2
- Nov 01 05:51:59 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31533-12)
- Nov 01 05:51:59 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31533-12) state:2
- drbd[31503]: 2012/11/01_05:51:59 DEBUG: drives: Exit code 0
- drbd[31503]: 2012/11/01_05:51:59 DEBUG: drives: Command output:
- Nov 01 05:51:59 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31503 - exited with rc=0
- Nov 01 05:51:59 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31503 [ ]
- Nov 01 05:52:01 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31553-17)
- Nov 01 05:52:01 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31553-17)
- Nov 01 05:52:01 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31553-17)
- Nov 01 05:52:01 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31553-17) state:2
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] IPC credentials authenticated (9108-31570-31)
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] connecting to client [31570]
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:52:02 [9089] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] lib_init_fn: conn=0x7f76db4897e0
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] HUP conn (9108-31570-31)
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] qb_ipcs_disconnect(9108-31570-31) state:2
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:52:02 [9089] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] exit_fn for conn=0x7f76db4897e0
- Nov 01 05:52:02 [9089] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-9108-31570-31-header
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-9108-31570-31-header
- Nov 01 05:52:02 [9089] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-9108-31570-31-header
- Nov 01 05:52:05 [31614] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:52:05 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31614-12)
- Nov 01 05:52:05 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31614-12)
- Nov 01 05:52:05 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:52:05 [31614] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:52:05 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:52:05 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31614-12)
- Nov 01 05:52:05 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31614-12) state:2
- Nov 01 05:52:05 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:31468 - exited with rc=0
- Nov 01 05:52:15 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:52:19 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[31676]: 2012/11/01_05:52:19 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:52:19 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31706-17)
- Nov 01 05:52:19 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31706-17)
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:52:19 [31706] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:52:19 [31706] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:52:19 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31706-12)
- Nov 01 05:52:19 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31706-12)
- Nov 01 05:52:19 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:52:19 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:52:19 [31706] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:52:19 [31706] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:52:19 [31706] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:52:19 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31706-17)
- Nov 01 05:52:19 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31706-17) state:2
- Nov 01 05:52:19 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31706-12)
- Nov 01 05:52:19 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31706-12) state:2
- drbd[31676]: 2012/11/01_05:52:19 DEBUG: drives: Exit code 0
- drbd[31676]: 2012/11/01_05:52:19 DEBUG: drives: Command output:
- Nov 01 05:52:19 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31676 - exited with rc=0
- Nov 01 05:52:19 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31676 [ ]
- Nov 01 05:52:22 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_daemon_nfs-kernel-server_status_30000
- Nov 01 05:52:22 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31716 - exited with rc=0
- Nov 01 05:52:22 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31716 [ nfsd running ]
- Nov 01 05:52:25 [31736] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:52:25 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31736-12)
- Nov 01 05:52:25 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31736-12)
- Nov 01 05:52:25 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:52:25 [31736] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:52:25 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:52:25 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31736-12)
- Nov 01 05:52:25 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31736-12) state:2
- Nov 01 05:52:25 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:31641 - exited with rc=0
- Nov 01 05:52:35 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:52:39 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[31801]: 2012/11/01_05:52:39 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:52:39 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31831-17)
- Nov 01 05:52:39 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31831-17)
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:52:39 [31831] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:52:39 [31831] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:52:39 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31831-12)
- Nov 01 05:52:39 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31831-12)
- Nov 01 05:52:39 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:52:39 [31831] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:52:39 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:52:39 [31831] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:52:39 [31831] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:52:39 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31831-17)
- Nov 01 05:52:39 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31831-17) state:2
- Nov 01 05:52:39 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31831-12)
- Nov 01 05:52:39 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31831-12) state:2
- drbd[31801]: 2012/11/01_05:52:39 DEBUG: drives: Exit code 0
- drbd[31801]: 2012/11/01_05:52:39 DEBUG: drives: Command output:
- Nov 01 05:52:39 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31801 - exited with rc=0
- Nov 01 05:52:39 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31801 [ ]
- Nov 01 05:52:45 [31859] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:52:45 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31859-12)
- Nov 01 05:52:45 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31859-12)
- Nov 01 05:52:45 [9139] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:52:45 [31859] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:52:45 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: 5000, Stored: 5000
- Nov 01 05:52:45 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31859-12)
- Nov 01 05:52:45 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31859-12) state:2
- Nov 01 05:52:45 [9138] storage1 lrmd: debug: operation_finished: p_ping_monitor_10000:31766 - exited with rc=0
- Nov 01 05:52:52 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_daemon_nfs-kernel-server_status_30000
- Nov 01 05:52:52 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31873 - exited with rc=0
- Nov 01 05:52:52 [9138] storage1 lrmd: debug: operation_finished: p_daemon_nfs-kernel-server_status_30000:31873 [ nfsd running ]
- Nov 01 05:52:55 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_ping_monitor_10000
- Nov 01 05:52:59 [9138] storage1 lrmd: debug: recurring_action_timer: Scheduling another invokation of p_drbd_drives_monitor_20000
- drbd[31923]: 2012/11/01_05:53:00 DEBUG: drives: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
- Nov 01 05:53:00 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31953-17)
- Nov 01 05:53:00 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31953-17)
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section <nodes >
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="402732042" uname="storagequorum" >
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section <instance_attributes id="storagequorum-instance_attributes" >
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section <nvpair id="storagequorum-instance_attributes-standby" name="standby" value="on" />
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section </instance_attributes>
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section </node>
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2499884042" uname="storage1" />
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="2483106826" uname="storage0" />
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section <node id="16777343" uname="localhost" />
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: query_node_uuid: Result section </nodes>
- Nov 01 05:53:00 [31953] storage1 crm_attribute: info: determine_host: Mapped storage1 to 2499884042
- Nov 01 05:53:00 [31953] storage1 crm_attribute: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:53:00 [9139] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (9139-31953-12)
- Nov 01 05:53:00 [9139] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (9139-31953-12)
- Nov 01 05:53:00 [9139] storage1 attrd: debug: attrd_local_callback: update message from crm_attribute: master-p_drbd_drives=10000
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: attrd_update_delegate: Sent update: master-p_drbd_drives=10000 for storage1
- Nov 01 05:53:00 [9139] storage1 attrd: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
- Nov 01 05:53:00 [31953] storage1 crm_attribute: info: main: Update master-p_drbd_drives=10000 sent via attrd
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:53:00 [31953] storage1 crm_attribute: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:00 [31953] storage1 crm_attribute: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:53:00 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31953-17)
- Nov 01 05:53:00 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31953-17) state:2
- Nov 01 05:53:00 [9139] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9139-31953-12)
- Nov 01 05:53:00 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-31953-12) state:2
- drbd[31923]: 2012/11/01_05:53:00 DEBUG: drives: Exit code 0
- drbd[31923]: 2012/11/01_05:53:00 DEBUG: drives: Command output:
- Nov 01 05:53:00 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31923 - exited with rc=0
- Nov 01 05:53:00 [9138] storage1 lrmd: debug: operation_finished: p_drbd_drives_monitor_20000:31923 [ ]
- Nov 01 05:53:01 [9135] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (9135-31970-17)
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (9135-31970-17)
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (9135-31970-17)
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-31970-17) state:2
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: qb_ipc_us_ready: poll(fd 7) got POLLHUP
- Nov 01 05:53:01 [9139] storage1 attrd: debug: qb_ipc_us_ready: poll(fd 6) got POLLHUP
- Nov 01 05:53:01 [9137] storage1 stonith-ng: debug: qb_ipc_us_ready: poll(fd 6) got POLLHUP
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9139] storage1 attrd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipc_us_ready: poll(fd 8) got POLLHUP
- Nov 01 05:53:01 [9137] storage1 stonith-ng: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipc_us_ready: poll(fd 6) got POLLHUP
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9139] storage1 attrd: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
- Nov 01 05:53:01 [9135] storage1 cib: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: cfg_connection_destroy: Connection destroyed
- Nov 01 05:53:01 [9141] storage1 crmd: info: crmd_quorum_destroy: connection closed
- Nov 01 05:53:01 [9137] storage1 stonith-ng: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
- Nov 01 05:53:01 [9139] storage1 attrd: crit: attrd_ais_destroy: Lost connection to Corosync service!
- Nov 01 05:53:01 [9135] storage1 cib: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: qb_ipc_us_ready: poll(fd 8) got POLLHUP
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipc_us_ready: poll(fd 7) got POLLHUP
- Nov 01 05:53:01 [9137] storage1 stonith-ng: error: stonith_peer_ais_destroy: AIS connection terminated
- Nov 01 05:53:01 [9139] storage1 attrd: notice: main: Exiting...
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9135] storage1 cib: error: cib_ais_destroy: Corosync connection lost! Exiting.
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: cpg_connection_destroy: Connection destroyed
- Nov 01 05:53:01 [9137] storage1 stonith-ng: info: stonith_shutdown: Terminating with 2 clients
- Nov 01 05:53:01 [9135] storage1 cib: debug: uninitializeCib: Deallocating the CIB.
- Nov 01 05:53:01 [9139] storage1 attrd: notice: main: Disconnecting client 0x24e0b00, pid=9141...
- Nov 01 05:53:01 [9133] storage1 pacemakerd: notice: pcmk_shutdown_worker: Shuting down Pacemaker
- Nov 01 05:53:01 [9141] storage1 crmd: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
- Nov 01 05:53:01 [9139] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9139-9141-10) state:2
- Nov 01 05:53:01 [9137] storage1 stonith-ng: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:53:01 [9141] storage1 crmd: info: crmd_ais_destroy: connection closed
- Nov 01 05:53:01 [9137] storage1 stonith-ng: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:01 [9133] storage1 pacemakerd: notice: stop_child: Stopping crmd: Sent -15 to process 9141
- Nov 01 05:53:01 [9141] storage1 crmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 01 05:53:01 [9139] storage1 attrd: debug: qb_ipcs_unref: qb_ipcs_unref() - destroying
- Nov 01 05:53:01 [9139] storage1 attrd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Nov 01 05:53:01 [9137] storage1 stonith-ng: debug: qb_ipcs_unref: qb_ipcs_unref() - destroying
- Nov 01 05:53:01 [9139] storage1 attrd: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:53:01 [9141] storage1 crmd: notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
- Nov 01 05:53:01 [9137] storage1 stonith-ng: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9137-9138-13) state:2
- Nov 01 05:53:01 [9139] storage1 attrd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:01 [9141] storage1 crmd: debug: crm_timer_start: Started Shutdown Escalation (I_STOP:1200000ms), src=85
- Nov 01 05:53:01 [9141] storage1 crmd: debug: s_crmd_fsa: Processing I_SHUTDOWN: [ state=S_NOT_DC cause=C_SHUTDOWN origin=crm_shutdown ]
- Nov 01 05:53:01 [9141] storage1 crmd: info: do_shutdown_req: Sending shutdown request to storagequorum
- Nov 01 05:53:01 [9137] storage1 stonith-ng: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9137-9141-10) state:2
- Nov 01 05:53:01 [9139] storage1 attrd: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:53:01 [9139] storage1 attrd: error: attrd_cib_connection_destroy: Connection to the CIB terminated...
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9137] storage1 stonith-ng: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Nov 01 05:53:01 [9137] storage1 stonith-ng: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:53:01 [9141] storage1 crmd: error: send_ais_text: Sending message 63 via cpg: FAILED (rc=2): Library error: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: s_crmd_fsa: Processing I_ERROR: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_shutdown_req ]
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_ipc_us_ready: poll(fd 10) got POLLHUP
- Nov 01 05:53:01 [9137] storage1 stonith-ng: info: main: Done
- Nov 01 05:53:01 [9141] storage1 crmd: error: do_log: FSA: Input I_ERROR from do_shutdown_req() received in state S_NOT_DC
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_ipc_us_ready: poll(fd 9) got POLLHUP
- Nov 01 05:53:01 [9141] storage1 crmd: notice: do_state_transition: State transition S_NOT_DC -> S_RECOVERY [ input=I_ERROR cause=C_FSA_INTERNAL origin=do_shutdown_req ]
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: error: do_recover: Action A_RECOVER (0000000001000000) not supported
- Nov 01 05:53:01 [9141] storage1 crmd: debug: s_crmd_fsa: Processing I_TERMINATE: [ state=S_RECOVERY cause=C_FSA_INTERNAL origin=do_recover ]
- Nov 01 05:53:01 [9138] storage1 lrmd: error: crm_ipc_read: Connection to stonith-ng failed
- Nov 01 05:53:01 [9141] storage1 crmd: error: do_log: FSA: Input I_TERMINATE from do_recover() received in state S_RECOVERY
- Nov 01 05:53:01 [9141] storage1 crmd: info: do_state_transition: State transition S_RECOVERY -> S_TERMINATE [ input=I_TERMINATE cause=C_FSA_INTERNAL origin=do_recover ]
- Nov 01 05:53:01 [9141] storage1 crmd: info: do_shutdown: Disconnecting STONITH...
- Nov 01 05:53:01 [9138] storage1 lrmd: error: mainloop_gio_callback: Connection to stonith-ng[0x2410080] closed (I/O condition=17)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: stonith_api_signoff: Signing out of the STONITH Service
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_ipc_us_ready: poll(fd 10) got POLLHUP
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipc_us_ready: poll(fd 12) got POLLHUP
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: get_xpath_object: No match for //st_notify_disconnect in /notify
- Nov 01 05:53:01 [9141] storage1 crmd: debug: get_xpath_object: No match for //st_notify_disconnect in /notify
- Nov 01 05:53:01 [9141] storage1 crmd: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
- Nov 01 05:53:01 [9138] storage1 lrmd: error: stonith_connection_destroy_cb: LRMD lost STONITH connection
- Nov 01 05:53:01 [9141] storage1 crmd: debug: verify_stopped: Checking for active resources before exit
- Nov 01 05:53:01 [9141] storage1 crmd: debug: cancel_op: Cancelling op 94 for p_drbd_drives (p_drbd_drives:94)
- Nov 01 05:53:01 [9135] storage1 cib: debug: uninitializeCib: The CIB has been deallocated.
- Nov 01 05:53:01 [9135] storage1 cib: info: terminate_cib: cib_ais_destroy: Exiting fast...
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipcs_unref: qb_ipcs_unref() - destroying
- Nov 01 05:53:01 [9135] storage1 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: pcmk_child_exit: Child process attrd exited (pid=9139, rc=1)
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipcs_unref: qb_ipcs_unref() - destroying
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-9139-15) state:2
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000110312 (was 00000000000000000000000000111312)
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-9137-13) state:2
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
- Nov 01 05:53:01 [9135] storage1 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipcs_unref: qb_ipcs_unref() - destroying
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9135-9141-11) state:2
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_shm-response-9135-9141-11-header
- Nov 01 05:53:01 [9138] storage1 lrmd: info: cancel_recurring_action: Cancelling operation p_drbd_drives_monitor_20000
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_shm-event-9135-9141-11-header
- Nov 01 05:53:01 [9133] storage1 pacemakerd: info: pcmk_child_exit: Child process stonith-ng exited (pid=9137, rc=0)
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000010312 (was 00000000000000000000000000110312)
- Nov 01 05:53:01 [9135] storage1 cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_shm-request-9135-9141-11-header
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: process_lrmd_message: Processed lrmd_rsc_cancel operation from ef19f2ec-455a-4908-9652-a34c6feeed99: rc=0, reply=1, notify=0, exit=0
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
- Nov 01 05:53:01 [9135] storage1 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Nov 01 05:53:01 [9141] storage1 crmd: debug: cancel_op: Op 94 for p_drbd_drives (p_drbd_drives:94): cancelled
- Nov 01 05:53:01 [9141] storage1 crmd: debug: cancel_op: Cancelling op 98 for p_daemon_nfs-kernel-server (p_daemon_nfs-kernel-server:98)
- Nov 01 05:53:01 [9138] storage1 lrmd: info: cancel_recurring_action: Cancelling operation p_daemon_nfs-kernel-server_status_30000
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: process_lrmd_message: Processed lrmd_rsc_cancel operation from ef19f2ec-455a-4908-9652-a34c6feeed99: rc=0, reply=1, notify=0, exit=0
- Nov 01 05:53:01 [9141] storage1 crmd: debug: cancel_op: Op 98 for p_daemon_nfs-kernel-server (p_daemon_nfs-kernel-server:98): cancelled
- Nov 01 05:53:01 [9141] storage1 crmd: debug: cancel_op: Cancelling op 96 for p_ping (p_ping:96)
- Nov 01 05:53:01 [9138] storage1 lrmd: info: services_action_cancel: Cancelling op: p_ping_monitor_10000 will occur once operation completes
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: process_lrmd_message: Processed lrmd_rsc_cancel operation from ef19f2ec-455a-4908-9652-a34c6feeed99: rc=0, reply=1, notify=0, exit=0
- Nov 01 05:53:01 [9141] storage1 crmd: debug: cancel_op: Op 96 for p_ping (p_ping:96): cancelled
- Nov 01 05:53:01 [9141] storage1 crmd: error: verify_stopped: Resource stonithstorage0 was active at shutdown. You may ignore this error if it is unmanaged.
- Nov 01 05:53:01 [9141] storage1 crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-lrmd-request-9138-9141-7-header
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_ipcs_dispatch_connection_request: HUP conn (9138-9141-7)
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(9138-9141-7) state:2
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-lrmd-response-9138-9141-7-header
- Nov 01 05:53:01 [9138] storage1 lrmd: info: lrmd_ipc_destroy: LRMD client disconnecting 0x2403c00 - name: crmd id: ef19f2ec-455a-4908-9652-a34c6feeed99
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-lrmd-event-9138-9141-7-header
- Nov 01 05:53:01 [9138] storage1 lrmd: info: services_action_cancel: Cancelling op: p_ping_monitor_10000 will occur once operation completes
- Nov 01 05:53:01 [9141] storage1 crmd: info: lrmd_connection_destroy: connection destroyed
- Nov 01 05:53:01 [9141] storage1 crmd: info: lrm_connection_destroy: LRM Connection disconnected
- Nov 01 05:53:01 [9141] storage1 crmd: info: do_lrm_control: Disconnected from the LRM
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-lrmd-response-9138-9141-7-header
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-lrmd-event-9138-9141-7-header
- Nov 01 05:53:01 [9141] storage1 crmd: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-lrmd-request-9138-9141-7-header
- Nov 01 05:53:01 [9141] storage1 crmd: notice: terminate_cs_connection: Disconnecting from Corosync
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipc_us_ready: poll(fd 7) got POLLHUP
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_force_close: Force free'ing ringbuffer: /dev/shm/qb-cpg-request-9108-9141-29-header
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_force_close: Force free'ing ringbuffer: /dev/shm/qb-cpg-response-9108-9141-29-header
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: pcmk_child_exit: Child process cib exited (pid=9135, rc=64)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_force_close: Force free'ing ringbuffer: /dev/shm/qb-cpg-event-9108-9141-29-header
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000010212 (was 00000000000000000000000000010312)
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipc_us_ready: poll(fd 8) got POLLHUP
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_force_close: Force free'ing ringbuffer: /dev/shm/qb-quorum-request-9108-9141-30-header
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_force_close: Force free'ing ringbuffer: /dev/shm/qb-quorum-response-9108-9141-30-header
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_force_close: Force free'ing ringbuffer: /dev/shm/qb-quorum-event-9108-9141-30-header
- Nov 01 05:53:01 [9141] storage1 crmd: info: crm_cluster_disconnect: Disconnected from corosync
- Nov 01 05:53:01 [9141] storage1 crmd: info: do_ha_control: Disconnected from the cluster
- Nov 01 05:53:01 [9141] storage1 crmd: info: do_cib_control: Disconnecting CIB
- Nov 01 05:53:01 [9141] storage1 crmd: debug: cib_client_del_notify_callback: Removing callback for cib_diff_notify events
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipc_us_ready: poll(fd 6) got POLLHUP
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: notice: crm_ipc_send: Connection to cib_shm closed
- Nov 01 05:53:01 [9141] storage1 crmd: notice: crm_ipc_send: Connection to cib_shm closed
- Nov 01 05:53:01 [9141] storage1 crmd: error: cib_native_perform_op_delegate: Couldn't perform cib_slave operation (timeout=120s): -107: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: error: cib_native_perform_op_delegate: CIB disconnected
- Nov 01 05:53:01 [9141] storage1 crmd: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipc_us_ready: poll(fd 6) got POLLHUP
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_force_close: Force free'ing ringbuffer: /dev/shm/qb-cib_shm-request-9135-9141-11-header
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_force_close: Force free'ing ringbuffer: /dev/shm/qb-cib_shm-response-9135-9141-11-header
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_rb_force_close: Force free'ing ringbuffer: /dev/shm/qb-cib_shm-event-9135-9141-11-header
- Nov 01 05:53:01 [9141] storage1 crmd: info: crmd_cib_connection_destroy: Connection to the CIB terminated...
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipcs_unref: qb_ipcs_unref() - destroying
- Nov 01 05:53:01 [9141] storage1 crmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Nov 01 05:53:01 [9141] storage1 crmd: debug: verify_stopped: Checking for active resources before exit
- Nov 01 05:53:01 [9141] storage1 crmd: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
- Nov 01 05:53:01 [9141] storage1 crmd: error: do_exit: Could not recover from internal error
- Nov 01 05:53:01 [9141] storage1 crmd: info: do_exit: [crmd] stopped (2)
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:01 [9141] storage1 crmd: debug: qb_ipc_us_ready: poll(fd 15) got POLLHUP
- Nov 01 05:53:01 [9141] storage1 crmd: debug: _check_connection_state: interpreting result -107 as a disconnect: Transport endpoint is not connected (107)
- Nov 01 05:53:01 [9141] storage1 crmd: info: free_mem: Dropping I_TERMINATE: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_stop ]
- Nov 01 05:53:01 [9141] storage1 crmd: debug: free_mem: Number of connected clients: 0
- Nov 01 05:53:01 [9141] storage1 crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
- Nov 01 05:53:01 [9141] storage1 crmd: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: pcmk_child_exit: Child process crmd exited (pid=9141, rc=2)
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000010012 (was 00000000000000000000000000010212)
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: pcmk_shutdown_worker: crmd confirmed stopped
- Nov 01 05:53:01 [9133] storage1 pacemakerd: notice: stop_child: Stopping pengine: Sent -15 to process 9140
- Nov 01 05:53:01 [9140] storage1 pengine: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 01 05:53:01 [9140] storage1 pengine: debug: qb_ipcs_unref: qb_ipcs_unref() - destroying
- Nov 01 05:53:01 [9140] storage1 pengine: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Nov 01 05:53:01 [9133] storage1 pacemakerd: info: pcmk_child_exit: Child process pengine exited (pid=9140, rc=0)
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000000012 (was 00000000000000000000000000010012)
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: pcmk_shutdown_worker: pengine confirmed stopped
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: pcmk_shutdown_worker: attrd confirmed stopped
- Nov 01 05:53:01 [9133] storage1 pacemakerd: notice: stop_child: Stopping lrmd: Sent -15 to process 9138
- Nov 01 05:53:01 [9138] storage1 lrmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
- Nov 01 05:53:01 [9138] storage1 lrmd: info: lrmd_shutdown: Terminating with 0 clients
- Nov 01 05:53:01 [9138] storage1 lrmd: debug: qb_ipcs_unref: qb_ipcs_unref() - destroying
- Nov 01 05:53:01 [9138] storage1 lrmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Nov 01 05:53:01 [9133] storage1 pacemakerd: info: pcmk_child_exit: Child process lrmd exited (pid=9138, rc=0)
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000000002 (was 00000000000000000000000000000012)
- Nov 01 05:53:01 [9133] storage1 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: pcmk_shutdown_worker: lrmd confirmed stopped
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: pcmk_shutdown_worker: stonith-ng confirmed stopped
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: pcmk_shutdown_worker: cib confirmed stopped
- Nov 01 05:53:01 [9133] storage1 pacemakerd: notice: pcmk_shutdown_worker: Shutdown complete
- Nov 01 05:53:01 [9133] storage1 pacemakerd: debug: qb_ipcs_unref: qb_ipcs_unref() - destroying
- Nov 01 05:53:01 [9133] storage1 pacemakerd: info: qb_ipcs_us_withdraw: withdrawing server sockets
- Nov 01 05:53:01 [9133] storage1 pacemakerd: info: main: Exiting pacemakerd
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] Token Timeout (1000 ms) retransmit timeout (238 ms)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] token hold (180 ms) retransmits before loss (4 retrans)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] join (50 ms) send_join (0 ms) consensus (1200 ms) merge (200 ms)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] downcheck (1000 ms) fail to recv const (2500 msgs)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] seqno unchanged const (30 rotations) Maximum network MTU 1401
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] window size per rotation (50 messages) maximum messages per rotation (17 messages)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] missed count const (5 messages)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] send threads (0 threads)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] RRP token expired timeout (238 ms)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] RRP token problem counter (2000 ms)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] RRP threshold (10 problem count)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] RRP multicast threshold (100 problem count)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] RRP automatic recovery check timeout (1000 ms)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] RRP mode set to active.
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] heartbeat_failures_allowed (0)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] max_network_delay (50 ms)
- Nov 01 05:53:02 [32038] storage1 corosync debug [TOTEM ] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
- Nov 01 05:53:02 [32038] storage1 corosync notice [TOTEM ] Initializing transport (UDP/IP Multicast).
- Nov 01 05:53:02 [32038] storage1 corosync notice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
- Nov 01 05:53:02 [32038] storage1 corosync notice [TOTEM ] Initializing transport (UDP/IP Multicast).
- Nov 01 05:53:02 [32038] storage1 corosync notice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Receive multicast socket recv buffer size (262142 bytes).
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Transmit multicast socket send buffer size (262142 bytes).
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Local receive multicast loop socket recv buffer size (262142 bytes).
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Local transmit multicast loop socket send buffer size (262142 bytes).
- Nov 01 05:53:03 [32038] storage1 corosync notice [TOTEM ] The network interface [10.52.1.149] is now up.
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Created or loaded sequence id 1c19f8.10.52.1.149 for this ring.
- Nov 01 05:53:03 [32038] storage1 corosync notice [SERV ] Service engine loaded: corosync configuration map access [0]
- Nov 01 05:53:03 [32038] storage1 corosync debug [MAIN ] Initializing IPC on cmap [0]
- Nov 01 05:53:03 [32038] storage1 corosync info [QB ] server name: cmap
- Nov 01 05:53:03 [32038] storage1 corosync notice [SERV ] Service engine loaded: corosync configuration service [1]
- Nov 01 05:53:03 [32038] storage1 corosync debug [MAIN ] Initializing IPC on cfg [1]
- Nov 01 05:53:03 [32038] storage1 corosync info [QB ] server name: cfg
- Nov 01 05:53:03 [32038] storage1 corosync notice [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
- Nov 01 05:53:03 [32038] storage1 corosync debug [MAIN ] Initializing IPC on cpg [2]
- Nov 01 05:53:03 [32038] storage1 corosync info [QB ] server name: cpg
- Nov 01 05:53:03 [32038] storage1 corosync notice [SERV ] Service engine loaded: corosync profile loading service [4]
- Nov 01 05:53:03 [32038] storage1 corosync debug [MAIN ] NOT Initializing IPC on pload [4]
- Nov 01 05:53:03 [32038] storage1 corosync notice [QUORUM] Using quorum provider corosync_votequorum
- Nov 01 05:53:03 [32038] storage1 corosync debug [QUORUM] Reading configuration (runtime: 0)
- Nov 01 05:53:03 [32038] storage1 corosync debug [QUORUM] No nodelist defined or our node is not in the nodelist
- Nov 01 05:53:03 [32038] storage1 corosync debug [QUORUM] total_votes=1, expected_votes=3
- Nov 01 05:53:03 [32038] storage1 corosync debug [QUORUM] node 2499884042 state=1, votes=1, expected=3
- Nov 01 05:53:03 [32038] storage1 corosync debug [QUORUM] flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:03 [32038] storage1 corosync notice [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
- Nov 01 05:53:03 [32038] storage1 corosync debug [MAIN ] Initializing IPC on votequorum [5]
- Nov 01 05:53:03 [32038] storage1 corosync info [QB ] server name: votequorum
- Nov 01 05:53:03 [32038] storage1 corosync notice [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
- Nov 01 05:53:03 [32038] storage1 corosync debug [MAIN ] Initializing IPC on quorum [3]
- Nov 01 05:53:03 [32038] storage1 corosync info [QB ] server name: quorum
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Receive multicast socket recv buffer size (262142 bytes).
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Transmit multicast socket send buffer size (262142 bytes).
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Local receive multicast loop socket recv buffer size (262142 bytes).
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Local transmit multicast loop socket send buffer size (262142 bytes).
- Nov 01 05:53:03 [32038] storage1 corosync notice [TOTEM ] The network interface [192.168.7.149] is now up.
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] entering GATHER state from 15.
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Creating commit token because I am the rep.
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Saving state aru 0 high seq received 0
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Storing new sequence id for ring 1c19fc
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] entering COMMIT state.
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] entering RECOVERY state.
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] position [0] member 10.52.1.149:
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] previous ring seq 1c19f8 rep 10.52.1.149
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] aru 0 high delivered 0 received flag 1
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Did not need to originate any messages in recovery.
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] install seq 0 aru 0 high seq received 0
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:03 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:03 [32038] storage1 corosync debug [MAIN ] Denied connection, corosync is not ready
- Nov 01 05:53:03 [32038] storage1 corosync error [QB ] Error in connection setup (32039-32046-24): Resource temporarily unavailable (11)
- Nov 01 05:53:03 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32046-24) state:0
- Nov 01 05:53:03 [32038] storage1 corosync debug [MAIN ] Denied connection, corosync is not ready
- Nov 01 05:53:03 [32038] storage1 corosync error [QB ] Error in connection setup (32039-32048-25): Resource temporarily unavailable (11)
- Nov 01 05:53:03 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32048-25) state:0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] The token was lost in the RECOVERY state.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Restoring instance->my_aru 0 my high seq received 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] entering GATHER state from 5.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Creating commit token because I am the rep.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Storing new sequence id for ring 1c1a00
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] entering COMMIT state.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] entering RECOVERY state.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] position [0] member 10.52.1.149:
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] previous ring seq 1c19f8 rep 10.52.1.149
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] aru 0 high delivered 0 received flag 1
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Did not need to originate any messages in recovery.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Sending initial ORF token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] install seq 0 aru 0 high seq received 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] install seq 0 aru 0 high seq received 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] install seq 0 aru 0 high seq received 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Resetting old ring state
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] recovery to regular 1-0
- Nov 01 05:53:04 [32038] storage1 corosync debug [MAIN ] Member joined: r(0) ip(10.52.1.149) r(1) ip(192.168.7.149)
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync notice [QUORUM] Members[1]: -1795083254
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] entering OPERATIONAL state.
- Nov 01 05:53:04 [32038] storage1 corosync notice [TOTEM ] A processor joined or left the membership and a new membership (10.52.1.149:1841664) was formed.
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2499884042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[2499884042]: votes: 1, expected: 3 flags: 8
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] total_votes=1, expected_votes=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2499884042 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2499884042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[2499884042]: votes: 1, expected: 3 flags: 8
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] total_votes=1, expected_votes=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2499884042 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2499884042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[0]: votes: 0, expected: 0 flags: 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [SYNC ] Committing synchronization for corosync configuration map access
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] Single node sync -> no action
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] comparing: sender r(0) ip(10.52.1.149) r(1) ip(192.168.7.149) ; members(old:0 left:0)
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] chosen downlist: sender r(0) ip(10.52.1.149) r(1) ip(192.168.7.149) ; members(old:0 left:0)
- Nov 01 05:53:04 [32038] storage1 corosync debug [SYNC ] Committing synchronization for corosync cluster closed process group service v1.01
- Nov 01 05:53:04 [32038] storage1 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] entering GATHER state from 11.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Saving state aru 5 high seq received 5
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Storing new sequence id for ring 1c1a04
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] entering COMMIT state.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] entering RECOVERY state.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] TRANS [0] member 10.52.1.149:
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] position [0] member 10.52.1.24:
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] previous ring seq 1c19fc rep 10.52.1.24
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] aru d high delivered d received flag 1
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] position [1] member 10.52.1.148:
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] previous ring seq 1c19fc rep 10.52.1.24
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] aru d high delivered d received flag 1
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] position [2] member 10.52.1.149:
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] previous ring seq 1c1a00 rep 10.52.1.149
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] aru 5 high delivered 5 received flag 1
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Did not need to originate any messages in recovery.
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] got commit token
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru ffffffff
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] install seq 0 aru 0 high seq received 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] install seq 0 aru 0 high seq received 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] install seq 0 aru 0 high seq received 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] install seq 0 aru 0 high seq received 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] Resetting old ring state
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] recovery to regular 1-0
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync debug [MAIN ] Member joined: r(0) ip(10.52.1.24) r(1) ip(192.168.7.24)
- Nov 01 05:53:04 [32038] storage1 corosync debug [MAIN ] Member joined: r(0) ip(10.52.1.148) r(1) ip(192.168.7.148)
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync notice [QUORUM] Members[3]: 402732042 -1811860470 -1795083254
- Nov 01 05:53:04 [32038] storage1 corosync debug [TOTEM ] entering OPERATIONAL state.
- Nov 01 05:53:04 [32038] storage1 corosync notice [TOTEM ] A processor joined or left the membership and a new membership (10.52.1.24:1841668) was formed.
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2483106826
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[2483106826]: votes: 1, expected: 3 flags: 1
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] total_votes=2, expected_votes=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2483106826 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2499884042 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] lowest node id: -1811860470 us: -1795083254
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] quorum regained, resuming activity
- Nov 01 05:53:04 [32038] storage1 corosync notice [QUORUM] This node is within the primary component and will provide service.
- Nov 01 05:53:04 [32038] storage1 corosync notice [QUORUM] Members[3]: 402732042 -1811860470 -1795083254
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2483106826
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[0]: votes: 0, expected: 0 flags: 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2483106826
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[2483106826]: votes: 1, expected: 3 flags: 1
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2483106826
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[0]: votes: 0, expected: 0 flags: 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2499884042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[2499884042]: votes: 1, expected: 3 flags: 8
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: No Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] total_votes=2, expected_votes=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2483106826 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2499884042 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] lowest node id: -1811860470 us: -1795083254
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2499884042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[0]: votes: 0, expected: 0 flags: 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2499884042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[2499884042]: votes: 1, expected: 3 flags: 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] total_votes=2, expected_votes=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2483106826 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2499884042 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] lowest node id: -1811860470 us: -1795083254
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 2499884042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[0]: votes: 0, expected: 0 flags: 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 402732042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[402732042]: votes: 1, expected: 3 flags: 1
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] total_votes=3, expected_votes=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2483106826 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 2499884042 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] node 402732042 state=1, votes=1, expected=3
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] lowest node id: -1811860470 us: -1795083254
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 402732042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[0]: votes: 0, expected: 0 flags: 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 402732042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[402732042]: votes: 1, expected: 3 flags: 1
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] got nodeinfo message from cluster node 402732042
- Nov 01 05:53:04 [32038] storage1 corosync debug [QUORUM] nodeinfo message[0]: votes: 0, expected: 0 flags: 0
- Nov 01 05:53:04 [32038] storage1 corosync debug [SYNC ] Committing synchronization for corosync configuration map access
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] My config version is 0 -> no action
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] got joinlist message from node 1801340a
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] comparing: sender r(0) ip(10.52.1.148) r(1) ip(192.168.7.148) ; members(old:2 left:0)
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] comparing: sender r(0) ip(10.52.1.24) r(1) ip(192.168.7.24) ; members(old:2 left:0)
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] comparing: sender r(0) ip(10.52.1.149) r(1) ip(192.168.7.149) ; members(old:1 left:0)
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] chosen downlist: sender r(0) ip(10.52.1.24) r(1) ip(192.168.7.24) ; members(old:2 left:0)
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] got joinlist message from node 9401340a
- Nov 01 05:53:04 [32038] storage1 corosync debug [SYNC ] Committing synchronization for corosync cluster closed process group service v1.01
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[0] group:crmd\x00, ip:r(0) ip(10.52.1.148) r(1) ip(192.168.7.148) , pid:19031
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[1] group:attrd\x00, ip:r(0) ip(10.52.1.148) r(1) ip(192.168.7.148) , pid:19029
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[2] group:stonith-ng\x00, ip:r(0) ip(10.52.1.148) r(1) ip(192.168.7.148) , pid:19027
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[3] group:cib\x00, ip:r(0) ip(10.52.1.148) r(1) ip(192.168.7.148) , pid:19025
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[4] group:pcmk\x00, ip:r(0) ip(10.52.1.148) r(1) ip(192.168.7.148) , pid:19023
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[5] group:crmd\x00, ip:r(0) ip(10.52.1.24) r(1) ip(192.168.7.24) , pid:17871
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[6] group:attrd\x00, ip:r(0) ip(10.52.1.24) r(1) ip(192.168.7.24) , pid:17869
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[7] group:stonith-ng\x00, ip:r(0) ip(10.52.1.24) r(1) ip(192.168.7.24) , pid:17867
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[8] group:cib\x00, ip:r(0) ip(10.52.1.24) r(1) ip(192.168.7.24) , pid:17865
- Nov 01 05:53:04 [32038] storage1 corosync debug [CPG ] joinlist_messages[9] group:pcmk\x00, ip:r(0) ip(10.52.1.24) r(1) ip(192.168.7.24) , pid:17863
- Nov 01 05:53:04 [32038] storage1 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32050-26)
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] connecting to client [32050]
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:04 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] HUP conn (32039-32050-26)
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32050-26) state:2
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:53:04 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:53:04 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cfg-response-32039-32050-26-header
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cfg-event-32039-32050-26-header
- Nov 01 05:53:04 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cfg-request-32039-32050-26-header
- Nov 01 05:53:05 [32055] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 5 retries remaining
- Nov 01 05:53:05 [32055] storage1 attrd_updater: info: crm_ipc_connect: Could not establish attrd connection: Connection refused (32)
- Nov 01 05:53:05 [32055] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 4 retries remaining
- Nov 01 05:53:05 [32055] storage1 attrd_updater: info: crm_ipc_connect: Could not establish attrd connection: Connection refused (32)
- Nov 01 05:53:06 [32055] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 3 retries remaining
- Nov 01 05:53:06 [32055] storage1 attrd_updater: info: crm_ipc_connect: Could not establish attrd connection: Connection refused (32)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32074-26)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] connecting to client [32074]
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc3ca700
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32074-27)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] connecting to client [32074]
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc3cb9d0
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] HUP conn (32039-32074-27)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32074-27) state:2
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc3cb9d0
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-32074-27-header
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-32074-27-header
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-32074-27-header
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] HUP conn (32039-32074-26)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32074-26) state:2
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc3ca700
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-32074-26-header
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: read_config: User configured file based logging and explicitly disabled syslog.
- Nov 01 05:53:06 [32074] storage1 pacemakerd: notice: main: Starting Pacemaker 1.1.8 (Build: 1f8858c): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart systemd corosync-native snmp libesmtp
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-32074-26-header
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: main: Maximum core file size is: 18446744073709551615
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-32074-26-header
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32074-26)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] connecting to client [32074]
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: cluster_connect_cfg: Our nodeid: -1795083254
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32074-27)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] connecting to client [32074]
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:06 [32038] storage1 corosync debug [CPG ] lib_init_fn: conn=0x7fb8bc3ca700, cpd=0x7fb8bc3cd4e4
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: cluster_connect_cpg: Our nodeid: -1795083254
- Nov 01 05:53:06 [32074] storage1 pacemakerd: notice: update_node_processes: 0x11d6580 Node 2499884042 now known as storage1, was:
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000000002 (was 00000000000000000000000000000000)
- Nov 01 05:53:06 [32038] storage1 corosync debug [CPG ] got procjoin message from cluster node -1795083254 (r(0) ip(10.52.1.149) r(1) ip(192.168.7.149) ) for pid 32074
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: start_child: Forked child 32076 for process cib
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000000102 (was 00000000000000000000000000000002)
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: start_child: Forked child 32078 for process stonith-ng
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000100102 (was 00000000000000000000000000000102)
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: start_child: Forked child 32079 for process lrmd
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000100112 (was 00000000000000000000000000100102)
- Nov 01 05:53:06 [32078] storage1 stonith-ng: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
- Nov 01 05:53:06 [32078] storage1 stonith-ng: info: get_cluster_type: Cluster type is: 'corosync'
- Nov 01 05:53:06 [32078] storage1 stonith-ng: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: start_child: Forked child 32080 for process attrd
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000101112 (was 00000000000000000000000000100112)
- Nov 01 05:53:06 [32076] storage1 cib: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32078-28)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] connecting to client [32078]
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: start_child: Forked child 32081 for process pengine
- Nov 01 05:53:06 [32076] storage1 cib: info: get_cluster_type: Cluster type is: 'corosync'
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000111112 (was 00000000000000000000000000101112)
- Nov 01 05:53:06 [32076] storage1 cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: start_child: Forked child 32082 for process crmd
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: update_node_processes: Node storage1 now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000111112)
- Nov 01 05:53:06 [32074] storage1 pacemakerd: info: main: Starting mainloop
- Nov 01 05:53:06 [32074] storage1 pacemakerd: notice: update_node_processes: 0x13da8c0 Node 2483106826 now known as storage0, was:
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: update_node_processes: Node storage0 now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000000000)
- Nov 01 05:53:06 [32074] storage1 pacemakerd: notice: update_node_processes: 0x13da100 Node 402732042 now known as storagequorum, was:
- Nov 01 05:53:06 [32074] storage1 pacemakerd: debug: update_node_processes: Node storagequorum now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000000000)
- Nov 01 05:53:06 [32076] storage1 cib: info: validate_with_relaxng: Creating RNG parser context
- Nov 01 05:53:06 [32079] storage1 lrmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
- Nov 01 05:53:06 [32080] storage1 attrd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Nov 01 05:53:06 [32079] storage1 lrmd: info: qb_ipcs_us_publish: server name: lrmd
- Nov 01 05:53:06 [32079] storage1 lrmd: info: main: Starting
- Nov 01 05:53:06 [32080] storage1 attrd: info: main: Starting up
- Nov 01 05:53:06 [32080] storage1 attrd: info: get_cluster_type: Cluster type is: 'corosync'
- Nov 01 05:53:06 [32080] storage1 attrd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Nov 01 05:53:06 [32078] storage1 stonith-ng: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32078] storage1 stonith-ng: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32078] storage1 stonith-ng: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:06 [32038] storage1 corosync debug [CPG ] lib_init_fn: conn=0x7fb8bc3cfda0, cpd=0x7fb8bc3d03c4
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32080-29)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] connecting to client [32080]
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:06 [32082] storage1 crmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Nov 01 05:53:06 [32038] storage1 corosync debug [CPG ] lib_init_fn: conn=0x7fb8bc3d17e0, cpd=0x7fb8bc3d1ea4
- Nov 01 05:53:06 [32080] storage1 attrd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32082] storage1 crmd: notice: main: CRM Git Version: 1f8858c
- Nov 01 05:53:06 [32080] storage1 attrd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32080] storage1 attrd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32078] storage1 stonith-ng: debug: crm_get_peer: Creating entry for node (null)/2499884042
- Nov 01 05:53:06 [32080] storage1 attrd: debug: crm_get_peer: Creating entry for node (null)/2499884042
- Nov 01 05:53:06 [32078] storage1 stonith-ng: info: crm_get_peer: Node <null> now has id: 2499884042
- Nov 01 05:53:06 [32080] storage1 attrd: info: crm_get_peer: Node <null> now has id: 2499884042
- Nov 01 05:53:06 [32078] storage1 stonith-ng: info: crm_update_peer_proc: init_cpg_connection: Node (null)[-1795083254] - corosync-cpg is now online
- Nov 01 05:53:06 [32078] storage1 stonith-ng: info: init_cs_connection_once: Connection to 'corosync': established
- Nov 01 05:53:06 [32080] storage1 attrd: info: crm_update_peer_proc: init_cpg_connection: Node (null)[-1795083254] - corosync-cpg is now online
- Nov 01 05:53:06 [32078] storage1 stonith-ng: info: crm_get_peer: Node 2499884042 is now known as storage1
- Nov 01 05:53:06 [32080] storage1 attrd: info: init_cs_connection_once: Connection to 'corosync': established
- Nov 01 05:53:06 [32082] storage1 crmd: debug: crmd_init: Starting crmd
- Nov 01 05:53:06 [32078] storage1 stonith-ng: info: crm_get_peer: Node 2499884042 has uuid 2499884042
- Nov 01 05:53:06 [32080] storage1 attrd: info: crm_get_peer: Node 2499884042 is now known as storage1
- Nov 01 05:53:06 [32080] storage1 attrd: info: crm_get_peer: Node 2499884042 has uuid 2499884042
- Nov 01 05:53:06 [32080] storage1 attrd: info: main: Cluster connection active
- Nov 01 05:53:06 [32080] storage1 attrd: info: qb_ipcs_us_publish: server name: attrd
- Nov 01 05:53:06 [32082] storage1 crmd: debug: s_crmd_fsa: Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
- Nov 01 05:53:06 [32082] storage1 crmd: debug: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
- Nov 01 05:53:06 [32080] storage1 attrd: info: main: Accepting attribute updates
- Nov 01 05:53:06 [32082] storage1 crmd: debug: do_startup: Registering Signal Handlers
- Nov 01 05:53:06 [32082] storage1 crmd: debug: do_startup: Creating CIB and LRM objects
- Nov 01 05:53:06 [32080] storage1 attrd: notice: main: Starting mainloop...
- Nov 01 05:53:06 [32082] storage1 crmd: info: get_cluster_type: Cluster type is: 'corosync'
- Nov 01 05:53:06 [32082] storage1 crmd: info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
- Nov 01 05:53:06 [32082] storage1 crmd: debug: cib_native_signon_raw: Connection unsuccessful (0 (nil))
- Nov 01 05:53:06 [32082] storage1 crmd: debug: cib_native_signon_raw: Connection to CIB failed: Transport endpoint is not connected
- Nov 01 05:53:06 [32082] storage1 crmd: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:53:06 [32038] storage1 corosync debug [CPG ] got procjoin message from cluster node -1795083254 (r(0) ip(10.52.1.149) r(1) ip(192.168.7.149) ) for pid 32078
- Nov 01 05:53:06 [32038] storage1 corosync debug [CPG ] got procjoin message from cluster node -1795083254 (r(0) ip(10.52.1.149) r(1) ip(192.168.7.149) ) for pid 32080
- Nov 01 05:53:06 [32080] storage1 attrd: info: pcmk_cpg_membership: Joined[0.0] attrd.-1795083254
- Nov 01 05:53:06 [32080] storage1 attrd: debug: crm_get_peer: Creating entry for node (null)/402732042
- Nov 01 05:53:06 [32080] storage1 attrd: info: crm_get_peer: Node <null> now has id: 402732042
- Nov 01 05:53:06 [32080] storage1 attrd: info: pcmk_cpg_membership: Member[0.0] attrd.402732042
- Nov 01 05:53:06 [32078] storage1 stonith-ng: info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
- Nov 01 05:53:06 [32080] storage1 attrd: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[402732042] - corosync-cpg is now online
- Nov 01 05:53:06 [32080] storage1 attrd: debug: crm_get_peer: Creating entry for node (null)/2483106826
- Nov 01 05:53:06 [32080] storage1 attrd: info: crm_get_peer: Node <null> now has id: 2483106826
- Nov 01 05:53:06 [32078] storage1 stonith-ng: debug: cib_native_signon_raw: Connection unsuccessful (0 (nil))
- Nov 01 05:53:06 [32080] storage1 attrd: info: pcmk_cpg_membership: Member[0.1] attrd.-1811860470
- Nov 01 05:53:06 [32080] storage1 attrd: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1811860470] - corosync-cpg is now online
- Nov 01 05:53:06 [32078] storage1 stonith-ng: debug: cib_native_signon_raw: Connection to CIB failed: Transport endpoint is not connected
- Nov 01 05:53:06 [32080] storage1 attrd: info: pcmk_cpg_membership: Member[0.2] attrd.-1795083254
- Nov 01 05:53:06 [32078] storage1 stonith-ng: debug: cib_native_signoff: Signing out of the CIB Service
- Nov 01 05:53:06 [32081] storage1 pengine: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Nov 01 05:53:06 [32081] storage1 pengine: debug: main: Checking for old instances of pengine
- Nov 01 05:53:06 [32081] storage1 pengine: info: crm_ipc_connect: Could not establish pengine connection: Connection refused (111)
- Nov 01 05:53:06 [32081] storage1 pengine: debug: main: Terminating previous instance
- Nov 01 05:53:06 [32081] storage1 pengine: debug: main: Init server comms
- Nov 01 05:53:06 [32081] storage1 pengine: info: qb_ipcs_us_publish: server name: pengine
- Nov 01 05:53:06 [32081] storage1 pengine: info: main: Starting pengine
- Nov 01 05:53:06 [32076] storage1 cib: debug: activateCibXml: Triggering CIB write for start op
- Nov 01 05:53:06 [32076] storage1 cib: info: startCib: CIB Initialization completed successfully
- Nov 01 05:53:06 [32076] storage1 cib: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32076-30)
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] connecting to client [32076]
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:06 [32038] storage1 corosync debug [CPG ] lib_init_fn: conn=0x7fb8bc3d81f0, cpd=0x7fb8bc3d8d74
- Nov 01 05:53:06 [32076] storage1 cib: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32076] storage1 cib: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32076] storage1 cib: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:06 [32076] storage1 cib: debug: crm_get_peer: Creating entry for node (null)/2499884042
- Nov 01 05:53:06 [32076] storage1 cib: info: crm_get_peer: Node <null> now has id: 2499884042
- Nov 01 05:53:06 [32076] storage1 cib: info: crm_update_peer_proc: init_cpg_connection: Node (null)[-1795083254] - corosync-cpg is now online
- Nov 01 05:53:06 [32076] storage1 cib: info: init_cs_connection_once: Connection to 'corosync': established
- Nov 01 05:53:06 [32076] storage1 cib: info: crm_get_peer: Node 2499884042 is now known as storage1
- Nov 01 05:53:06 [32076] storage1 cib: info: crm_get_peer: Node 2499884042 has uuid 2499884042
- Nov 01 05:53:06 [32076] storage1 cib: info: qb_ipcs_us_publish: server name: cib_ro
- Nov 01 05:53:06 [32076] storage1 cib: info: qb_ipcs_us_publish: server name: cib_rw
- Nov 01 05:53:06 [32076] storage1 cib: info: qb_ipcs_us_publish: server name: cib_shm
- Nov 01 05:53:06 [32076] storage1 cib: info: cib_init: Starting cib mainloop
- Nov 01 05:53:06 [32038] storage1 corosync debug [CPG ] got procjoin message from cluster node -1795083254 (r(0) ip(10.52.1.149) r(1) ip(192.168.7.149) ) for pid 32076
- Nov 01 05:53:06 [32076] storage1 cib: info: pcmk_cpg_membership: Joined[0.0] cib.-1795083254
- Nov 01 05:53:06 [32076] storage1 cib: debug: crm_get_peer: Creating entry for node (null)/402732042
- Nov 01 05:53:06 [32076] storage1 cib: info: crm_get_peer: Node <null> now has id: 402732042
- Nov 01 05:53:06 [32076] storage1 cib: info: pcmk_cpg_membership: Member[0.0] cib.402732042
- Nov 01 05:53:06 [32076] storage1 cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[402732042] - corosync-cpg is now online
- Nov 01 05:53:06 [32076] storage1 cib: debug: crm_get_peer: Creating entry for node (null)/2483106826
- Nov 01 05:53:06 [32076] storage1 cib: info: crm_get_peer: Node <null> now has id: 2483106826
- Nov 01 05:53:06 [32076] storage1 cib: info: pcmk_cpg_membership: Member[0.1] cib.-1811860470
- Nov 01 05:53:06 [32076] storage1 cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1811860470] - corosync-cpg is now online
- Nov 01 05:53:06 [32076] storage1 cib: info: pcmk_cpg_membership: Member[0.2] cib.-1795083254
- Nov 01 05:53:07 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-32078-11)
- Nov 01 05:53:07 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-32078-11)
- Nov 01 05:53:07 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-32082-13)
- Nov 01 05:53:07 [32076] storage1 cib: debug: qb_ipcs_shm_connect: connecting to client [32082]
- Nov 01 05:53:07 [32076] storage1 cib: debug: qb_rb_open: shm size:524288; real_size:524288; rb->word_size:131072
- Nov 01 05:53:07 [32076] storage1 cib: debug: qb_rb_open: shm size:524288; real_size:524288; rb->word_size:131072
- Nov 01 05:53:07 [32076] storage1 cib: debug: qb_rb_open: shm size:524288; real_size:524288; rb->word_size:131072
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:524288; real_size:524288; rb->word_size:131072
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:524288; real_size:524288; rb->word_size:131072
- Nov 01 05:53:07 [32078] storage1 stonith-ng: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:524288; real_size:524288; rb->word_size:131072
- Nov 01 05:53:07 [32076] storage1 cib: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for crmd (04ed2d7b-d0e9-4049-be9c-8682e33d720a): on
- Nov 01 05:53:07 [32078] storage1 stonith-ng: notice: setup_cib: Watching for stonith topology changes
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: qb_ipcs_us_publish: server name: stonith-ng
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: main: Starting stonith-ng mainloop
- Nov 01 05:53:07 [32082] storage1 crmd: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: pcmk_cpg_membership: Joined[0.0] stonith-ng.-1795083254
- Nov 01 05:53:07 [32078] storage1 stonith-ng: debug: crm_get_peer: Creating entry for node (null)/402732042
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: crm_get_peer: Node <null> now has id: 402732042
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: pcmk_cpg_membership: Member[0.0] stonith-ng.402732042
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[402732042] - corosync-cpg is now online
- Nov 01 05:53:07 [32078] storage1 stonith-ng: debug: st_peer_update_callback: Broadcasting our uname because of node 402732042
- Nov 01 05:53:07 [32078] storage1 stonith-ng: debug: crm_get_peer: Creating entry for node (null)/2483106826
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: crm_get_peer: Node <null> now has id: 2483106826
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: pcmk_cpg_membership: Member[0.1] stonith-ng.-1811860470
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1811860470] - corosync-cpg is now online
- Nov 01 05:53:07 [32078] storage1 stonith-ng: debug: st_peer_update_callback: Broadcasting our uname because of node 2483106826
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: pcmk_cpg_membership: Member[0.2] stonith-ng.-1795083254
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: crm_get_peer: Node 2483106826 is now known as storage0
- Nov 01 05:53:07 [32078] storage1 stonith-ng: debug: st_peer_update_callback: Broadcasting our uname because of node 2483106826
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: crm_get_peer: Node 2483106826 has uuid 2483106826
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: crm_get_peer: Node 402732042 is now known as storagequorum
- Nov 01 05:53:07 [32078] storage1 stonith-ng: debug: st_peer_update_callback: Broadcasting our uname because of node 402732042
- Nov 01 05:53:07 [32078] storage1 stonith-ng: info: crm_get_peer: Node 402732042 has uuid 402732042
- Nov 01 05:53:07 [32076] storage1 cib: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for crmd (164ec415-f12e-41b6-8b99-58f35b7228ab): on
- Nov 01 05:53:07 [32076] storage1 cib: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for crmd (164ec415-f12e-41b6-8b99-58f35b7228ab): on
- Nov 01 05:53:07 [32082] storage1 crmd: info: do_cib_control: CIB connection established
- Nov 01 05:53:07 [32082] storage1 crmd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32082-31)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] connecting to client [32082]
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [CPG ] lib_init_fn: conn=0x7fb8bc3ce320, cpd=0x7fb8bc3ce894
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: crm_get_peer: Creating entry for node (null)/2499884042
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_get_peer: Node <null> now has id: 2499884042
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_update_peer_proc: init_cpg_connection: Node (null)[-1795083254] - corosync-cpg is now online
- Nov 01 05:53:07 [32082] storage1 crmd: info: init_cs_connection_once: Connection to 'corosync': established
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_get_peer: Node 2499884042 is now known as storage1
- Nov 01 05:53:07 [32082] storage1 crmd: info: peer_update_callback: storage1 is now (null)
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_get_peer: Node 2499884042 has uuid 2499884042
- Nov 01 05:53:07 [32082] storage1 crmd: debug: init_quorum_connection: Configuring Pacemaker to obtain quorum from Corosync
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32082-32)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] connecting to client [32082]
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [CPG ] got procjoin message from cluster node -1795083254 (r(0) ip(10.52.1.149) r(1) ip(192.168.7.149) ) for pid 32082
- Nov 01 05:53:07 [32082] storage1 crmd: notice: init_quorum_connection: Quorum acquired
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32082-33)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] connecting to client [32082]
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-request-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-response-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-event-32039-32082-33-header
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] HUP conn (32039-32082-33)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32082-33) state:2
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-32082-33-header
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-32082-33-header
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-32082-33-header
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32082-33)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] connecting to client [32082]
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-request-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-response-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-event-32039-32082-33-header
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] HUP conn (32039-32082-33)
- Nov 01 05:53:07 [32082] storage1 crmd: info: do_ha_control: Connected to the cluster
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32082-33) state:2
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: do_lrm_control: Connecting to the LRM
- Nov 01 05:53:07 [32082] storage1 crmd: info: lrmd_api_connect: Connecting to lrmd
- Nov 01 05:53:07 [32079] storage1 lrmd: info: lrmd_ipc_accept: Accepting client connection: 0xec1c00 pid=32082 for uid=107 gid=0
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-32082-33-header
- Nov 01 05:53:07 [32079] storage1 lrmd: debug: handle_new_connection: IPC credentials authenticated (32079-32082-7)
- Nov 01 05:53:07 [32079] storage1 lrmd: debug: qb_ipcs_shm_connect: connecting to client [32082]
- Nov 01 05:53:07 [32079] storage1 lrmd: debug: qb_rb_open: shm size:20480; real_size:20480; rb->word_size:5120
- Nov 01 05:53:07 [32079] storage1 lrmd: debug: qb_rb_open: shm size:20480; real_size:20480; rb->word_size:5120
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-32082-33-header
- Nov 01 05:53:07 [32079] storage1 lrmd: debug: qb_rb_open: shm size:20480; real_size:20480; rb->word_size:5120
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:20480; real_size:20480; rb->word_size:5120
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:20480; real_size:20480; rb->word_size:5120
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:20480; real_size:20480; rb->word_size:5120
- Nov 01 05:53:07 [32079] storage1 lrmd: debug: process_lrmd_message: Processed register operation from f0b42b02-ba29-4145-8bf0-993927ad38ff: rc=0, reply=0, notify=0, exit=0
- Nov 01 05:53:07 [32082] storage1 crmd: debug: do_lrm_control: LRM connection established
- Nov 01 05:53:07 [32082] storage1 crmd: info: do_started: Delaying start, no membership data (0000000000100000)
- Nov 01 05:53:07 [32082] storage1 crmd: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
- Nov 01 05:53:07 [32082] storage1 crmd: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
- Nov 01 05:53:07 [32076] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/3, version=0.144.1): OK (rc=0)
- Nov 01 05:53:07 [32082] storage1 crmd: info: pcmk_quorum_notification: Membership 1841668: quorum retained (3)
- Nov 01 05:53:07 [32082] storage1 crmd: debug: pcmk_quorum_notification: Member[0] 402732042
- Nov 01 05:53:07 [32082] storage1 crmd: debug: crm_get_peer: Creating entry for node (null)/402732042
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_get_peer: Node <null> now has id: 402732042
- Nov 01 05:53:07 [32082] storage1 crmd: info: pcmk_quorum_notification: Obtaining name for new node 402732042
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32082-33)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] connecting to client [32082]
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32082-34)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] connecting to client [32082]
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: notice: corosync_node_name: Inferred node name 'storagequorum' for nodeid 402732042 from DNS
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-request-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-response-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-event-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_get_peer: Node 402732042 is now known as storagequorum
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] HUP conn (32039-32082-33)
- Nov 01 05:53:07 [32082] storage1 crmd: info: peer_update_callback: storagequorum is now (null)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32082-33) state:2
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_get_peer: Node 402732042 has uuid 402732042
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:53:07 [32082] storage1 crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node storagequorum[402732042] - state is now member
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:53:07 [32082] storage1 crmd: info: peer_update_callback: storagequorum is now member (was (null))
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:53:07 [32082] storage1 crmd: debug: pcmk_quorum_notification: Member[1] -1811860470
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: crm_get_peer: Creating entry for node (null)/2483106826
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_get_peer: Node <null> now has id: 2483106826
- Nov 01 05:53:07 [32082] storage1 crmd: info: pcmk_quorum_notification: Obtaining name for new node 2483106826
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-32082-33-header
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-32082-33-header
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32082-33)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] connecting to client [32082]
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32082-35)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] connecting to client [32082]
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_open: shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:53:07 [32082] storage1 crmd: notice: corosync_node_name: Inferred node name 'storage0' for nodeid 2483106826 from DNS
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-request-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-response-32039-32082-33-header
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] HUP conn (32039-32082-33)
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32082-33) state:2
- Nov 01 05:53:07 [32082] storage1 crmd: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cmap-event-32039-32082-33-header
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_get_peer: Node 2483106826 is now known as storage0
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:53:07 [32082] storage1 crmd: info: peer_update_callback: storage0 is now (null)
- Nov 01 05:53:07 [32082] storage1 crmd: info: crm_get_peer: Node 2483106826 has uuid 2483106826
- Nov 01 05:53:07 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node storage0[2483106826] - state is now member
- Nov 01 05:53:07 [32082] storage1 crmd: info: peer_update_callback: storage0 is now member (was (null))
- Nov 01 05:53:07 [32082] storage1 crmd: debug: pcmk_quorum_notification: Member[2] -1795083254
- Nov 01 05:53:07 [32082] storage1 crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node storage1[2499884042] - state is now member
- Nov 01 05:53:07 [32082] storage1 crmd: info: peer_update_callback: storage1 is now member (was (null))
- Nov 01 05:53:07 [32082] storage1 crmd: debug: post_cache_update: Updated cache after membership event 1841668.
- Nov 01 05:53:07 [32082] storage1 crmd: debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA
- Nov 01 05:53:07 [32082] storage1 crmd: info: do_started: Delaying start, Config not read (0000000000000040)
- Nov 01 05:53:07 [32082] storage1 crmd: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
- Nov 01 05:53:07 [32082] storage1 crmd: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: debug: config_query_callback: Call 4 : Parsing CIB options
- Nov 01 05:53:07 [32082] storage1 crmd: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Nov 01 05:53:07 [32082] storage1 crmd: debug: config_query_callback: Checking for expired actions every 900000ms
- Nov 01 05:53:07 [32082] storage1 crmd: debug: do_started: Init server comms
- Nov 01 05:53:07 [32082] storage1 crmd: info: qb_ipcs_us_publish: server name: crmd
- Nov 01 05:53:07 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-32082-33-header
- Nov 01 05:53:07 [32082] storage1 crmd: notice: do_started: The local CRM is operational
- Nov 01 05:53:07 [32082] storage1 crmd: debug: do_election_check: Ignore election check: we not in an election
- Nov 01 05:53:07 [32082] storage1 crmd: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
- Nov 01 05:53:07 [32082] storage1 crmd: debug: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
- Nov 01 05:53:07 [32082] storage1 crmd: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
- Nov 01 05:53:07 [32076] storage1 cib: info: crm_get_peer: Node 402732042 is now known as storagequorum
- Nov 01 05:53:07 [32076] storage1 cib: info: crm_get_peer: Node 402732042 has uuid 402732042
- Nov 01 05:53:07 [32076] storage1 cib: info: cib_process_diff: Diff 0.144.136 -> 0.144.137 from storagequorum not applied to 0.144.1: current "num_updates" is less than required
- Nov 01 05:53:07 [32076] storage1 cib: info: cib_server_process_diff: Requesting re-sync from peer
- Nov 01 05:53:07 [32076] storage1 cib: info: cib_process_replace: Digest matched on replace from storagequorum: 192c557a6e3c64906f4577534aaa8af7
- Nov 01 05:53:07 [32076] storage1 cib: info: cib_process_replace: Replaced 0.144.1 with 0.144.137 from storagequorum
- Nov 01 05:53:07 [32076] storage1 cib: debug: activateCibXml: Triggering CIB write for cib_replace op
- Nov 01 05:53:07 [32076] storage1 cib: info: cib_replace_notify: Replaced: 0.144.1 -> 0.144.137 from storagequorum
- Nov 01 05:53:08 [32055] storage1 attrd_updater: info: attrd_update_delegate: Connecting to cluster... 2 retries remaining
- Nov 01 05:53:08 [32080] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (32080-32055-8)
- Nov 01 05:53:08 [32080] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (32080-32055-8)
- Nov 01 05:53:08 [32055] storage1 attrd_updater: debug: attrd_update_delegate: Sent update: p_ping=5000 for localhost
- Nov 01 05:53:08 [32080] storage1 attrd: debug: attrd_local_callback: update message from attrd_updater: p_ping=5000
- Nov 01 05:53:08 [32080] storage1 attrd: info: find_hash_entry: Creating hash entry for p_ping
- Nov 01 05:53:08 [32080] storage1 attrd: debug: attrd_local_callback: Supplied: 5000, Current: (null), Stored: (null)
- Nov 01 05:53:08 [32080] storage1 attrd: debug: attrd_local_callback: New value of p_ping is 5000
- Nov 01 05:53:08 [32080] storage1 attrd: debug: qb_ipcs_dispatch_connection_request: HUP conn (32080-32055-8)
- Nov 01 05:53:08 [32080] storage1 attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32080-32055-8) state:2
- Nov 01 05:53:08 [32082] storage1 crmd: debug: do_cl_join_query: Querying for a DC
- Nov 01 05:53:08 [32082] storage1 crmd: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=14
- Nov 01 05:53:08 [32082] storage1 crmd: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
- Nov 01 05:53:08 [32082] storage1 crmd: info: pcmk_cpg_membership: Joined[0.0] crmd.-1795083254
- Nov 01 05:53:08 [32082] storage1 crmd: info: pcmk_cpg_membership: Member[0.0] crmd.402732042
- Nov 01 05:53:08 [32082] storage1 crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node storagequorum[402732042] - corosync-cpg is now online
- Nov 01 05:53:08 [32082] storage1 crmd: info: peer_update_callback: Client storagequorum/peer now has status [online] (DC=<null>)
- Nov 01 05:53:08 [32082] storage1 crmd: info: pcmk_cpg_membership: Member[0.1] crmd.-1811860470
- Nov 01 05:53:08 [32082] storage1 crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node storage0[-1811860470] - corosync-cpg is now online
- Nov 01 05:53:08 [32082] storage1 crmd: info: peer_update_callback: Client storage0/peer now has status [online] (DC=<null>)
- Nov 01 05:53:08 [32082] storage1 crmd: info: pcmk_cpg_membership: Member[0.2] crmd.-1795083254
- Nov 01 05:53:08 [32082] storage1 crmd: debug: handle_request: Raising I_JOIN_OFFER: join-89
- Nov 01 05:53:08 [32082] storage1 crmd: debug: handle_request: Raising I_JOIN_OFFER: join-90
- Nov 01 05:53:08 [32082] storage1 crmd: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
- Nov 01 05:53:08 [32082] storage1 crmd: info: update_dc: Set DC to storagequorum (3.0.6)
- Nov 01 05:53:08 [32082] storage1 crmd: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
- Nov 01 05:53:08 [32082] storage1 crmd: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
- Nov 01 05:53:08 [32082] storage1 crmd: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
- Nov 01 05:53:08 [32082] storage1 crmd: debug: te_connect_stonith: Attempting connection to fencing daemon...
- Nov 01 05:53:09 [32078] storage1 stonith-ng: debug: handle_new_connection: IPC credentials authenticated (32078-32082-10)
- Nov 01 05:53:09 [32078] storage1 stonith-ng: debug: qb_ipcs_us_connect: connecting to client (32078-32082-10)
- Nov 01 05:53:09 [32078] storage1 stonith-ng: debug: stonith_command: Processing register from crmd.32082 ( 0)
- Nov 01 05:53:09 [32082] storage1 crmd: debug: stonith_api_signon: Connection to STONITH successful
- Nov 01 05:53:09 [32078] storage1 stonith-ng: debug: stonith_command: Processing st_notify from crmd.32082 ( 0)
- Nov 01 05:53:09 [32078] storage1 stonith-ng: debug: stonith_command: Setting st_notify_disconnect callbacks for crmd.32082 (561a7748-a100-48df-88a4-995128f9418c): ON
- Nov 01 05:53:09 [32078] storage1 stonith-ng: debug: stonith_command: Processing st_notify from crmd.32082 ( 0)
- Nov 01 05:53:09 [32078] storage1 stonith-ng: debug: stonith_command: Setting st_notify_fence callbacks for crmd.32082 (561a7748-a100-48df-88a4-995128f9418c): ON
- Nov 01 05:53:09 [32082] storage1 crmd: debug: join_query_callback: Respond to join offer join-90
- Nov 01 05:53:09 [32082] storage1 crmd: debug: join_query_callback: Acknowledging storagequorum as our DC
- Nov 01 05:53:09 [32082] storage1 crmd: debug: handle_request: Raising I_JOIN_RESULT: join-90
- Nov 01 05:53:09 [32082] storage1 crmd: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
- Nov 01 05:53:09 [32082] storage1 crmd: debug: do_cl_join_finalize_respond: Confirming join join-90: join_ack_nack
- Nov 01 05:53:09 [32082] storage1 crmd: debug: do_cl_join_finalize_respond: join-90: Join complete. Sending local LRM status to storagequorum
- Nov 01 05:53:09 [32082] storage1 crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='storage1']/transient_attributes
- Nov 01 05:53:09 [32082] storage1 crmd: info: update_attrd: Connecting to attrd... 5 retries remaining
- Nov 01 05:53:09 [32080] storage1 attrd: debug: handle_new_connection: IPC credentials authenticated (32080-32082-8)
- Nov 01 05:53:09 [32080] storage1 attrd: debug: qb_ipcs_us_connect: connecting to client (32080-32082-8)
- Nov 01 05:53:09 [32082] storage1 crmd: debug: attrd_update_delegate: Sent update: terminate=(null) for storage1
- Nov 01 05:53:09 [32082] storage1 crmd: debug: attrd_update_delegate: Sent update: shutdown=(null) for storage1
- Nov 01 05:53:09 [32080] storage1 attrd: debug: attrd_local_callback: update message from crmd: terminate=<null>
- Nov 01 05:53:09 [32080] storage1 attrd: info: find_hash_entry: Creating hash entry for terminate
- Nov 01 05:53:09 [32080] storage1 attrd: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
- Nov 01 05:53:09 [32082] storage1 crmd: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
- Nov 01 05:53:09 [32082] storage1 crmd: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
- Nov 01 05:53:09 [32082] storage1 crmd: debug: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
- Nov 01 05:53:09 [32080] storage1 attrd: debug: attrd_local_callback: update message from crmd: shutdown=<null>
- Nov 01 05:53:09 [32080] storage1 attrd: info: find_hash_entry: Creating hash entry for shutdown
- Nov 01 05:53:09 [32080] storage1 attrd: debug: attrd_local_callback: Supplied: (null), Current: (null), Stored: (null)
- Nov 01 05:53:09 [32082] storage1 crmd: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
- Nov 01 05:53:09 [32080] storage1 attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Nov 01 05:53:09 [32080] storage1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: p_ping (5000)
- Nov 01 05:53:09 [32076] storage1 cib: info: cib_process_replace: Digest matched on replace from storagequorum: 192c557a6e3c64906f4577534aaa8af7
- Nov 01 05:53:09 [32076] storage1 cib: info: cib_process_replace: Replaced 0.144.137 with 0.144.137 from storagequorum
- Nov 01 05:53:09 [32080] storage1 attrd: info: crm_get_peer: Node 2483106826 is now known as storage0
- Nov 01 05:53:09 [32080] storage1 attrd: info: crm_get_peer: Node 2483106826 has uuid 2483106826
- Nov 01 05:53:09 [32080] storage1 attrd: info: attrd_perform_update: Delaying operation p_ping=5000: cib not connected
- Nov 01 05:53:09 [32080] storage1 attrd: info: attrd_perform_update: Delaying operation p_ping=5000: cib not connected
- Nov 01 05:53:09 [32076] storage1 cib: debug: activateCibXml: Triggering CIB write for cib_replace op
- Nov 01 05:53:09 [32080] storage1 attrd: info: find_hash_entry: Creating hash entry for master-p_drbd_drives
- Nov 01 05:53:09 [32080] storage1 attrd: info: attrd_perform_update: Delaying operation master-p_drbd_drives=<null>: cib not connected
- Nov 01 05:53:09 [32080] storage1 attrd: info: find_hash_entry: Creating hash entry for probe_complete
- Nov 01 05:53:09 [32076] storage1 cib: info: crm_get_peer: Node 2483106826 is now known as storage0
- Nov 01 05:53:09 [32080] storage1 attrd: info: attrd_perform_update: Delaying operation probe_complete=<null>: cib not connected
- Nov 01 05:53:09 [32076] storage1 cib: info: crm_get_peer: Node 2483106826 has uuid 2483106826
- Nov 01 05:53:10 [32082] storage1 crmd: debug: erase_xpath_callback: Deletion of "//node_state[@uname='storage1']/transient_attributes": OK (rc=0)
- Nov 01 05:53:10 [32080] storage1 attrd: info: crm_get_peer: Node 402732042 is now known as storagequorum
- Nov 01 05:53:10 [32080] storage1 attrd: info: crm_get_peer: Node 402732042 has uuid 402732042
- Nov 01 05:53:10 [32080] storage1 attrd: info: attrd_perform_update: Delaying operation probe_complete=<null>: cib not connected
- Nov 01 05:53:11 [32080] storage1 attrd: debug: cib_connect: CIB signon attempt 1
- Nov 01 05:53:11 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-32080-15)
- Nov 01 05:53:11 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-32080-15)
- Nov 01 05:53:11 [32080] storage1 attrd: debug: cib_native_signon_raw: Connection to CIB successful
- Nov 01 05:53:11 [32080] storage1 attrd: info: cib_connect: Connected to the CIB after 1 signon attempts
- Nov 01 05:53:11 [32076] storage1 cib: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for attrd (0189c569-a167-4cf0-80bd-f1d3f0d34c37): on
- Nov 01 05:53:11 [32080] storage1 attrd: info: cib_connect: Sending full refresh now that we're connected to the cib
- Nov 01 05:53:11 [32076] storage1 cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='2499884042']//transient_attributes//nvpair[@name='shutdown'] does not exist
- Nov 01 05:53:11 [32080] storage1 attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
- Nov 01 05:53:11 [32076] storage1 cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='2499884042']//transient_attributes//nvpair[@name='p_ping'] does not exist
- Nov 01 05:53:11 [32076] storage1 cib: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
- Nov 01 05:53:11 [32080] storage1 attrd: notice: attrd_perform_update: Sent update 5: p_ping=5000
- Nov 01 05:53:11 [32076] storage1 cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='2499884042']//transient_attributes//nvpair[@name='master-p_drbd_drives'] does not exist
- Nov 01 05:53:11 [32080] storage1 attrd: warning: attrd_cib_callback: Update master-p_drbd_drives=(null) failed: No such device or address
- Nov 01 05:53:11 [32076] storage1 cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='2499884042']//transient_attributes//nvpair[@name='terminate'] does not exist
- Nov 01 05:53:11 [32080] storage1 attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
- Nov 01 05:53:11 [32076] storage1 cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='2499884042']//transient_attributes//nvpair[@name='probe_complete'] does not exist
- Nov 01 05:53:11 [32080] storage1 attrd: warning: attrd_cib_callback: Update probe_complete=(null) failed: No such device or address
- Nov 01 05:53:11 [32080] storage1 attrd: debug: attrd_cib_callback: Update 5 for p_ping=5000 passed
- Nov 01 05:54:01 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-32225-17)
- Nov 01 05:54:01 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-32225-17)
- Nov 01 05:54:01 [32076] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (32076-32225-17)
- Nov 01 05:54:01 [32076] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32076-32225-17) state:2
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32242-33)
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] connecting to client [32242]
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:54:01 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] HUP conn (32039-32242-33)
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32242-33) state:2
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:54:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:54:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-32242-33-header
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-32242-33-header
- Nov 01 05:54:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-32242-33-header
- Nov 01 05:55:01 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-32442-17)
- Nov 01 05:55:01 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-32442-17)
- Nov 01 05:55:01 [32076] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (32076-32442-17)
- Nov 01 05:55:01 [32076] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32076-32442-17) state:2
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32459-33)
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] connecting to client [32459]
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:55:01 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] HUP conn (32039-32459-33)
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32459-33) state:2
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:55:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:55:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-32459-33-header
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-32459-33-header
- Nov 01 05:55:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-32459-33-header
- Nov 01 05:56:01 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-32654-17)
- Nov 01 05:56:01 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-32654-17)
- Nov 01 05:56:01 [32076] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (32076-32654-17)
- Nov 01 05:56:01 [32076] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32076-32654-17) state:2
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-32671-33)
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] connecting to client [32671]
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:56:01 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] HUP conn (32039-32671-33)
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-32671-33) state:2
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:56:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:56:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-32671-33-header
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-32671-33-header
- Nov 01 05:56:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-32671-33-header
- Nov 01 05:57:01 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-402-17)
- Nov 01 05:57:01 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-402-17)
- Nov 01 05:57:01 [32076] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (32076-402-17)
- Nov 01 05:57:01 [32076] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32076-402-17) state:2
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-419-33)
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] connecting to client [419]
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:57:01 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] HUP conn (32039-419-33)
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-419-33) state:2
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:57:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:57:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-419-33-header
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-419-33-header
- Nov 01 05:57:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-419-33-header
- Nov 01 05:58:01 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-619-17)
- Nov 01 05:58:01 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-619-17)
- Nov 01 05:58:01 [32076] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (32076-619-17)
- Nov 01 05:58:01 [32076] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32076-619-17) state:2
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-637-33)
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] connecting to client [637]
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:58:01 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] HUP conn (32039-637-33)
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-637-33) state:2
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:58:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:58:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-637-33-header
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-637-33-header
- Nov 01 05:58:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-637-33-header
- Nov 01 05:59:01 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-836-17)
- Nov 01 05:59:01 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-836-17)
- Nov 01 05:59:01 [32076] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (32076-836-17)
- Nov 01 05:59:01 [32076] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32076-836-17) state:2
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-856-33)
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] connecting to client [856]
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 05:59:01 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] HUP conn (32039-856-33)
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-856-33) state:2
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 05:59:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 05:59:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-856-33-header
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-856-33-header
- Nov 01 05:59:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-856-33-header
- Nov 01 06:00:01 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-1053-17)
- Nov 01 06:00:01 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-1053-17)
- Nov 01 06:00:01 [32076] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (32076-1053-17)
- Nov 01 06:00:01 [32076] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32076-1053-17) state:2
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-1070-33)
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] connecting to client [1070]
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 06:00:01 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] HUP conn (32039-1070-33)
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-1070-33) state:2
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 06:00:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 06:00:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-1070-33-header
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-1070-33-header
- Nov 01 06:00:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-1070-33-header
- Nov 01 06:01:01 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-1277-17)
- Nov 01 06:01:01 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-1277-17)
- Nov 01 06:01:01 [32076] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (32076-1277-17)
- Nov 01 06:01:01 [32076] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32076-1277-17) state:2
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-1294-33)
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] connecting to client [1294]
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 06:01:01 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] HUP conn (32039-1294-33)
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-1294-33) state:2
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 06:01:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 06:01:01 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-1294-33-header
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-1294-33-header
- Nov 01 06:01:01 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-1294-33-header
- Nov 01 06:02:01 [32076] storage1 cib: debug: handle_new_connection: IPC credentials authenticated (32076-1489-17)
- Nov 01 06:02:01 [32076] storage1 cib: debug: qb_ipcs_us_connect: connecting to client (32076-1489-17)
- Nov 01 06:02:01 [32076] storage1 cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (32076-1489-17)
- Nov 01 06:02:01 [32076] storage1 cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(32076-1489-17) state:2
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] IPC credentials authenticated (32039-1506-33)
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] connecting to client [1506]
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] shm size:1048576; real_size:1048576; rb->word_size:262144
- Nov 01 06:02:02 [32038] storage1 corosync debug [MAIN ] connection created
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] lib_init_fn: conn=0x7fb8bc8dbc80
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] HUP conn (32039-1506-33)
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] qb_ipcs_disconnect(32039-1506-33) state:2
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] epoll_ctl(del): Bad file descriptor (9)
- Nov 01 06:02:02 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_closed()
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] exit_fn for conn=0x7fb8bc8dbc80
- Nov 01 06:02:02 [32038] storage1 corosync debug [MAIN ] cs_ipcs_connection_destroyed()
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-response-32039-1506-33-header
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-event-32039-1506-33-header
- Nov 01 06:02:02 [32038] storage1 corosync debug [QB ] Free'ing ringbuffer: /dev/shm/qb-cmap-request-32039-1506-33-header