- On cluster1, /var/log/messages:
- Jul 2 21:23:43 cluster1 cibadmin[26066]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin -c -R --xml-text <constraints><rsc_colocation id="colocation-FirewallVM-FirewallVMDiskClone-INFINITY" rsc="FirewallVM" score="INFINITY" with-rsc="FirewallVMDiskClone" with-rsc-role="Master"/></constraints>
- Jul 2 21:23:48 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:23:57 cluster1 cibadmin[26319]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin -o constraints -R --xml-text <constraints>#012 <rsc_colocation id="colocation-FirewallVM-FirewallVMDiskClone-INFINITY" rsc="FirewallVM" score="INFINITY" with-rsc="FirewallVMDiskClone" with-rsc-role="Master"/>#012<rsc_order first="FirewallVMDiskClone" first-action="promote" id="order-FirewallVMDiskClone-FirewallVM-mandatory" then="FirewallVM" then-action="start"/></constraints>
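For readability, here is the constraints payload from the 21:23:57 cibadmin call above, with syslog's `#012` newline escapes expanded back into actual newlines (nothing added, just reindented):

```xml
<constraints>
  <!-- FirewallVM must run where the DRBD master is -->
  <rsc_colocation id="colocation-FirewallVM-FirewallVMDiskClone-INFINITY"
                  rsc="FirewallVM" score="INFINITY"
                  with-rsc="FirewallVMDiskClone" with-rsc-role="Master"/>
  <!-- ...and may only start after the clone has been promoted -->
  <rsc_order first="FirewallVMDiskClone" first-action="promote"
             id="order-FirewallVMDiskClone-FirewallVM-mandatory"
             then="FirewallVM" then-action="start"/>
</constraints>
```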
- Jul 2 21:24:09 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:24:20 cluster1 cibadmin[27540]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin --replace -o configuration -V -X <cib admin_epoch="0" cib-last-written="Wed Jul 2 19:07:10 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="19" have-quorum="1" num_updates="1" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">#012 <configuration>#012 <crm_config>#012 <cluster_property_set id="cib-bootstrap-options">#012 <nvpair id="cib-bootstrap-options-dc-version" name="d
- Jul 2 21:24:30 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:24:51 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:25:12 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:25:33 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:25:54 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:26:15 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:26:36 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:26:57 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:27:16 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:27:18 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:27:21 cluster1 rsyslogd-2177: imuxsock lost 2 messages from pid 14829 due to rate-limiting
- Jul 2 21:27:26 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:27:31 cluster1 rsyslogd-2177: imuxsock lost 2 messages from pid 14829 due to rate-limiting
- Jul 2 21:27:36 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:27:39 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:27:41 cluster1 rsyslogd-2177: imuxsock lost 2 messages from pid 14829 due to rate-limiting
- Jul 2 21:27:46 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:27:51 cluster1 rsyslogd-2177: imuxsock lost 2 messages from pid 14829 due to rate-limiting
- Jul 2 21:27:59 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:28:21 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:28:31 cluster1 kernel: ACPI Error: SMBus or IPMI write requires Buffer of length 42, found length 20 (20090903/exfield-286)
- Jul 2 21:28:31 cluster1 kernel: ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PMI0._PMM] (Node ffff88031ad476c8), AE_AML_BUFFER_LIMIT
- Jul 2 21:28:31 cluster1 kernel: ACPI Exception: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20090903/power_meter-341)
- Jul 2 21:28:42 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:29:03 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:29:16 cluster1 cibadmin[6209]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin --replace --xml-file firewall_cfg
- Jul 2 21:29:24 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:29:34 cluster1 cibadmin[7437]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin --replace --xml-file firewall_cfg
- Jul 2 21:29:34 cluster1 crmd[16343]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Jul 2 21:29:34 cluster1 cib[16338]: warning: cib_process_diff: Diff 0.14.5 -> 0.21.1 from cluster2.verolengo.privatelan not applied to 0.14.5: Failed application of an update diff
- Jul 2 21:29:34 cluster1 cib[16338]: notice: cib_server_process_diff: Not applying diff 0.21.1 -> 0.21.2 (sync in progress)
- Jul 2 21:29:34 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.14.5 -> 0.21.2 from local not applied to 0.14.5: Failed application of an update diff
- Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
- Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: stonith_device_register: Added 'ilocluster1' to the device list (3 active devices)
- Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'ilocluster2' already existed in device list (3 active devices)
- Jul 2 21:29:35 cluster1 cib[16338]: warning: cib_process_diff: Diff 0.21.2 -> 0.21.3 from cluster2.verolengo.privatelan not applied to 0.21.2: Failed application of an update diff
- Jul 2 21:29:35 cluster1 cib[16338]: notice: cib_server_process_diff: Not applying diff 0.21.3 -> 0.21.4 (sync in progress)
- Jul 2 21:29:35 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.21.2 -> 0.21.4 from local not applied to 0.21.2: Failed application of an update diff
- Jul 2 21:29:35 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:29:36 cluster1 attrd[16341]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Jul 2 21:29:36 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Jul 2 21:29:36 cluster1 crmd[16343]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
- Jul 2 21:29:36 cluster1 cib[16338]: warning: cib_process_diff: Diff 0.21.4 -> 0.21.5 from cluster2.verolengo.privatelan not applied to 0.21.4: Failed application of an update diff
- Jul 2 21:29:36 cluster1 cib[16338]: notice: cib_server_process_diff: Not applying diff 0.21.5 -> 0.21.6 (sync in progress)
- Jul 2 21:29:36 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.21.4 -> 0.21.6 from local not applied to 0.21.4: Failed application of an update diff
- Jul 2 21:29:36 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:29:36 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_monitor_0 (call=59, rc=7, cib-update=36, confirmed=true) not running
- Jul 2 21:29:38 cluster1 crm_resource[7519]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:38 cluster1 crm_resource[7521]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 2 21:29:38 cluster1 crm_resource[7529]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:38 cluster1 crm_resource[7531]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
- Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVM_monitor_0 (call=63, rc=7, cib-update=37, confirmed=true) not running
- Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Added 'ilocluster1' to the device list (3 active devices)
- Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'ilocluster2' already existed in device list (3 active devices)
- Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
- Jul 2 21:29:38 cluster1 kernel: drbd: events: mcg drbd: 2
- Jul 2 21:29:38 cluster1 kernel: drbd: initialized. Version: 8.4.5 (api:1/proto:86-101)
- Jul 2 21:29:38 cluster1 kernel: drbd: GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by manutenzione@buildhost.local, 2014-06-23 02:06:26
- Jul 2 21:29:38 cluster1 kernel: drbd: registered as block device major 147
- Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Added 'ilocluster1' to the device list (3 active devices)
- Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'ilocluster2' already existed in device list (3 active devices)
- Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: Starting worker thread (from drbdsetup-84 [7653])
- Jul 2 21:29:38 cluster1 kernel: block drbd0: disk( Diskless -> Attaching )
- Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: Method to ensure write ordering: drain
- Jul 2 21:29:38 cluster1 kernel: block drbd0: max BIO size = 1048576
- Jul 2 21:29:38 cluster1 kernel: block drbd0: drbd_bm_resize called with capacity == 104854328
- Jul 2 21:29:38 cluster1 kernel: block drbd0: resync bitmap: bits=13106791 words=204794 pages=400
- Jul 2 21:29:38 cluster1 kernel: block drbd0: size = 50 GB (52427164 KB)
- Jul 2 21:29:38 cluster1 kernel: block drbd0: recounting of set bits took additional 3 jiffies
- Jul 2 21:29:38 cluster1 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
- Jul 2 21:29:38 cluster1 kernel: block drbd0: disk( Attaching -> Consistent )
- Jul 2 21:29:38 cluster1 kernel: block drbd0: attached to UUIDs C8E866487C11E78E:0000000000000000:B67C3AAE2BFFED00:0000000000000004
- Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: conn( StandAlone -> Unconnected )
- Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: Starting receiver thread (from drbd_w_firewall [7654])
- Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: receiver (re)started
- Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: conn( Unconnected -> WFConnection )
- Jul 2 21:29:38 cluster1 crm_node[7692]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:38 cluster1 crm_attribute[7693]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:38 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (5)
- Jul 2 21:29:38 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 177: master-FirewallVMDisk=5
- Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_start_0 (call=67, rc=0, cib-update=38, confirmed=true) ok
- Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=70, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=73, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:39 cluster1 kernel: drbd firewall_vm: Handshake successful: Agreed network protocol version 101
- Jul 2 21:29:39 cluster1 kernel: drbd firewall_vm: Agreed to support TRIM on protocol level
- Jul 2 21:29:39 cluster1 kernel: drbd firewall_vm: conn( WFConnection -> WFReportParams )
- Jul 2 21:29:39 cluster1 kernel: drbd firewall_vm: Starting asender thread (from drbd_r_firewall [7677])
- Jul 2 21:29:39 cluster1 kernel: block drbd0: drbd_sync_handshake:
- Jul 2 21:29:39 cluster1 kernel: block drbd0: self C8E866487C11E78E:0000000000000000:B67C3AAE2BFFED00:0000000000000004 bits:0 flags:0
- Jul 2 21:29:39 cluster1 kernel: block drbd0: peer C8E866487C11E78E:0000000000000000:B67C3AAE2BFFED01:0000000000000004 bits:0 flags:0
- Jul 2 21:29:39 cluster1 kernel: block drbd0: uuid_compare()=0 by rule 40
- Jul 2 21:29:39 cluster1 kernel: block drbd0: peer( Unknown -> Secondary ) conn( WFReportParams -> Connected ) disk( Consistent -> UpToDate ) pdsk( DUnknown -> UpToDate )
- Jul 2 21:29:44 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:29:54 cluster1 crm_node[8043]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:54 cluster1 crm_attribute[8044]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (10000)
- Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 183: master-FirewallVMDisk=10000
- Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 185: master-FirewallVMDisk=10000
- Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=76, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=79, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:54 cluster1 crm_node[8104]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:54 cluster1 crm_attribute[8105]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=82, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=85, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: peer( Secondary -> Unknown ) conn( Connected -> TearDown ) pdsk( UpToDate -> DUnknown )
- Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: asender terminated
- Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: Terminating drbd_a_firewall
- Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: Connection closed
- Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: conn( TearDown -> Unconnected )
- Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: receiver terminated
- Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: Restarting receiver thread
- Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: receiver (re)started
- Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: conn( Unconnected -> WFConnection )
- Jul 2 21:29:54 cluster1 cib[16338]: warning: cib_process_diff: Diff 0.23.14 -> 0.23.15 from cluster2.verolengo.privatelan not applied to 0.23.14: Failed application of an update diff
- Jul 2 21:29:54 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.23.14 -> 0.23.15 from local not applied to 0.23.14: Failed application of an update diff
- Jul 2 21:29:54 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:29:54 cluster1 crm_node[8166]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:54 cluster1 crm_attribute[8167]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (1000)
- Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 201: master-FirewallVMDisk=1000
- Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=88, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:55 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=91, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:55 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=94, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:55 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=97, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:29:55 cluster1 kernel: drbd firewall_vm: helper command: /sbin/drbdadm fence-peer firewall_vm
- Jul 2 21:29:55 cluster1 rhcs_fence: Attempting to fence peer using RHCS from DRBD...
- Jul 2 21:29:55 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:29:55 cluster1 kernel: drbd firewall_vm: Handshake successful: Agreed network protocol version 101
- Jul 2 21:29:55 cluster1 kernel: drbd firewall_vm: Agreed to support TRIM on protocol level
- Jul 2 21:29:55 cluster1 fence_pcmk[8330]: Requesting Pacemaker fence cluster2.verolengo.privatelan (reset)
- Jul 2 21:29:55 cluster1 stonith_admin[8331]: notice: crm_log_args: Invoked: stonith_admin --reboot cluster2.verolengo.privatelan --tolerance 5s --tag cman
- Jul 2 21:29:55 cluster1 stonith-ng[16339]: notice: handle_request: Client stonith_admin.cman.8331.09c220bd wants to fence (reboot) 'cluster2.verolengo.privatelan' with device '(any)'
- Jul 2 21:29:55 cluster1 stonith-ng[16339]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for cluster2.verolengo.privatelan: 431c73e1-feab-4f4a-b34d-5da097144e67 (0)
- Jul 2 21:29:58 cluster1 rsyslogd-2177: imuxsock lost 143 messages from pid 14829 due to rate-limiting
- Jul 2 21:30:04 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:30:15 cluster1 lrmd[16340]: warning: child_timeout_callback: FirewallVMDisk_promote_0 process (PID 8278) timed out
- Jul 2 21:30:15 cluster1 lrmd[16340]: warning: operation_finished: FirewallVMDisk_promote_0:8278 - timed out after 20000ms
- Jul 2 21:30:15 cluster1 crmd[16343]: error: process_lrm_event: LRM operation FirewallVMDisk_promote_0 (100) Timed Out (timeout=20000ms)
- Jul 2 21:30:25 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:30:45 cluster1 crmd[16343]: warning: cib_rsc_callback: Resource update 39 failed: (rc=-62) Timer expired
- Jul 2 21:30:46 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:31:07 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:31:26 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:31:35 cluster1 corosync[14829]: [TOTEM ] A processor failed, forming new configuration.
- Jul 2 21:31:47 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:32:09 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:32:30 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:32:35 cluster1 corosync[14829]: [QUORUM] Members[1]: 1
- Jul 2 21:32:35 cluster1 corosync[14829]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
- Jul 2 21:32:35 cluster1 kernel: dlm: closing connection to node 2
- Jul 2 21:32:35 cluster1 crmd[16343]: notice: crm_update_peer_state: cman_event_callback: Node cluster2.verolengo.privatelan[2] - state is now lost (was member)
- Jul 2 21:32:35 cluster1 crmd[16343]: warning: reap_dead_nodes: Our DC node (cluster2.verolengo.privatelan) left the cluster
- Jul 2 21:32:35 cluster1 corosync[14829]: [CPG ] chosen downlist: sender r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) ; members(old:2 left:1)
- Jul 2 21:32:35 cluster1 corosync[14829]: [MAIN ] Completed service synchronization, ready to provide service.
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:35 cluster1 crmd[16343]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=reap_dead_nodes ]
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: error: remote_op_done: Operation reboot of cluster2.verolengo.privatelan by cluster1.verolengo.privatelan for stonith_admin.cman.8331@cluster1.verolengo.privatelan.431c73e1: Timer expired
- Jul 2 21:32:35 cluster1 crmd[16343]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
- Jul 2 21:32:35 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:32:35 cluster1 crmd[16343]: notice: tengine_stonith_notify: Peer cluster2.verolengo.privatelan was not terminated (reboot) by cluster1.verolengo.privatelan for cluster1.verolengo.privatelan: Timer expired (ref=431c73e1-feab-4f4a-b34d-5da097144e67) by client stonith_admin.cman.8331
- Jul 2 21:32:35 cluster1 fence_pcmk[8330]: Call to fence cluster2.verolengo.privatelan (reset) failed with rc=194
- Jul 2 21:32:35 cluster1 fence_node[8314]: fence cluster2.verolengo.privatelan failed
- Jul 2 21:32:35 cluster1 kernel: drbd firewall_vm: helper command: /sbin/drbdadm fence-peer firewall_vm exit code 1 (0x100)
- Jul 2 21:32:35 cluster1 kernel: drbd firewall_vm: fence-peer helper broken, returned 1
- Jul 2 21:32:35 cluster1 kernel: drbd firewall_vm: helper command: /sbin/drbdadm fence-peer firewall_vm
- Jul 2 21:32:35 cluster1 rhcs_fence: Attempting to fence peer using RHCS from DRBD...
- Jul 2 21:32:35 cluster1 fence_pcmk[14463]: Requesting Pacemaker fence cluster2.verolengo.privatelan (reset)
- Jul 2 21:32:35 cluster1 stonith_admin[14464]: notice: crm_log_args: Invoked: stonith_admin --reboot cluster2.verolengo.privatelan --tolerance 5s --tag cman
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: handle_request: Client stonith_admin.cman.14464.c3d3c0ba wants to fence (reboot) 'cluster2.verolengo.privatelan' with device '(any)'
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for cluster2.verolengo.privatelan: f8cd3a1a-31eb-4f72-b4bc-e5d3ff2df420 (0)
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:37 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.23.19 -> 0.23.20 from local not applied to 0.23.19: Failed application of an update diff
- Jul 2 21:32:37 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:32:37 cluster1 attrd[16341]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Jul 2 21:32:37 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (1000)
- Jul 2 21:32:37 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:32:37 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Jul 2 21:32:37 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:32:37 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:32:38 cluster1 rsyslogd-2177: imuxsock lost 300 messages from pid 14829 due to rate-limiting
- Jul 2 21:32:38 cluster1 pengine[16342]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: pe_fence_node: Node cluster2.verolengo.privatelan will be fenced because the node is no longer part of the cluster
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: determine_online_status: Node cluster2.verolengo.privatelan is unclean
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:0 on cluster2.verolengo.privatelan: unknown error (1)
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:1 on cluster1.verolengo.privatelan: unknown error (1)
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action ilocluster1_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action FirewallVMDisk:0_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action FirewallVMDisk:0_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action FirewallVMDisk:0_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action FirewallVMDisk:0_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: stage6: Scheduling Node cluster2.verolengo.privatelan for STONITH
- Jul 2 21:32:38 cluster1 pengine[16342]: notice: LogActions: Move ilocluster1#011(Started cluster2.verolengo.privatelan -> cluster1.verolengo.privatelan)
- Jul 2 21:32:38 cluster1 pengine[16342]: notice: LogActions: Stop FirewallVMDisk:0#011(cluster2.verolengo.privatelan)
- Jul 2 21:32:38 cluster1 pengine[16342]: notice: LogActions: Demote FirewallVMDisk:1#011(Master -> Slave cluster1.verolengo.privatelan)
- Jul 2 21:32:38 cluster1 pengine[16342]: notice: LogActions: Recover FirewallVMDisk:1#011(Master cluster1.verolengo.privatelan)
- Jul 2 21:32:38 cluster1 pengine[16342]: warning: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-warn-0.bz2
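The policy-engine input saved above can be replayed offline to see exactly why this transition scheduled the STONITH, the demote and the recover. A sketch, assuming the pe-warn file is still present on cluster1:

```shell
# Replay the saved PE input and print the planned actions;
# -S simulates the transition, -s also shows allocation scores.
crm_simulate -S -s -x /var/lib/pacemaker/pengine/pe-warn-0.bz2
```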
- Jul 2 21:32:38 cluster1 crmd[16343]: notice: te_fence_node: Executing off fencing operation (43) on cluster2.verolengo.privatelan (timeout=60000)
- Jul 2 21:32:38 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 52: notify FirewallVMDisk_pre_notify_demote_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: handle_request: Client crmd.16343.9d15f38a wants to fence (off) 'cluster2.verolengo.privatelan' with device '(any)'
- Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: initiate_remote_stonith_op: Initiating remote operation off for cluster2.verolengo.privatelan: 01c5ed1f-c015-4109-879b-ac87bf1efe8c (0)
- Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:32:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=103, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:32:38 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 15: demote FirewallVMDisk_demote_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:32:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_demote_0 (call=106, rc=0, cib-update=57, confirmed=true) ok
- Jul 2 21:32:38 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 53: notify FirewallVMDisk_post_notify_demote_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:32:39 cluster1 crm_node[14545]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:32:39 cluster1 crm_attribute[14546]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:32:39 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=109, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:32:39 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 51: notify FirewallVMDisk_pre_notify_stop_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:32:39 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=112, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:32:39 cluster1 stonith-ng[16339]: error: log_operation: Operation 'reboot' [14469] (call 2 from stonith_admin.cman.14464) for host 'cluster2.verolengo.privatelan' with device 'pdu1' returned: 1 (Operation not permitted). Trying: ilocluster2
- Jul 2 21:32:39 cluster1 stonith-ng[16339]: warning: log_operation: pdu1:14469 [ Parse error: Ignoring unknown option 'nodename=cluster2.verolengo.privatelan' ]
- Jul 2 21:32:39 cluster1 stonith-ng[16339]: warning: log_operation: pdu1:14469 [ Failed: Unable to obtain correct plug status or plug is not available ]
- Jul 2 21:32:42 cluster1 stonith-ng[16339]: error: log_operation: Operation 'off' [14581] (call 2 from crmd.16343) for host 'cluster2.verolengo.privatelan' with device 'pdu1' returned: 1 (Operation not permitted). Trying: ilocluster2
- Jul 2 21:32:42 cluster1 stonith-ng[16339]: warning: log_operation: pdu1:14581 [ Parse error: Ignoring unknown option 'nodename=cluster2.verolengo.privatelan' ]
- Jul 2 21:32:42 cluster1 stonith-ng[16339]: warning: log_operation: pdu1:14581 [ Failed: Unable to obtain correct plug status or plug is not available ]
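The pdu1 failures above ("Ignoring unknown option 'nodename=...'" followed by "Unable to obtain correct plug status") suggest the PDU stonith device has no node-name-to-outlet mapping, so stonith-ng passes the raw node name and the fence agent cannot resolve a plug. A possible fix, sketched with pcs — the outlet numbers 1 and 2 are assumptions, substitute the real PDU outlets:

```shell
# Map each cluster node name to its PDU outlet so the agent receives
# a valid plug/port instead of an unrecognized 'nodename' option.
pcs stonith update pdu1 \
  pcmk_host_map="cluster1.verolengo.privatelan:1;cluster2.verolengo.privatelan:2"
```

With the map in place, pdu1 should either fence cleanly or be skipped quickly, instead of burning part of the fencing timeout before ilocluster2 is tried.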
- Jul 2 21:32:42 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:32:44 cluster1 rsyslogd-2177: imuxsock lost 42 messages from pid 14829 due to rate-limiting
- Jul 2 21:32:48 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:32:49 cluster1 kernel: netxen_nic: eth1 NIC Link is down
- Jul 2 21:32:49 cluster1 kernel: bonding: bond2: link status definitely down for interface eth1, disabling it
- Jul 2 21:32:50 cluster1 kernel: e1000e: eth3 NIC Link is Down
- Jul 2 21:32:50 cluster1 kernel: e1000e: eth4 NIC Link is Down
- Jul 2 21:32:50 cluster1 kernel: bonding: bond1: link status definitely down for interface eth3, disabling it
- Jul 2 21:32:50 cluster1 kernel: bonding: bond1: now running without any active interface !
- Jul 2 21:32:50 cluster1 kernel: bonding: bond1: link status definitely down for interface eth4, disabling it
- Jul 2 21:32:50 cluster1 stonith-ng[16339]: notice: log_operation: Operation 'reboot' [14578] (call 2 from stonith_admin.cman.14464) for host 'cluster2.verolengo.privatelan' with device 'ilocluster2' returned: 0 (OK)
- Jul 2 21:32:50 cluster1 stonith-ng[16339]: notice: remote_op_done: Operation reboot of cluster2.verolengo.privatelan by cluster1.verolengo.privatelan for stonith_admin.cman.14464@cluster1.verolengo.privatelan.f8cd3a1a: OK
- Jul 2 21:32:50 cluster1 crmd[16343]: notice: tengine_stonith_notify: Peer cluster2.verolengo.privatelan was terminated (reboot) by cluster1.verolengo.privatelan for cluster1.verolengo.privatelan: OK (ref=f8cd3a1a-31eb-4f72-b4bc-e5d3ff2df420) by client stonith_admin.cman.14464
- Jul 2 21:32:50 cluster1 crmd[16343]: notice: tengine_stonith_notify: Notified CMAN that 'cluster2.verolengo.privatelan' is now fenced
- Jul 2 21:32:50 cluster1 rsyslogd-2177: imuxsock lost 27 messages from pid 14829 due to rate-limiting
- Jul 2 21:32:50 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.23.25 -> 0.23.26 from local not applied to 0.23.25: Failed application of an update diff
- Jul 2 21:32:50 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:32:50 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:32:50 cluster1 fence_node[14447]: fence cluster2.verolengo.privatelan success
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: helper command: /sbin/drbdadm fence-peer firewall_vm exit code 7 (0x700)
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: fence-peer helper returned 7 (peer was stonithed)
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: pdsk( DUnknown -> Outdated )
- Jul 2 21:32:50 cluster1 kernel: block drbd0: role( Secondary -> Primary )
- Jul 2 21:32:50 cluster1 kernel: block drbd0: new current UUID 0BCECDF42C97B34F:C8E866487C11E78E:B67C3AAE2BFFED00:0000000000000004
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: conn( WFConnection -> WFReportParams )
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: Starting asender thread (from drbd_r_firewall [7677])
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: meta connection shut down by peer.
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: conn( WFReportParams -> NetworkFailure )
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: asender terminated
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: Terminating drbd_a_firewall
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: Connection closed
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: conn( NetworkFailure -> Unconnected )
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: receiver terminated
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: Restarting receiver thread
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: receiver (re)started
- Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: conn( Unconnected -> WFConnection )
- Jul 2 21:32:50 cluster1 kernel: netxen_nic: eth0 NIC Link is down
- Jul 2 21:32:50 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
- Jul 2 21:32:51 cluster1 kernel: bonding: bond2: link status definitely down for interface eth0, disabling it
- Jul 2 21:32:51 cluster1 kernel: bonding: bond2: now running without any active interface !
- Jul 2 21:32:53 cluster1 cmanicd: Entering iml_log_link_down(slot: 5, port: 2)
- Jul 2 21:32:53 cluster1 cmanicd: Entering log_iml_event(slot: 5, port: 2, code: (Down,2))
- Jul 2 21:32:53 cluster1 cmanicd: Entering get_event_id(slot: 5, port: 2
- Jul 2 21:32:53 cluster1 cmanicd: Existing event id(29) found for the slot and port.
- Jul 2 21:32:53 cluster1 cmanicd: Entering read_iml_event(slot: 5, port: 2, eventid: 29)
- Jul 2 21:32:53 cluster1 cmanicd: Calling ioctl() to read event id: 29)
- Jul 2 21:32:53 cluster1 cmanicd: Successfully read the event id: 29)
- Jul 2 21:32:53 cluster1 cmanicd: Trying to modify the existing IML Event.
- Jul 2 21:32:53 cluster1 cmanicd: Successfully updated the existing IML Event.
- Jul 2 21:32:53 cluster1 cmanicd: Returning from log_iml_event().
- Jul 2 21:32:54 cluster1 cmanicd: Entering iml_log_link_down(slot: 6, port: 2)
- Jul 2 21:32:54 cluster1 cmanicd: Entering log_iml_event(slot: 6, port: 2, code: (Down,2))
- Jul 2 21:32:54 cluster1 cmanicd: Entering get_event_id(slot: 6, port: 2
- Jul 2 21:32:54 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:32:54 cluster1 cmanicd: Existing event id(27) found for the slot and port.
- Jul 2 21:32:54 cluster1 cmanicd: Entering read_iml_event(slot: 6, port: 2, eventid: 27)
- Jul 2 21:32:54 cluster1 cmanicd: Calling ioctl() to read event id: 27)
- Jul 2 21:32:54 cluster1 cmanicd: Successfully read the event id: 27)
- Jul 2 21:32:54 cluster1 cmanicd: Trying to modify the existing IML Event.
- Jul 2 21:32:54 cluster1 cmanicd: Successfully updated the existing IML Event.
- Jul 2 21:32:54 cluster1 cmanicd: Returning from log_iml_event().
- Jul 2 21:32:54 cluster1 stonith-ng[16339]: notice: log_operation: Operation 'off' [15566] (call 2 from crmd.16343) for host 'cluster2.verolengo.privatelan' with device 'ilocluster2' returned: 0 (OK)
- Jul 2 21:32:54 cluster1 stonith-ng[16339]: notice: remote_op_done: Operation off of cluster2.verolengo.privatelan by cluster1.verolengo.privatelan for crmd.16343@cluster1.verolengo.privatelan.01c5ed1f: OK
- Jul 2 21:32:55 cluster1 crmd[16343]: notice: tengine_stonith_callback: Stonith operation 2/43:0:0:e00f9cce-2413-4314-aa71-d67a9a71ebc8: OK (0)
- Jul 2 21:32:55 cluster1 crmd[16343]: notice: tengine_stonith_notify: Peer cluster2.verolengo.privatelan was terminated (off) by cluster1.verolengo.privatelan for cluster1.verolengo.privatelan: OK (ref=01c5ed1f-c015-4109-879b-ac87bf1efe8c) by client crmd.16343
- Jul 2 21:32:55 cluster1 crmd[16343]: notice: tengine_stonith_notify: Notified CMAN that 'cluster2.verolengo.privatelan' is now fenced
- Jul 2 21:32:55 cluster1 crmd[16343]: notice: run_graph: Transition 0 (Complete=15, Pending=0, Fired=0, Skipped=13, Incomplete=7, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
- Jul 2 21:32:55 cluster1 pengine[16342]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 2 21:32:55 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:0 on cluster1.verolengo.privatelan: unknown error (1)
- Jul 2 21:32:55 cluster1 pengine[16342]: notice: LogActions: Start ilocluster1#011(cluster1.verolengo.privatelan)
- Jul 2 21:32:55 cluster1 pengine[16342]: notice: LogActions: Recover FirewallVMDisk:0#011(Slave cluster1.verolengo.privatelan)
- Jul 2 21:32:55 cluster1 pengine[16342]: notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-0.bz2
- Jul 2 21:32:55 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 9: start ilocluster1_start_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:32:55 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 46: notify FirewallVMDisk_pre_notify_stop_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:32:55 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'ilocluster1' already existed in device list (3 active devices)
- Jul 2 21:32:55 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=117, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:32:55 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 1: stop FirewallVMDisk_stop_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:32:55 cluster1 drbd(FirewallVMDisk)[17894]: WARNING: firewall_vm still Primary, demoting.
- Jul 2 21:32:55 cluster1 kernel: block drbd0: role( Primary -> Secondary )
- Jul 2 21:32:55 cluster1 kernel: block drbd0: bitmap WRITE of 0 pages took 0 jiffies
- Jul 2 21:32:55 cluster1 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
- Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: conn( WFConnection -> Disconnecting )
- Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: Discarding network configuration.
- Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: Connection closed
- Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: conn( Disconnecting -> StandAlone )
- Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: receiver terminated
- Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: Terminating drbd_r_firewall
- Jul 2 21:32:55 cluster1 kernel: block drbd0: disk( UpToDate -> Failed )
- Jul 2 21:32:55 cluster1 kernel: block drbd0: bitmap WRITE of 0 pages took 0 jiffies
- Jul 2 21:32:55 cluster1 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
- Jul 2 21:32:55 cluster1 kernel: block drbd0: disk( Failed -> Diskless )
- Jul 2 21:32:55 cluster1 kernel: block drbd0: drbd_bm_resize called with capacity == 0
- Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: Terminating drbd_w_firewall
- Jul 2 21:32:55 cluster1 udevd-work[3677]: error opening ATTR{/sys/devices/virtual/block/drbd0/queue/iosched/slice_idle} for writing: No such file or directory
- Jul 2 21:32:55 cluster1 udevd-work[3677]: error opening ATTR{/sys/devices/virtual/block/drbd0/queue/iosched/quantum} for writing: No such file or directory
- Jul 2 21:32:56 cluster1 cmanicd: Entering iml_log_link_down(slot: 6, port: 3)
- Jul 2 21:32:56 cluster1 cmanicd: Entering log_iml_event(slot: 6, port: 3, code: (Down,2))
- Jul 2 21:32:56 cluster1 cmanicd: Entering get_event_id(slot: 6, port: 3
- Jul 2 21:32:56 cluster1 crm_node[18699]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:32:56 cluster1 rsyslogd-2177: imuxsock lost 102 messages from pid 14829 due to rate-limiting
- Jul 2 21:32:56 cluster1 crm_attribute[18700]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:32:56 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (<null>)
- Jul 2 21:32:56 cluster1 attrd[16341]: notice: attrd_perform_update: Sent delete 209: node=cluster1.verolengo.privatelan, attr=master-FirewallVMDisk, id=<n/a>, set=(null), section=status
- Jul 2 21:32:56 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.23.27 -> 0.23.28 from local not applied to 0.23.27: Failed application of an update diff
- Jul 2 21:32:56 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 2 21:32:56 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_stop_0 (call=120, rc=0, cib-update=68, confirmed=true) ok
- Jul 2 21:32:56 cluster1 cmanicd: Existing event id(28) found for the slot and port.
- Jul 2 21:32:56 cluster1 cmanicd: Entering read_iml_event(slot: 6, port: 3, eventid: 28)
- Jul 2 21:32:56 cluster1 cmanicd: Calling ioctl() to read event id: 28)
- Jul 2 21:32:56 cluster1 cmanicd: Successfully read the event id: 28)
- Jul 2 21:32:56 cluster1 cmanicd: Trying to modify the existing IML Event.
- Jul 2 21:32:56 cluster1 cmanicd: Successfully updated the existing IML Event.
- Jul 2 21:32:56 cluster1 cmanicd: Returning from log_iml_event().
- Jul 2 21:33:00 cluster1 cmanicd: Entering iml_log_link_down(slot: 5, port: 1)
- Jul 2 21:33:00 cluster1 cmanicd: Entering log_iml_event(slot: 5, port: 1, code: (Down,2))
- Jul 2 21:33:00 cluster1 cmanicd: Entering get_event_id(slot: 5, port: 1
- Jul 2 21:33:00 cluster1 cmanicd: Existing event id(30) found for the slot and port.
- Jul 2 21:33:00 cluster1 cmanicd: Entering read_iml_event(slot: 5, port: 1, eventid: 30)
- Jul 2 21:33:00 cluster1 cmanicd: Calling ioctl() to read event id: 30)
- Jul 2 21:33:00 cluster1 cmanicd: Successfully read the event id: 30)
- Jul 2 21:33:00 cluster1 cmanicd: Trying to modify the existing IML Event.
- Jul 2 21:33:00 cluster1 cmanicd: Successfully updated the existing IML Event.
- Jul 2 21:33:00 cluster1 cmanicd: Returning from log_iml_event().
- Jul 2 21:33:00 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:33:01 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation ilocluster1_start_0 (call=115, rc=0, cib-update=69, confirmed=true) ok
- Jul 2 21:33:01 cluster1 crmd[16343]: notice: run_graph: Transition 1 (Complete=9, Pending=0, Fired=0, Skipped=7, Incomplete=4, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Stopped
- Jul 2 21:33:01 cluster1 pengine[16342]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 2 21:33:01 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:0 on cluster1.verolengo.privatelan: unknown error (1)
- Jul 2 21:33:01 cluster1 pengine[16342]: notice: LogActions: Start FirewallVMDisk:0#011(cluster1.verolengo.privatelan)
- Jul 2 21:33:01 cluster1 pengine[16342]: notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-1.bz2
- Jul 2 21:33:01 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 10: monitor ilocluster1_monitor_60000 on cluster1.verolengo.privatelan (local)
- Jul 2 21:33:01 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 13: start FirewallVMDisk_start_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: Starting worker thread (from drbdsetup-84 [19730])
- Jul 2 21:33:01 cluster1 kernel: block drbd0: disk( Diskless -> Attaching )
- Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: Method to ensure write ordering: drain
- Jul 2 21:33:01 cluster1 kernel: block drbd0: max BIO size = 1048576
- Jul 2 21:33:01 cluster1 kernel: block drbd0: drbd_bm_resize called with capacity == 104854328
- Jul 2 21:33:01 cluster1 kernel: block drbd0: resync bitmap: bits=13106791 words=204794 pages=400
- Jul 2 21:33:01 cluster1 kernel: block drbd0: size = 50 GB (52427164 KB)
- Jul 2 21:33:01 cluster1 kernel: block drbd0: recounting of set bits took additional 3 jiffies
- Jul 2 21:33:01 cluster1 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
- Jul 2 21:33:01 cluster1 kernel: block drbd0: disk( Attaching -> UpToDate ) pdsk( DUnknown -> Outdated )
- Jul 2 21:33:01 cluster1 kernel: block drbd0: attached to UUIDs 0BCECDF42C97B34F:C8E866487C11E78E:B67C3AAE2BFFED00:0000000000000004
- Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: conn( StandAlone -> Unconnected )
- Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: Starting receiver thread (from drbd_w_firewall [19732])
- Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: receiver (re)started
- Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: conn( Unconnected -> WFConnection )
- Jul 2 21:33:01 cluster1 crm_node[19771]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:33:01 cluster1 crm_attribute[19772]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:33:01 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (10000)
- Jul 2 21:33:01 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 213: master-FirewallVMDisk=10000
- Jul 2 21:33:01 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_start_0 (call=126, rc=0, cib-update=71, confirmed=true) ok
- Jul 2 21:33:01 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 45: notify FirewallVMDisk_post_notify_start_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:33:01 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=129, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:33:02 cluster1 rsyslogd-2177: imuxsock lost 57 messages from pid 14829 due to rate-limiting
- Jul 2 21:33:05 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation ilocluster1_monitor_60000 (call=124, rc=0, cib-update=72, confirmed=false) ok
- Jul 2 21:33:05 cluster1 crmd[16343]: notice: run_graph: Transition 2 (Complete=9, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Stopped
- Jul 2 21:33:05 cluster1 pengine[16342]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 2 21:33:05 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:0 on cluster1.verolengo.privatelan: unknown error (1)
- Jul 2 21:33:05 cluster1 fenced[14883]: fencing node cluster2.verolengo.privatelan
- Jul 2 21:33:05 cluster1 pengine[16342]: notice: LogActions: Promote FirewallVMDisk:0#011(Slave -> Master cluster1.verolengo.privatelan)
- Jul 2 21:33:05 cluster1 pengine[16342]: notice: LogActions: Start FirewallVM#011(cluster1.verolengo.privatelan)
- Jul 2 21:33:05 cluster1 pengine[16342]: notice: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-2.bz2
- Jul 2 21:33:05 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 51: notify FirewallVMDisk_pre_notify_promote_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:33:05 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=133, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:33:05 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 15: promote FirewallVMDisk_promote_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:33:05 cluster1 fence_pcmk[19867]: Requesting Pacemaker fence cluster2.verolengo.privatelan (reset)
- Jul 2 21:33:05 cluster1 stonith_admin[19882]: notice: crm_log_args: Invoked: stonith_admin --reboot cluster2.verolengo.privatelan --tolerance 5s --tag cman
- Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: handle_request: Client stonith_admin.cman.19882.f3cc2071 wants to fence (reboot) 'cluster2.verolengo.privatelan' with device '(any)'
- Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for cluster2.verolengo.privatelan: 33333650-fe63-4c6e-9752-9436019a3f34 (0)
- Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
- Jul 2 21:33:05 cluster1 kernel: block drbd0: role( Secondary -> Primary )
- Jul 2 21:33:05 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
- Jul 2 21:33:06 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_promote_0 (call=136, rc=0, cib-update=74, confirmed=true) ok
- Jul 2 21:33:06 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 52: notify FirewallVMDisk_post_notify_promote_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:33:06 cluster1 crm_node[19943]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:33:06 cluster1 crm_attribute[19944]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
- Jul 2 21:33:06 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=139, rc=0, cib-update=0, confirmed=true) ok
- Jul 2 21:33:06 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 40: start FirewallVM_start_0 on cluster1.verolengo.privatelan (local)
- Jul 2 21:33:06 cluster1 VirtualDomain(FirewallVM)[19953]: INFO: Domain name "firewall" saved to /var/run/resource-agents/VirtualDomain-FirewallVM.state.
- Jul 2 21:33:06 cluster1 auditd[1791]: Audit daemon rotating log files
- Jul 2 21:33:06 cluster1 kernel: device vnet0 entered promiscuous mode
- Jul 2 21:33:06 cluster1 kernel: lan: port 2(vnet0) entering forwarding state
- /var/log/cluster/fenced.log:
- Jul 02 21:32:35 fenced cluster node 2 removed seq 12
- Jul 02 21:32:35 fenced fenced:daemon conf 1 0 1 memb 1 join left 2
- Jul 02 21:32:35 fenced fenced:daemon ring 1:12 1 memb 1
- Jul 02 21:32:35 fenced fenced:default conf 1 0 1 memb 1 join left 2
- Jul 02 21:32:35 fenced add_change cg 3 remove nodeid 2 reason 3
- Jul 02 21:32:35 fenced add_change cg 3 m 1 j 0 r 1 f 1
- Jul 02 21:32:35 fenced add_victims node 2
- Jul 02 21:32:35 fenced check_ringid cluster 12 cpg 1:8
- Jul 02 21:32:35 fenced fenced:default ring 1:12 1 memb 1
- Jul 02 21:32:35 fenced check_ringid done cluster 12 cpg 1:12
- Jul 02 21:32:35 fenced check_quorum done
- Jul 02 21:32:35 fenced send_start 1:3 flags 2 started 2 m 1 j 0 r 1 f 1
- Jul 02 21:32:35 fenced receive_start 1:3 len 152
- Jul 02 21:32:35 fenced match_change 1:3 matches cg 3
- Jul 02 21:32:35 fenced wait_messages cg 3 got all 1
- Jul 02 21:32:35 fenced set_master from 1 to complete node 1
- Jul 02 21:32:35 fenced delay post_fail_delay 30 quorate_from_last_update 0
- Jul 02 21:33:05 fenced delay of 30s leaves 1 victims
- Jul 02 21:33:05 fenced cluster2.verolengo.privatelan not a cluster member after 30 sec post_fail_delay
- Jul 02 21:33:05 fenced fencing node cluster2.verolengo.privatelan
- Jul 02 21:33:10 fenced fence cluster2.verolengo.privatelan dev 0.0 agent fence_pcmk result: success
- Jul 02 21:33:10 fenced fence cluster2.verolengo.privatelan success
- Jul 02 21:33:10 fenced send_victim_done cg 3 flags 2 victim nodeid 2
- Jul 02 21:33:10 fenced send_complete 1:3 flags 2 started 2 m 1 j 0 r 1 f 1
- Jul 02 21:33:10 fenced receive_victim_done 1:3 flags 2 len 80
- Jul 02 21:33:10 fenced receive_victim_done 1:3 remove victim 2 time 1404329590 how 1
- Jul 02 21:33:10 fenced receive_complete 1:3 len 152
- Jul 02 21:33:10 fenced client connection 3 fd 18
- Jul 02 21:33:10 fenced client connection 5 fd 19
- Jul 02 21:33:10 fenced send_external victim nodeid 2
- Jul 02 21:33:10 fenced client connection 3 fd 18
- Jul 02 21:33:10 fenced receive_external from 1 len 40 victim nodeid 2
- Jul 02 21:33:10 fenced send_external victim nodeid 2
- Jul 02 21:33:10 fenced client connection 5 fd 19
- Jul 02 21:33:10 fenced send_external victim nodeid 2
- Jul 02 21:33:10 fenced receive_external from 1 len 40 victim nodeid 2
- Jul 02 21:33:10 fenced receive_external from 1 len 40 victim nodeid 2
- Jul 02 21:33:10 fenced send_external victim nodeid 2
- Jul 02 21:33:10 fenced receive_external from 1 len 40 victim nodeid 2
- Jul 02 21:54:43 fenced cluster node 2 added seq 16
- Jul 02 21:54:43 fenced fenced:daemon conf 2 1 0 memb 1 2 join 2 left
- Jul 02 21:54:43 fenced cpg_mcast_joined retried 1 protocol
- Jul 02 21:54:43 fenced fenced:daemon ring 1:16 2 memb 1 2
- Jul 02 21:54:43 fenced fenced:default ring 1:16 2 memb 1 2
- Jul 02 21:54:43 fenced receive_protocol from 2 max 1.1.1.0 run 1.1.1.0
- Jul 02 21:54:43 fenced daemon node 2 max 0.0.0.0 run 0.0.0.0
- Jul 02 21:54:43 fenced daemon node 2 join 1404330883 left 1404329555 local quorum 1404318281
- Jul 02 21:54:43 fenced receive_protocol from 1 max 1.1.1.0 run 1.1.1.1
- Jul 02 21:54:43 fenced daemon node 1 max 1.1.1.0 run 1.1.1.0
- Jul 02 21:54:43 fenced daemon node 1 join 1404318281 left 0 local quorum 1404318281
- Jul 02 21:54:43 fenced fenced:default conf 2 1 0 memb 1 2 join 2 left
- Jul 02 21:54:43 fenced add_change cg 4 joined nodeid 2
- Jul 02 21:54:43 fenced add_change cg 4 m 2 j 1 r 0 f 0
- Jul 02 21:54:43 fenced check_ringid done cluster 16 cpg 1:16
- Jul 02 21:54:43 fenced check_quorum done
- Jul 02 21:54:43 fenced send_start 1:4 flags 2 started 3 m 2 j 1 r 0 f 0
- Jul 02 21:54:43 fenced receive_start 2:1 len 152
- Jul 02 21:54:43 fenced match_change 2:1 matches cg 4
- Jul 02 21:54:43 fenced wait_messages cg 4 need 1 of 2
- Jul 02 21:54:43 fenced receive_start 1:4 len 152
- Jul 02 21:54:43 fenced match_change 1:4 matches cg 4
- Jul 02 21:54:43 fenced wait_messages cg 4 got all 2
- Jul 02 21:54:43 fenced set_master from 1 to complete node 1
- Jul 02 21:54:43 fenced send_complete 1:4 flags 2 started 3 m 2 j 1 r 0 f 0
- Jul 02 21:54:43 fenced receive_complete 1:4 len 152
- /var/log/cluster/corosync.log:
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7437 id=7f2061a5-3caa-4744-a9b0-fc2533bee520
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7437-14)
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7437]
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_replace operation for section 'all' to master (origin=local/cibadmin/2)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: crm_uptime: Current CPU usage is: 0s, 217966us
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: crm_compare_age: Loose: 0.217966 vs 0.384941 (usec)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: do_election_count_vote: Election 7 (owner: cluster2.verolengo.privatelan) lost: vote from cluster2.verolengo.privatelan (Uptime)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: update_dc: Unset DC. Was cluster2.verolengo.privatelan
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_check: Ignore election check: we not in an election
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=43
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: apply_xml_diff: Digest mis-match: expected b42d1e4a9ade0eb56291c5740d365ab5, calculated 35caf72644b4a1832f5814efb7073232
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: warning: cib_process_diff: Diff 0.14.5 -> 0.21.1 from cluster2.verolengo.privatelan not applied to 0.14.5: Failed application of an update diff
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_server_process_diff: Requesting re-sync from peer
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/34, version=0.14.5)
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7437-14)
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7437-14) state:2
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7437-14-header
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7437-14-header
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7437-14-header
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: notice: cib_server_process_diff: Not applying diff 0.21.1 -> 0.21.2 (sync in progress)
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Digest matched on replace from cluster2.verolengo.privatelan: 05e4243a540acefb01413901a436766c
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Replaced 0.14.5 with 0.21.2 from cluster2.verolengo.privatelan
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_replace op
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_replace_notify: Replaced: 0.14.5 -> 0.21.2 from cluster2.verolengo.privatelan
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=cluster2.verolengo.privatelan/cluster1.verolengo.privatelan/(null), version=0.21.2)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
- Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: info: do_cib_replaced: Updating all attributes after cib_refresh_notify event
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.14.5
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.2 05e4243a540acefb01413901a436766c
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib epoch="14" num_updates="5" admin_epoch="0">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_update_resource" id="cluster2.verolengo.privatelan"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_update_resource" id="cluster1.verolengo.privatelan"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:34 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="2" update-client="cluster1.verolengo.privatelan" update-origin="cluster2.verolengo.privatelan" validate-with="pacemaker-1.2">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <configuration>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <resources>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <master id="FirewallVMDiskClone">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <primitive class="ocf" id="FirewallVMDisk" provider="linbit" type="drbd">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <instance_attributes id="FirewallVMDisk-instance_attributes">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDisk-instance_attributes-drbd_resource" name="drbd_resource" value="firewall_vm"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </instance_attributes>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <operations>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <op id="FirewallVMDisk-monitor-interval-60s" interval="60s" name="monitor"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </operations>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </primitive>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <meta_attributes id="FirewallVMDiskClone-meta_attributes">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-master-max" name="master-max" value="1"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-master-node-max" name="master-node-max" value="1"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-clone-max" name="clone-max" value="2"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-clone-node-max" name="clone-node-max" value="1"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-notify" name="notify" value="true"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-target-role" name="target-role" value="Started"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-is-managed" name="is-managed" value="true"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </meta_attributes>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </master>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <primitive class="ocf" id="FirewallVM" provider="heartbeat" type="VirtualDomain">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <instance_attributes id="FirewallVM-instance_attributes">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-instance_attributes-config" name="config" value="/etc/libvirt/qemu/firewall.xml"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-instance_attributes-migration_transport" name="migration_transport" value="tcp"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-instance_attributes-migration_network_suffix" name="migration_network_suffix" value="-10g"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-instance_attributes-hypervisor" name="hypervisor" value="qemu:///system"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </instance_attributes>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <operations>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <op id="FirewallVM-monitor-interval-60s" interval="60s" name="monitor" timeout="120s"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </operations>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <meta_attributes id="FirewallVM-meta_attributes">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-meta_attributes-allow-migrate" name="allow-migrate" value="false"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-meta_attributes-target-role" name="target-role" value="Started"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-meta_attributes-is-managed" name="is-managed" value="true"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </meta_attributes>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </primitive>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </resources>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <constraints>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <rsc_colocation id="colocation-FirewallVM-FirewallVMDiskClone-INFINITY" rsc="FirewallVM" score="INFINITY" with-rsc="FirewallVMDiskClone" with-rsc-role="Master"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <rsc_order first="FirewallVMDiskClone" first-action="promote" id="order-FirewallVMDiskClone-FirewallVM-mandatory" then="FirewallVM" then-action="start"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <rsc_location id="location-FirewallVM" rsc="FirewallVM">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <rule id="location-FirewallVM-rule" role="master" score="50">
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <expression attribute="#uname" id="location-FirewallVM-rule-expr" operation="eq" value="cluster2.verolengo.privatelan"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </rule>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </rsc_location>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </constraints>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </configuration>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_cib_replaced" crmd="online" expected="member" id="cluster2.verolengo.privatelan" in_ccm="true" join="member" uname="cluster2.verolengo.privatelan"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_cib_replaced" crmd="online" expected="member" id="cluster1.verolengo.privatelan" in_ccm="true" join="member" uname="cluster1.verolengo.privatelan"/>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: apply_xml_diff: Digest mis-match: expected 05e4243a540acefb01413901a436766c, calculated 7ecb35e3adfb1dfe6e0b89973326c17d
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: warning: cib_process_diff: Diff 0.14.5 -> 0.21.2 from local not applied to 0.14.5: Failed application of an update diff
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: cib_apply_patch_event: Update didn't apply: Application of an update diff failed (-206) 0xc8f6a0
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: handle_request: Raising I_JOIN_OFFER: join-6
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: update_dc: Set DC to cluster2.verolengo.privatelan (3.0.7)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/4, version=0.21.2)
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/35, version=0.21.2)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: join_query_callback: Respond to join offer join-6
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: join_query_callback: Acknowledging cluster2.verolengo.privatelan as our DC
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown'] does not exist
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown']: No such device or address (rc=-6, origin=local/attrd/146, version=0.21.2)
- Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for shutdown=(null) passed
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2'] does not exist
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/147, version=0.21.2)
- Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster2=(null) passed
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate'] does not exist
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate']: No such device or address (rc=-6, origin=local/attrd/148, version=0.21.2)
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: update_cib_stonith_devices: Updating device list from the cib: new resource
- Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for terminate=(null) passed
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1'] does not exist
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/149, version=0.21.2)
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: STONITH timeout: 60000
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: STONITH of failed nodes is enabled
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Stop all active resources: false
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Default stickiness: 100
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_domains: Unpacking domains
- Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster1=(null) passed
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1'] does not exist
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/150, version=0.21.2)
- Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster1=(null) passed
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-18.raw
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: native_rsc_location: Constraint (location-FirewallVM-rule) is not active (role : Master vs. Unknown)
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device pdu1 is allowed on cluster1.verolengo.privatelan: score=0
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/151, version=0.21.2)
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: build_port_aliases: Adding alias '6,cluster2.verolengo.privatelan'='7'
- Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/152)
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2'] does not exist
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/153, version=0.21.2)
- Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster2=(null) passed
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.21.0 of the CIB to disk (digest: 128e3a63756a46bf854111d9cd00dfa0)
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_apc (target=(null))
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 128e3a63756a46bf854111d9cd00dfa0 to disk
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.IolctF (digest: /var/lib/pacemaker/cib/cib.IFvyyX)
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.IolctF
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Digest matched on replace from cluster2.verolengo.privatelan: 05e4243a540acefb01413901a436766c
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Replaced 0.21.2 with 0.21.2 from cluster2.verolengo.privatelan
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_replace op
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=cluster2.verolengo.privatelan/crmd/192, version=0.21.2)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: handle_request: Raising I_JOIN_RESULT: join-6
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_finalize_respond: Confirming join join-6: join_ack_nack
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster1 after monitor op complete (interval=0)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster2 after start op complete (interval=0)
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: forking
- Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: sending args
- Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 152 for probe_complete=true passed
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-19.raw
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.21.0 of the CIB to disk (digest: ae291173e3889308a47af0b4e483e71e)
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest ae291173e3889308a47af0b4e483e71e to disk
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.MfJ2tF (digest: /var/lib/pacemaker/cib/cib.4MLfAX)
- Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.MfJ2tF
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: result = 0
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_device_remove: Removed 'ilocluster1' from the device list (2 active devices)
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device ilocluster1 is allowed on cluster1.verolengo.privatelan: score=0
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: result = 0
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: stonith_device_register: Added 'ilocluster1' to the device list (3 active devices)
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device ilocluster2 is allowed on cluster1.verolengo.privatelan: score=0
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: result = 0
- Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: stonith_device_register: Device 'ilocluster2' already existed in device list (3 active devices)
- Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: result = 0
- Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster2 after monitor op complete (interval=60000)
- Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource pdu1 after start op complete (interval=0)
- Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: info: stonith_action_create: Initiating action metadata for agent fence_apc (target=(null))
- Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: forking
- Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: sending args
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: apply_xml_diff: Digest mis-match: expected c907b826d5889d153c0bf9c87584e575, calculated ebb1cd7477ccf2125b2b70c0a4d4c7c4
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: warning: cib_process_diff: Diff 0.21.2 -> 0.21.3 from cluster2.verolengo.privatelan not applied to 0.21.2: Failed application of an update diff
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_server_process_diff: Requesting re-sync from peer
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: notice: cib_server_process_diff: Not applying diff 0.21.3 -> 0.21.4 (sync in progress)
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Digest matched on replace from cluster2.verolengo.privatelan: 5ff8babd23b657f0fc9c197dd9785eee
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Replaced 0.21.2 with 0.21.4 from cluster2.verolengo.privatelan
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_replace op
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_replace_notify: Replaced: 0.21.2 -> 0.21.4 from cluster2.verolengo.privatelan
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=cluster2.verolengo.privatelan/cluster1.verolengo.privatelan/(null), version=0.21.4)
- Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: info: do_cib_replaced: Updating all attributes after cib_refresh_notify event
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown'] does not exist
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown']: No such device or address (rc=-6, origin=local/attrd/154, version=0.21.4)
- Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for shutdown=(null) passed
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.2
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.4 5ff8babd23b657f0fc9c197dd9785eee
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="2">
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_cib_replaced" id="cluster2.verolengo.privatelan"/>
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="4" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_lrm_query_internal" crmd="online" expected="member" id="cluster2.verolengo.privatelan" in_ccm="true" join="member" uname="cluster2.verolengo.privatelan"/>
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2'] does not exist
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/155, version=0.21.4)
- Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster2=(null) passed
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate'] does not exist
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate']: No such device or address (rc=-6, origin=local/attrd/156, version=0.21.4)
- Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for terminate=(null) passed
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1'] does not exist
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/157, version=0.21.4)
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: apply_xml_diff: Digest mis-match: expected 5ff8babd23b657f0fc9c197dd9785eee, calculated b016b615a6579482c48b722d79c29a94
- Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster1=(null) passed
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: warning: cib_process_diff: Diff 0.21.2 -> 0.21.4 from local not applied to 0.21.2: Failed application of an update diff
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: cib_apply_patch_event: Update didn't apply: Application of an update diff failed (-206) 0xbff710
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1'] does not exist
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/158, version=0.21.4)
- Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster1=(null) passed
- Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/159, version=0.21.4)
- Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.21.4)
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/160)
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-20.raw
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2'] does not exist
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/161, version=0.21.4)
- Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster2=(null) passed
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.21.0 of the CIB to disk (digest: 5bc617c273b258d284f9094db00da144)
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 5bc617c273b258d284f9094db00da144 to disk
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.9ykBDH (digest: /var/lib/pacemaker/cib/cib.Y4pmT1)
- Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.9ykBDH
- Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 160 for probe_complete=true passed
- Jul 02 21:29:36 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:29:36 corosync [CMAN ] daemon: client command is 92
- Jul 02 21:29:36 corosync [CMAN ] daemon: About to process command
- Jul 02 21:29:36 corosync [CMAN ] memb: command to process is 92
- Jul 02 21:29:36 corosync [CMAN ] memb: get_extrainfo: allocated new buffer
- Jul 02 21:29:36 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:29:36 corosync [CMAN ] daemon: Returning command data. length = 576
- Jul 02 21:29:36 corosync [CMAN ] daemon: sending reply 40000092 to fd 42
- Jul 02 21:29:36 corosync [CMAN ] daemon: read 0 bytes from fd 42
- Jul 02 21:29:36 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:29:36 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:29:36 corosync [CMAN ] daemon: client command is 92
- Jul 02 21:29:36 corosync [CMAN ] daemon: About to process command
- Jul 02 21:29:36 corosync [CMAN ] memb: command to process is 92
- Jul 02 21:29:36 corosync [CMAN ] memb: get_extrainfo: allocated new buffer
- Jul 02 21:29:36 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:29:36 corosync [CMAN ] daemon: Returning command data. length = 576
- Jul 02 21:29:36 corosync [CMAN ] daemon: sending reply 40000092 to fd 42
- Jul 02 21:29:36 corosync [CMAN ] daemon: read 0 bytes from fd 42
- Jul 02 21:29:36 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:29:36 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:29:36 corosync [CMAN ] daemon: client command is 5
- Jul 02 21:29:36 corosync [CMAN ] daemon: About to process command
- Jul 02 21:29:36 corosync [CMAN ] memb: command to process is 5
- Jul 02 21:29:36 corosync [CMAN ] daemon: Returning command data. length = 0
- Jul 02 21:29:36 corosync [CMAN ] daemon: sending reply 40000005 to fd 42
- Jul 02 21:29:36 corosync [CMAN ] daemon: read 0 bytes from fd 42
- Jul 02 21:29:36 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: result = 0
- Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource pdu1 after monitor op complete (interval=60000)
- Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_finalize_respond: join-6: Join complete. Sending local LRM status to cluster2.verolengo.privatelan
- Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
- Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
- Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
- Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/162, version=0.21.4)
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/163)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: apply_xml_diff: Digest mis-match: expected 8ba4bd1fcf884c4d284d1e0c5cfeaf8a, calculated eae8139bf51884dc2056be813b817c73
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: warning: cib_process_diff: Diff 0.21.4 -> 0.21.5 from cluster2.verolengo.privatelan not applied to 0.21.4: Failed application of an update diff
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_server_process_diff: Requesting re-sync from peer
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 163 for probe_complete=true passed
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: notice: cib_server_process_diff: Not applying diff 0.21.5 -> 0.21.6 (sync in progress)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Digest matched on replace from cluster2.verolengo.privatelan: 95ac07c19ee48d454e3be3a2037a97d2
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Replaced 0.21.4 with 0.21.6 from cluster2.verolengo.privatelan
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_replace op
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_replace_notify: Replaced: 0.21.4 -> 0.21.6 from cluster2.verolengo.privatelan
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=cluster2.verolengo.privatelan/cluster1.verolengo.privatelan/(null), version=0.21.6)
- Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: info: do_cib_replaced: Updating all attributes after cib_refresh_notify event
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.4
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.6 95ac07c19ee48d454e3be3a2037a97d2
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="4">
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_cib_replaced" id="cluster1.verolengo.privatelan"/>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="6" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_lrm_query_internal" crmd="online" expected="member" id="cluster1.verolengo.privatelan" in_ccm="true" join="member" uname="cluster1.verolengo.privatelan"/>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: info: apply_xml_diff: Digest mis-match: expected 95ac07c19ee48d454e3be3a2037a97d2, calculated 0ab4e21fca19df035231f64325d55068
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: warning: cib_process_diff: Diff 0.21.4 -> 0.21.6 from local not applied to 0.21.4: Failed application of an update diff
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: cib_apply_patch_event: Update didn't apply: Application of an update diff failed (-206) 0xc692d0
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=cluster2.verolengo.privatelan/crmd/200, version=0.21.7)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.21.7)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown'] does not exist
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown']: No such device or address (rc=-6, origin=local/attrd/164, version=0.21.7)
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for shutdown=(null) passed
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2'] does not exist
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/165, version=0.21.7)
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster2=(null) passed
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate'] does not exist
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate']: No such device or address (rc=-6, origin=local/attrd/166, version=0.21.7)
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for terminate=(null) passed
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1'] does not exist
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/167, version=0.21.7)
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.6
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.7 aed76545c3632e0c41161416395800bf
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="6">
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_lrm_query_internal" id="cluster2.verolengo.privatelan"/>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_lrm_query_internal" id="cluster1.verolengo.privatelan"/>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster1=(null) passed
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="7" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_state_transition" crmd="online" expected="member" id="cluster2.verolengo.privatelan" in_ccm="true" join="member" uname="cluster2.verolengo.privatelan"/>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_state_transition" crmd="online" expected="member" id="cluster1.verolengo.privatelan" in_ccm="true" join="member" uname="cluster1.verolengo.privatelan"/>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1'] does not exist
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_process_diff: Diff 0.21.6 -> 0.21.7 from local not applied to 0.21.7: current "num_updates" is greater than required
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: cib_apply_patch_event: Update didn't apply: Application of an update diff failed (-206) (nil)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/168, version=0.21.7)
- Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster1=(null) passed
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/7, version=0.21.7)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-21.raw
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/169, version=0.21.7)
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/170)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2'] does not exist
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.21.0 of the CIB to disk (digest: 53c9f22b43e74eb8824fa08c90f1c438)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/171, version=0.21.7)
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster2=(null) passed
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 53c9f22b43e74eb8824fa08c90f1c438 to disk
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.6cnWLJ (digest: /var/lib/pacemaker/cib/cib.aew295)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/172, version=0.21.7)
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/173)
- Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.6cnWLJ
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 170 for probe_complete=true passed
- Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 173 for probe_complete=true passed
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_get_rsc_info: Resource 'FirewallVMDisk' not found (3 active resources)
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_get_rsc_info: Resource 'FirewallVMDisk:1' not found (3 active resources)
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_rsc_register: Added 'FirewallVMDisk' to the rsc list (4 active resources)
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_register operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=1, notify=1, exit=4201792
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
- Jul 02 21:29:37 [16343] cluster1.verolengo.privatelan crmd: info: do_lrm_rsc_op: Performing key=7:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf op=FirewallVMDisk_monitor_0
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_exec operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=59, reply=1, notify=0, exit=4201792
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: log_execute: executing - rsc:FirewallVMDisk action:monitor call_id:59
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_get_rsc_info: Resource 'FirewallVM' not found (4 active resources)
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_rsc_register: Added 'FirewallVM' to the rsc list (5 active resources)
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_register operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=1, notify=1, exit=4201792
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
- Jul 02 21:29:37 [16343] cluster1.verolengo.privatelan crmd: info: do_lrm_rsc_op: Performing key=8:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf op=FirewallVM_monitor_0
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_exec operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=63, reply=1, notify=0, exit=4201792
- Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: log_execute: executing - rsc:FirewallVM action:monitor call_id:63
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVMDisk_monitor_0:7447 - exited with rc=7
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVMDisk_monitor_0:7447:stderr [ -- empty -- ]
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVMDisk_monitor_0:7447:stdout [ -- empty -- ]
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: log_finished: finished - rsc:FirewallVMDisk action:monitor call_id:59 pid:7447 exit-code:7 exec-time:107ms queue-time:0ms
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
- Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: do_update_resource: Updating resource FirewallVMDisk after monitor op complete (interval=0)
- Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: info: services_os_action_execute: Managed drbd_meta-data_0 process 7482 exited with rc=0
- Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: notice: process_lrm_event: LRM operation FirewallVMDisk_monitor_0 (call=59, rc=7, cib-update=36, confirmed=true) not running
- Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: debug: update_history_cache: Updating history for 'FirewallVMDisk' with monitor op
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/36)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=cluster2.verolengo.privatelan/crmd/36, version=0.21.8)
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.7
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.8 51eb1f7a384b4bdfc7122f1fa91340f7
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="7">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_state_transition" id="cluster1.verolengo.privatelan"/>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="8" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <node_state crm-debug-origin="do_update_resource" crmd="online" expected="member" id="cluster1.verolengo.privatelan" in_ccm="true" join="member" uname="cluster1.verolengo.privatelan">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <lrm id="cluster1.verolengo.privatelan">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <lrm_resources>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <lrm_resource id="FirewallVMDisk" type="drbd" class="ocf" provider="linbit">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <lrm_rsc_op id="FirewallVMDisk_last_0" operation_key="FirewallVMDisk_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="7:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf" transition-magic="0:7;7:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf" call-id="59" rc-code="7" op-status="0" interval="0" last-run="1404329377" last-rc-change="1404329377" exec-time="1
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </lrm_resource>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </lrm_resources>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </lrm>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </node_state>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
- VirtualDomain(FirewallVM)[7448]: 2014/07/02_21:29:38 DEBUG: Virtual domain firewall is currently shut off.
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=cluster2.verolengo.privatelan/crmd/204, version=0.21.9)
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.8
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.9 6f2eb1952277bd4cf8fe08b23d60e602
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="8">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_state_transition" id="cluster2.verolengo.privatelan"/>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="9" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <node_state crm-debug-origin="do_update_resource" crmd="online" expected="member" id="cluster2.verolengo.privatelan" in_ccm="true" join="member" uname="cluster2.verolengo.privatelan">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <lrm id="cluster2.verolengo.privatelan">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <lrm_resources>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <lrm_resource id="FirewallVMDisk" type="drbd" class="ocf" provider="linbit">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <lrm_rsc_op id="FirewallVMDisk_last_0" operation_key="FirewallVMDisk_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="10:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf" transition-magic="0:7;10:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf" call-id="53" rc-code="7" op-status="0" interval="0" last-run="1404329376" last-rc-change="1404329376" exec-time=
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </lrm_resource>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </lrm_resources>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </lrm>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </node_state>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7519 id=d8299391-583e-4188-9823-579b7cf339ca
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7519-14)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7519]
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signon_raw: Connection to CIB successful
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.21.9)
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH timeout: 60000
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH of failed nodes is enabled
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Stop all active resources: false
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Default stickiness: 100
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_domains: Unpacking domains
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster2.verolengo.privatelan is active
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster2.verolengo.privatelan is online
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster1.verolengo.privatelan is active
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster1.verolengo.privatelan is online
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster2.verolengo.privatelan to FirewallVMDisk:0
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster1.verolengo.privatelan to FirewallVMDisk:0
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: dump_resource_attr: Looking up cpu in FirewallVM
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signoff: Signing out of the CIB Service
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7519-14-header
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7519-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7519-14)
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7519-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7519-14) state:2
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: warning: main: Error performing operation: No such device or address
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7519-14-header
- Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7519-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7519-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7521 id=2eea3180-c533-4911-871b-7203ebbb7ea7
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7521-14)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7521]
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signon_raw: Connection to CIB successful
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.21.9)
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH timeout: 60000
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH of failed nodes is enabled
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Stop all active resources: false
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Default stickiness: 100
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_domains: Unpacking domains
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster2.verolengo.privatelan is active
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster2.verolengo.privatelan is online
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster1.verolengo.privatelan is active
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster1.verolengo.privatelan is online
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster2.verolengo.privatelan to FirewallVMDisk:0
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster1.verolengo.privatelan to FirewallVMDisk:0
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/configuration/resources//*[@id="FirewallVM"]/utilization//nvpair[@name="cpu"] does not exist
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/configuration/resources//*[@id="FirewallVM"]/utilization//nvpair[@name="cpu"]: No such device or address (rc=-6, origin=local/crm_resource/3, version=0.21.9)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/crm_resource/4, version=0.21.9)
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <primitive id="FirewallVM">
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <utilization id="FirewallVM-utilization">
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <nvpair id="FirewallVM-utilization-cpu" name="cpu" value="1"/>
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update </utilization>
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update </primitive>
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section resources to master (origin=local/crm_resource/5)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section resources: OK (rc=0, origin=cluster2.verolengo.privatelan/crm_resource/5, version=0.22.1)
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.9
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signoff: Signing out of the CIB Service
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.22.1 02f47d524d7d6e1940f81bbceb545514
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <cib epoch="21" num_updates="9" admin_epoch="0"/>
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7521-14-header
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:38 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="22" have-quorum="1" num_updates="1" update-client="crm_resource" update-origin="cluster2.verolengo.privatelan" validate-with="pacemaker-1.2">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <configuration>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <resources>
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7521-14-header
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <primitive class="ocf" id="FirewallVM" provider="heartbeat" type="VirtualDomain">
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7521-14-header
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <utilization id="FirewallVM-utilization">
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-utilization-cpu" name="cpu" value="1"/>
- Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </utilization>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </primitive>
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7521-14)
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </resources>
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7521-14) state:2
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </configuration>
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7521-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7521-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7521-14-header
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: update_cib_stonith_devices: Updating device list from the cib: new resource
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: STONITH timeout: 60000
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: STONITH of failed nodes is enabled
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Stop all active resources: false
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Default stickiness: 100
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_domains: Unpacking domains
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: native_rsc_location: Constraint (location-FirewallVM-rule) is not active (role : Master vs. Unknown)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-22.raw
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device pdu1 is allowed on cluster1.verolengo.privatelan: score=0
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: build_port_aliases: Adding alias '6,cluster2.verolengo.privatelan'='7'
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_apc (target=(null))
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.22.0 of the CIB to disk (digest: 984a57adfc11c3f9f0bfc89feab27302)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 984a57adfc11c3f9f0bfc89feab27302 to disk
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.ZuNewM (digest: /var/lib/pacemaker/cib/cib.HGeHEb)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.ZuNewM
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7529 id=3e2870c9-d691-4390-a960-a8554f85ea72
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7529-14)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7529]
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signon_raw: Connection to CIB successful
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.22.1)
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH timeout: 60000
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH of failed nodes is enabled
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Stop all active resources: false
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Default stickiness: 100
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_domains: Unpacking domains
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster2.verolengo.privatelan is active
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster2.verolengo.privatelan is online
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster1.verolengo.privatelan is active
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster1.verolengo.privatelan is online
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster2.verolengo.privatelan to FirewallVMDisk:0
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster1.verolengo.privatelan to FirewallVMDisk:0
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: dump_resource_attr: Looking up hv_memory in FirewallVM
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signoff: Signing out of the CIB Service
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7529-14-header
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7529-14-header
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7529-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7529-14)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7529-14) state:2
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: warning: main: Error performing operation: No such device or address
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
- Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7529-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7529-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7529-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7531 id=eb8a4d5e-3947-4b58-a051-619f4ae3d652
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7531-14)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7531]
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signon_raw: Connection to CIB successful
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.22.1)
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: result = 0
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_device_remove: Removed 'ilocluster1' from the device list (2 active devices)
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device ilocluster1 is allowed on cluster1.verolengo.privatelan: score=0
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH timeout: 60000
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH of failed nodes is enabled
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Stop all active resources: false
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Default stickiness: 100
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: notice: unpack_config: On loss of CCM Quorum: Ignore
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_domains: Unpacking domains
- Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster2.verolengo.privatelan is active
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster2.verolengo.privatelan is online
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster1.verolengo.privatelan is active
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster1.verolengo.privatelan is online
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster2.verolengo.privatelan to FirewallVMDisk:0
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster1.verolengo.privatelan to FirewallVMDisk:0
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/configuration/resources//*[@id="FirewallVM"]/utilization//nvpair[@name="hv_memory"] does not exist
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/configuration/resources//*[@id="FirewallVM"]/utilization//nvpair[@name="hv_memory"]: No such device or address (rc=-6, origin=local/crm_resource/3, version=0.22.1)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/crm_resource/4, version=0.22.1)
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <primitive id="FirewallVM">
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <utilization id="FirewallVM-utilization">
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <nvpair id="FirewallVM-utilization-hv_memory" name="hv_memory" value="1024"/>
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update </utilization>
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update </primitive>
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section resources to master (origin=local/crm_resource/5)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section resources: OK (rc=0, origin=cluster2.verolengo.privatelan/crm_resource/5, version=0.23.1)
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signoff: Signing out of the CIB Service
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7531-14-header
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7531-14-header
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7531-14-header
- Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: crm_xml_cleanup: Cleaning up memory from libxml2
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7531-14)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7531-14) state:2
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7531-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7531-14-header
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7531-14-header
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVM_monitor_0:7448 - exited with rc=7
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVM_monitor_0:7448:stderr [ -- empty -- ]
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVM_monitor_0:7448:stdout [ -- empty -- ]
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: log_finished: finished - rsc:FirewallVM action:monitor call_id:63 pid:7448 exit-code:7 exec-time:347ms queue-time:0ms
- Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
- Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: do_update_resource: Updating resource FirewallVM after monitor op complete (interval=0)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-23.raw
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.23.0 of the CIB to disk (digest: 44fe45fef61c86411eb51e4314d1e883)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 44fe45fef61c86411eb51e4314d1e883 to disk
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.jrhLEM (digest: /var/lib/pacemaker/cib/cib.eIQIVb)
- Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.jrhLEM
- Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 90
- Jul 02 21:29:38 corosync [CMAN ] daemon: About to process command
- Jul 02 21:29:38 corosync [CMAN ] memb: command to process is 90
- Jul 02 21:29:38 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:29:38 corosync [CMAN ] daemon: Returning command data. length = 440
- Jul 02 21:29:38 corosync [CMAN ] daemon: sending reply 40000090 to fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: read 0 bytes from fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 5
- Jul 02 21:29:38 corosync [CMAN ] daemon: About to process command
- Jul 02 21:29:38 corosync [CMAN ] memb: command to process is 5
- Jul 02 21:29:38 corosync [CMAN ] daemon: Returning command data. length = 0
- Jul 02 21:29:38 corosync [CMAN ] daemon: sending reply 40000005 to fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: read 0 bytes from fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:29:38 corosync [CMAN ] daemon: About to process command
- Jul 02 21:29:38 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:29:38 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:29:38 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:29:38 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:29:38 corosync [CMAN ] daemon: sending reply 40000007 to fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:29:38 corosync [CMAN ] daemon: About to process command
- Jul 02 21:29:38 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:29:38 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:29:38 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:29:38 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:29:38 corosync [CMAN ] daemon: sending reply 40000007 to fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: read 0 bytes from fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 91
- ...
- Jul 02 21:32:35 corosync [TOTEM ] entering GATHER state from 0.
- Jul 02 21:32:35 corosync [TOTEM ] Creating commit token because I am the rep.
- Jul 02 21:32:35 corosync [TOTEM ] Saving state aru 4f4 high seq received 4f4
- Jul 02 21:32:35 corosync [TOTEM ] Storing new sequence id for ring c
- Jul 02 21:32:35 corosync [TOTEM ] entering COMMIT state.
- Jul 02 21:32:35 corosync [TOTEM ] got commit token
- Jul 02 21:32:35 corosync [TOTEM ] entering RECOVERY state.
- Jul 02 21:32:35 corosync [TOTEM ] TRANS [0] member 172.16.100.1:
- Jul 02 21:32:35 corosync [TOTEM ] position [0] member 172.16.100.1:
- Jul 02 21:32:35 corosync [TOTEM ] previous ring seq 8 rep 172.16.100.1
- Jul 02 21:32:35 corosync [TOTEM ] aru 4f4 high delivered 4f4 received flag 1
- Jul 02 21:32:35 corosync [TOTEM ] Did not need to originate any messages in recovery.
- Jul 02 21:32:35 corosync [TOTEM ] got commit token
- Jul 02 21:32:35 corosync [TOTEM ] Sending initial ORF token
- Jul 02 21:32:35 corosync [TOTEM ] got commit token
- Jul 02 21:32:35 corosync [TOTEM ] Sending initial ORF token
- Jul 02 21:32:35 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
- Jul 02 21:32:35 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
- Jul 02 21:32:35 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
- Jul 02 21:32:35 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
- Jul 02 21:32:35 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
- Jul 02 21:32:35 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
- Jul 02 21:32:35 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
- Jul 02 21:32:35 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
- Jul 02 21:32:35 corosync [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0
- Jul 02 21:32:35 corosync [TOTEM ] Resetting old ring state
- Jul 02 21:32:35 corosync [TOTEM ] recovery to regular 1-0
- Jul 02 21:32:35 corosync [CMAN ] ais: confchg_fn called type = 1, seq=12
- Jul 02 21:32:35 corosync [CMAN ] memb: del_ais_node 2
- Jul 02 21:32:35 corosync [CMAN ] memb: del_ais_node cluster2.verolengo.privatelan, leave_reason=1
- Jul 02 21:32:35 corosync [QUORUM] Members[1]: 1
- Jul 02 21:32:35 corosync [QUORUM] sending quorum notification to (nil), length = 52
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 102 to fd 23
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 102 to fd 25
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 102 to fd 30
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 102 to fd 39
- Jul 02 21:32:35 corosync [TOTEM ] waiting_trans_ack changed to 1
- Jul 02 21:32:35 corosync [CMAN ] ais: confchg_fn called type = 0, seq=12
- Jul 02 21:32:35 corosync [CMAN ] ais: last memb_count = 2, current = 1
- Jul 02 21:32:35 corosync [CMAN ] memb: sending TRANSITION message. cluster_name = vclu
- Jul 02 21:32:35 corosync [CMAN ] ais: comms send message 0x7fffb697d0c0 len = 65
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 103 to fd 39
- Jul 02 21:32:35 corosync [SYNC ] This node is within the primary component and will provide service.
- Jul 02 21:32:35 corosync [TOTEM ] entering OPERATIONAL state.
- Jul 02 21:32:35 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
- Jul 02 21:32:35 corosync [CMAN ] ais: deliver_fn source nodeid = 1, len=81, endian_conv=0
- Jul 02 21:32:35 corosync [CMAN ] memb: Message on port 0 is 5
- Jul 02 21:32:35 corosync [CMAN ] memb: got TRANSITION from node 1
- Jul 02 21:32:35 corosync [CMAN ] memb: Got TRANSITION message. msg->flags=20, node->flags=20, first_trans=0
- Jul 02 21:32:35 corosync [CMAN ] memb: add_ais_node ID=1, incarnation = 12
- Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
- Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
- Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (dummy CLM service)
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 25
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 25
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 30
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 30
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 25
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 5
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 5
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000005 to fd 25
- Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: pcmk_cpg_membership: Left[2.0] stonith-ng.2
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node cluster2.verolengo.privatelan[2] - corosync-cpg is now offline
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: st_peer_update_callback: Broadcasting our uname because of node 2
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: pcmk_cpg_membership: Left[2.0] cib.2
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node cluster2.verolengo.privatelan[2] - corosync-cpg is now offline
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: pcmk_cpg_membership: Member[2.0] cib.1
- Jul 02 21:32:35 [16341] cluster1.verolengo.privatelan attrd: info: pcmk_cpg_membership: Left[2.0] attrd.2
- Jul 02 21:32:35 [16341] cluster1.verolengo.privatelan attrd: info: crm_update_peer_proc: pcmk_cpg_membership: Node cluster2.verolengo.privatelan[2] - corosync-cpg is now offline
- Jul 02 21:32:35 [16341] cluster1.verolengo.privatelan attrd: info: pcmk_cpg_membership: Member[2.0] attrd.1
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: cman_event_callback: Membership 12: quorum retained
- Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
- Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (dummy CLM service)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: notice: crm_update_peer_state: cman_event_callback: Node cluster2.verolengo.privatelan[2] - state is now lost (was member)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: peer_update_callback: cluster2.verolengo.privatelan is now lost (was member)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: post_cache_update: Updated cache after membership event 12.
- Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (dummy AMF service)
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 30
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: warning: reap_dead_nodes: Our DC node (cluster2.verolengo.privatelan) left the cluster
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 30
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 25
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 25
- Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
- Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
- Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (dummy AMF service)
- Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (openais checkpoint service B.01.01)
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 39
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 39
- Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
- Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
- Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (openais checkpoint service B.01.01)
- Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (dummy EVT service)
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 23
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 23
- Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
- Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
- Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (dummy EVT service)
- Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (corosync cluster closed process group service v1.01)
- Jul 02 21:32:35 corosync [CPG ] comparing: sender r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) ; members(old:2 left:1)
- Jul 02 21:32:35 corosync [CPG ] chosen downlist: sender r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) ; members(old:2 left:1)
- Jul 02 21:32:35 corosync [CPG ] got joinlist message from node 0x1
- Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
- Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
- Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
- Jul 02 21:32:35 corosync [CPG ] joinlist_messages[0] group:crmd\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16343
- Jul 02 21:32:35 corosync [CPG ] joinlist_messages[1] group:attrd\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16341
- Jul 02 21:32:35 corosync [CPG ] joinlist_messages[2] group:stonith-ng\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16339
- Jul 02 21:32:35 corosync [CPG ] joinlist_messages[3] group:cib\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16338
- Jul 02 21:32:35 corosync [CPG ] joinlist_messages[4] group:pacemakerd\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16332
- Jul 02 21:32:35 corosync [CPG ] joinlist_messages[5] group:gfs:controld\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:14955
- Jul 02 21:32:35 corosync [CPG ] joinlist_messages[6] group:dlm:controld\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:14904
- Jul 02 21:32:35 corosync [CPG ] joinlist_messages[7] group:fenced:default\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:14883
- Jul 02 21:32:35 corosync [CPG ] joinlist_messages[8] group:fenced:daemon\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:14883
- Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (corosync cluster closed process group service v1.01)
- Jul 02 21:32:35 corosync [MAIN ] Completed service synchronization, ready to provide service.
- Jul 02 21:32:35 corosync [TOTEM ] waiting_trans_ack changed to 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 23
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 5
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: pcmk_cpg_membership: Member[2.0] stonith-ng.1
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=reap_dead_nodes ]
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_command: Processing st_query 0 from cluster1.verolengo.privatelan ( 0)
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: create_remote_stonith_op: 431c73e1-feab-4f4a-b34d-5da097144e67 already exists
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_query: Query <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c73e1-feab-4f4a-b34d-5da097144e67" st_op="st_query" st_callid="2" st_callopt="0" st_remote_op="431c73e1-feab-4f4a-b34d-5da097144e67" st_target="cluster2.verolengo.privatelan" st_device_action="reboot" st_origin="cluster1.verolengo.privatelan" st_clientid="09c220bd-a4e4-4321-bb72-9c60c6d14bc3" st_clientname="stonith_admin.cman.8331" st
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: get_capable_devices: Searching through 3 devices to see what is capable of action (reboot) for target cluster2.verolengo.privatelan
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 5
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000005 to fd 23
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=reap_dead_nodes ]
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: update_dc: Unset DC. Was cluster2.verolengo.privatelan
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crm_uptime: Current CPU usage is: 0s, 253961us
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 39
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_vote: Started election 3
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: search_devices_record_result: Finished Search. 2 devices can perform action (reboot) on node cluster2.verolengo.privatelan
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=49
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_query_capable_device_cb: Found 2 matching devices for 'cluster2.verolengo.privatelan'
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_check: Still waiting on 1 non-votes (1 total)
- Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: pcmk_cpg_membership: Left[2.0] crmd.2
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node cluster2.verolengo.privatelan[2] - corosync-cpg is now offline
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: peer_update_callback: Client cluster2.verolengo.privatelan/peer now has status [offline] (DC=<null>)
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_command: Processed st_query from cluster1.verolengo.privatelan: OK (0)
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 39
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: pcmk_cpg_membership: Member[2.0] crmd.1
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 23
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_command: Processing st_notify reply 0 from cluster1.verolengo.privatelan ( 0)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_count_vote: Created voted hash
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_count_vote: Election 3 (current: 3, owner: cluster1.verolengo.privatelan): Processed vote from cluster1.verolengo.privatelan (Recorded)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_check: Destroying voted hash
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: process_remote_stonith_exec: Marking call to reboot for cluster2.verolengo.privatelan on behalf of stonith_admin.cman.8331@431c73e1-feab-4f4a-b34d-5da097144e67.cluster1: Timer expired (-62)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 23
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: error: remote_op_done: Operation reboot of cluster2.verolengo.privatelan by cluster1.verolengo.privatelan for stonith_admin.cman.8331@cluster1.verolengo.privatelan.431c73e1: Timer expired
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_te_control: Registering TE UUID: e00f9cce-2413-4314-aa71-d67a9a71ebc8
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 23
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_command: Processed st_notify reply from cluster1.verolengo.privatelan: OK (0)
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 90
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 90
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_command: Processing st_query reply 0 from cluster1.verolengo.privatelan ( 0)
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: process_remote_stonith_query: Query result 1 of 2 from cluster1.verolengo.privatelan (2 devices)
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 440
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_command: Processed st_query reply from cluster1.verolengo.privatelan: OK (0)
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000090 to fd 23
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for crmd (a8651380-7c7e-4b12-b48a-f07f70ea8b06): on
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: set_graph_functions: Setting custom graph functions
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_te_control: Transitioner is now active
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
- Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: info: crm_client_new: Connecting 0x1167ad0 for uid=0 gid=0 pid=16343 id=93e77802-a7e6-4839-aa81-477d6cfd0bce
- Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: handle_new_connection: IPC credentials authenticated (16342-16343-6)
- Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: qb_ipcs_shm_connect: connecting to client [16343]
- Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_ipcs_dispatch_connection_request: HUP conn (16339-8331-12)
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16339-8331-12) state:2
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: crm_client_destroy: Destroying 0 events
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-stonith-ng-response-16339-8331-12-header
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-stonith-ng-event-16339-8331-12-header
- Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-stonith-ng-request-16339-8331-12-header
- Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
- Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=51
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_dc_takeover: Taking over DC status for this partition
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_readwrite: We are now in R/W mode
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/40, version=0.23.19)
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/41, version=0.23.19)
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/42, version=0.23.19)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-14.el6_5.3-368c726"/>
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/43, version=0.23.19)
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/44, version=0.23.19)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman"/>
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: initialize_join: join-1: Initializing join data (flag=true)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: join_make_offer: Making join offers based on membership 12
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: join_make_offer: join-1: Sending offer to cluster1.verolengo.privatelan
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_join: join_make_offer: Node cluster1.verolengo.privatelan[1] - join-1 phase 0 -> 1
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: handle_request: Raising I_JOIN_OFFER: join-1
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: notice: tengine_stonith_notify: Peer cluster2.verolengo.privatelan was not terminated (reboot) by cluster1.verolengo.privatelan for cluster1.verolengo.privatelan: Timer expired (ref=431c73e1-feab-4f4a-b34d-5da097144e67) by client stonith_admin.cman.8331
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: update_dc: Set DC to cluster1.verolengo.privatelan (3.0.7)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/45, version=0.23.19)
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/46, version=0.23.19)
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/47, version=0.23.19)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: config_query_callback: Call 46 : Parsing CIB options
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: config_query_callback: Checking for expired actions every 900000ms
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: join_query_callback: Respond to join offer join-1
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: join_query_callback: Acknowledging cluster1.verolengo.privatelan as our DC
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_filter_offer: Processing req from cluster1.verolengo.privatelan
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_filter_offer: join-1: Welcoming node cluster1.verolengo.privatelan (ref join_request-crmd-1404329555-36)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_join: do_dc_join_filter_offer: Node cluster1.verolengo.privatelan[1] - join-1 phase 1 -> 2
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node cluster1.verolengo.privatelan[1] - expected state is now member
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-1
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: check_join_state: join-1: Integration of 1 peers complete: do_dc_join_filter_offer
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_state_transition: All 1 cluster nodes responded to the join offer.
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=55
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_finalize: Finializing join-1 for 1 clients
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crmd_join_phase_log: join-1: cluster2.verolengo.privatelan=none
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crmd_join_phase_log: join-1: cluster1.verolengo.privatelan=integrated
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_finalize: Requested version <generation_tuple admin_epoch="0" cib-last-written="Wed Jul 2 21:29:37 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="23" have-quorum="1" num_updates="19" update-client="crm_resource" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2"/>
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: debug: sync_our_cib: Syncing CIB to all peers
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/48, version=0.23.19)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: check_join_state: join-1: Still waiting on 1 integrated nodes
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crmd_join_phase_log: join-1: cluster2.verolengo.privatelan=none
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crmd_join_phase_log: join-1: cluster1.verolengo.privatelan=integrated
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: finalize_sync_callback: Notifying 1 clients of join-1 results
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: finalize_join_for: join-1: ACK'ing join request from cluster1.verolengo.privatelan
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_join: finalize_join_for: Node cluster1.verolengo.privatelan[1] - join-1 phase 2 -> 3
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: handle_request: Raising I_JOIN_RESULT: join-1
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_finalize_respond: Confirming join join-1: join_ack_nack
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource FirewallVM after monitor op complete (interval=0)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource FirewallVMDisk after start op complete (interval=0)
- Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/49, version=0.23.19)
- Jul 02 21:32:35 corosync [CONFDB] lib_init_fn: conn=0x13c4f20
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 90
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 90
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 440
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000090 to fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 0 bytes from fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 5
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 5
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000005 to fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 0 bytes from fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 43
- Jul 02 21:32:35 corosync [CONFDB] exit_fn for conn=0x13c4f20
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 0 bytes from fd 43
- Jul 02 21:32:35 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 5
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 5
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000005 to fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
- Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 90
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 90
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 440
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000090 to fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 0 bytes from fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: Freed 0 queued messages
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: services_os_action_execute: Managed drbd_meta-data_0 process 14431 exited with rc=0
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource FirewallVMDisk after promote op Timed Out (interval=0)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster1 after monitor op complete (interval=0)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster2 after start op complete (interval=0)
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: forking
- Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: sending args
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 9
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 9
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 16
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000009 to fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 92
- Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
- Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 92
- Jul 02 21:32:35 corosync [CMAN ] memb: get_extrainfo: allocated new buffer
- Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
- Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 576
- Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000092 to fd 42
- Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42