DRBD active/passive on a Pacemaker+CMAN cluster: unexpected STONITH

a guest
Jul 3rd, 2014
359
0
Never
Not a member of Pastebin yet? Sign Up, it unlocks many cool features!
text 213.31 KB | None | 0 0
  1. On cluster1:
  2.  
  3. /var/log/messages:
  4.  
  5. Jul 2 21:23:43 cluster1 cibadmin[26066]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin -c -R --xml-text <constraints><rsc_colocation id="colocation-FirewallVM-FirewallVMDiskClone-INFINITY" rsc="FirewallVM" score="INFINITY" with-rsc="FirewallVMDiskClone" with-rsc-role="Master"/></constraints>
  6. Jul 2 21:23:48 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  7. Jul 2 21:23:57 cluster1 cibadmin[26319]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin -o constraints -R --xml-text <constraints>#012 <rsc_colocation id="colocation-FirewallVM-FirewallVMDiskClone-INFINITY" rsc="FirewallVM" score="INFINITY" with-rsc="FirewallVMDiskClone" with-rsc-role="Master"/>#012<rsc_order first="FirewallVMDiskClone" first-action="promote" id="order-FirewallVMDiskClone-FirewallVM-mandatory" then="FirewallVM" then-action="start"/></constraints>
  8. Jul 2 21:24:09 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  9. Jul 2 21:24:20 cluster1 cibadmin[27540]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin --replace -o configuration -V -X <cib admin_epoch="0" cib-last-written="Wed Jul 2 19:07:10 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="19" have-quorum="1" num_updates="1" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">#012 <configuration>#012 <crm_config>#012 <cluster_property_set id="cib-bootstrap-options">#012 <nvpair id="cib-bootstrap-options-dc-version" name="d
  10. Jul 2 21:24:30 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  11. Jul 2 21:24:51 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  12. Jul 2 21:25:12 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  13. Jul 2 21:25:33 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  14. Jul 2 21:25:54 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  15. Jul 2 21:26:15 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  16. Jul 2 21:26:36 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  17. Jul 2 21:26:57 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  18. Jul 2 21:27:16 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  19. Jul 2 21:27:18 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  20. Jul 2 21:27:21 cluster1 rsyslogd-2177: imuxsock lost 2 messages from pid 14829 due to rate-limiting
  21. Jul 2 21:27:26 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  22. Jul 2 21:27:31 cluster1 rsyslogd-2177: imuxsock lost 2 messages from pid 14829 due to rate-limiting
  23. Jul 2 21:27:36 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  24. Jul 2 21:27:39 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  25. Jul 2 21:27:41 cluster1 rsyslogd-2177: imuxsock lost 2 messages from pid 14829 due to rate-limiting
  26. Jul 2 21:27:46 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  27. Jul 2 21:27:51 cluster1 rsyslogd-2177: imuxsock lost 2 messages from pid 14829 due to rate-limiting
  28. Jul 2 21:27:59 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  29. Jul 2 21:28:21 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  30. Jul 2 21:28:31 cluster1 kernel: ACPI Error: SMBus or IPMI write requires Buffer of length 42, found length 20 (20090903/exfield-286)
  31. Jul 2 21:28:31 cluster1 kernel: ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PMI0._PMM] (Node ffff88031ad476c8), AE_AML_BUFFER_LIMIT
  32. Jul 2 21:28:31 cluster1 kernel: ACPI Exception: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20090903/power_meter-341)
  33. Jul 2 21:28:42 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  34. Jul 2 21:29:03 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  35. Jul 2 21:29:16 cluster1 cibadmin[6209]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin --replace --xml-file firewall_cfg
  36. Jul 2 21:29:24 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  37. Jul 2 21:29:34 cluster1 cibadmin[7437]: notice: crm_log_args: Invoked: /usr/sbin/cibadmin --replace --xml-file firewall_cfg
  38. Jul 2 21:29:34 cluster1 crmd[16343]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  39. Jul 2 21:29:34 cluster1 cib[16338]: warning: cib_process_diff: Diff 0.14.5 -> 0.21.1 from cluster2.verolengo.privatelan not applied to 0.14.5: Failed application of an update diff
  40. Jul 2 21:29:34 cluster1 cib[16338]: notice: cib_server_process_diff: Not applying diff 0.21.1 -> 0.21.2 (sync in progress)
  41. Jul 2 21:29:34 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.14.5 -> 0.21.2 from local not applied to 0.14.5: Failed application of an update diff
  42. Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  43. Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: unpack_config: On loss of CCM Quorum: Ignore
  44. Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
  45. Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: stonith_device_register: Added 'ilocluster1' to the device list (3 active devices)
  46. Jul 2 21:29:34 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'ilocluster2' already existed in device list (3 active devices)
  47. Jul 2 21:29:35 cluster1 cib[16338]: warning: cib_process_diff: Diff 0.21.2 -> 0.21.3 from cluster2.verolengo.privatelan not applied to 0.21.2: Failed application of an update diff
  48. Jul 2 21:29:35 cluster1 cib[16338]: notice: cib_server_process_diff: Not applying diff 0.21.3 -> 0.21.4 (sync in progress)
  49. Jul 2 21:29:35 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.21.2 -> 0.21.4 from local not applied to 0.21.2: Failed application of an update diff
  50. Jul 2 21:29:35 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  51. Jul 2 21:29:36 cluster1 attrd[16341]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  52. Jul 2 21:29:36 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  53. Jul 2 21:29:36 cluster1 crmd[16343]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
  54. Jul 2 21:29:36 cluster1 cib[16338]: warning: cib_process_diff: Diff 0.21.4 -> 0.21.5 from cluster2.verolengo.privatelan not applied to 0.21.4: Failed application of an update diff
  55. Jul 2 21:29:36 cluster1 cib[16338]: notice: cib_server_process_diff: Not applying diff 0.21.5 -> 0.21.6 (sync in progress)
  56. Jul 2 21:29:36 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.21.4 -> 0.21.6 from local not applied to 0.21.4: Failed application of an update diff
  57. Jul 2 21:29:36 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  58. Jul 2 21:29:36 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  59. Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_monitor_0 (call=59, rc=7, cib-update=36, confirmed=true) not running
  60. Jul 2 21:29:38 cluster1 crm_resource[7519]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  61. Jul 2 21:29:38 cluster1 crm_resource[7521]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  62. Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: unpack_config: On loss of CCM Quorum: Ignore
  63. Jul 2 21:29:38 cluster1 crm_resource[7529]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  64. Jul 2 21:29:38 cluster1 crm_resource[7531]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  65. Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
  66. Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVM_monitor_0 (call=63, rc=7, cib-update=37, confirmed=true) not running
  67. Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Added 'ilocluster1' to the device list (3 active devices)
  68. Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'ilocluster2' already existed in device list (3 active devices)
  69. Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: unpack_config: On loss of CCM Quorum: Ignore
  70. Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
  71. Jul 2 21:29:38 cluster1 kernel: drbd: events: mcg drbd: 2
  72. Jul 2 21:29:38 cluster1 kernel: drbd: initialized. Version: 8.4.5 (api:1/proto:86-101)
  73. Jul 2 21:29:38 cluster1 kernel: drbd: GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by manutenzione@buildhost.local, 2014-06-23 02:06:26
  74. Jul 2 21:29:38 cluster1 kernel: drbd: registered as block device major 147
  75. Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Added 'ilocluster1' to the device list (3 active devices)
  76. Jul 2 21:29:38 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'ilocluster2' already existed in device list (3 active devices)
  77. Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: Starting worker thread (from drbdsetup-84 [7653])
  78. Jul 2 21:29:38 cluster1 kernel: block drbd0: disk( Diskless -> Attaching )
  79. Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: Method to ensure write ordering: drain
  80. Jul 2 21:29:38 cluster1 kernel: block drbd0: max BIO size = 1048576
  81. Jul 2 21:29:38 cluster1 kernel: block drbd0: drbd_bm_resize called with capacity == 104854328
  82. Jul 2 21:29:38 cluster1 kernel: block drbd0: resync bitmap: bits=13106791 words=204794 pages=400
  83. Jul 2 21:29:38 cluster1 kernel: block drbd0: size = 50 GB (52427164 KB)
  84. Jul 2 21:29:38 cluster1 kernel: block drbd0: recounting of set bits took additional 3 jiffies
  85. Jul 2 21:29:38 cluster1 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
  86. Jul 2 21:29:38 cluster1 kernel: block drbd0: disk( Attaching -> Consistent )
  87. Jul 2 21:29:38 cluster1 kernel: block drbd0: attached to UUIDs C8E866487C11E78E:0000000000000000:B67C3AAE2BFFED00:0000000000000004
  88. Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: conn( StandAlone -> Unconnected )
  89. Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: Starting receiver thread (from drbd_w_firewall [7654])
  90. Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: receiver (re)started
  91. Jul 2 21:29:38 cluster1 kernel: drbd firewall_vm: conn( Unconnected -> WFConnection )
  92. Jul 2 21:29:38 cluster1 crm_node[7692]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  93. Jul 2 21:29:38 cluster1 crm_attribute[7693]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  94. Jul 2 21:29:38 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (5)
  95. Jul 2 21:29:38 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 177: master-FirewallVMDisk=5
  96. Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_start_0 (call=67, rc=0, cib-update=38, confirmed=true) ok
  97. Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=70, rc=0, cib-update=0, confirmed=true) ok
  98. Jul 2 21:29:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=73, rc=0, cib-update=0, confirmed=true) ok
  99. Jul 2 21:29:39 cluster1 kernel: drbd firewall_vm: Handshake successful: Agreed network protocol version 101
  100. Jul 2 21:29:39 cluster1 kernel: drbd firewall_vm: Agreed to support TRIM on protocol level
  101. Jul 2 21:29:39 cluster1 kernel: drbd firewall_vm: conn( WFConnection -> WFReportParams )
  102. Jul 2 21:29:39 cluster1 kernel: drbd firewall_vm: Starting asender thread (from drbd_r_firewall [7677])
  103. Jul 2 21:29:39 cluster1 kernel: block drbd0: drbd_sync_handshake:
  104. Jul 2 21:29:39 cluster1 kernel: block drbd0: self C8E866487C11E78E:0000000000000000:B67C3AAE2BFFED00:0000000000000004 bits:0 flags:0
  105. Jul 2 21:29:39 cluster1 kernel: block drbd0: peer C8E866487C11E78E:0000000000000000:B67C3AAE2BFFED01:0000000000000004 bits:0 flags:0
  106. Jul 2 21:29:39 cluster1 kernel: block drbd0: uuid_compare()=0 by rule 40
  107. Jul 2 21:29:39 cluster1 kernel: block drbd0: peer( Unknown -> Secondary ) conn( WFReportParams -> Connected ) disk( Consistent -> UpToDate ) pdsk( DUnknown -> UpToDate )
  108. Jul 2 21:29:44 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  109. Jul 2 21:29:54 cluster1 crm_node[8043]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  110. Jul 2 21:29:54 cluster1 crm_attribute[8044]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  111. Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (10000)
  112. Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 183: master-FirewallVMDisk=10000
  113. Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 185: master-FirewallVMDisk=10000
  114. Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=76, rc=0, cib-update=0, confirmed=true) ok
  115. Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=79, rc=0, cib-update=0, confirmed=true) ok
  116. Jul 2 21:29:54 cluster1 crm_node[8104]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  117. Jul 2 21:29:54 cluster1 crm_attribute[8105]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  118. Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=82, rc=0, cib-update=0, confirmed=true) ok
  119. Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=85, rc=0, cib-update=0, confirmed=true) ok
  120. Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: peer( Secondary -> Unknown ) conn( Connected -> TearDown ) pdsk( UpToDate -> DUnknown )
  121. Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: asender terminated
  122. Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: Terminating drbd_a_firewall
  123. Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: Connection closed
  124. Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: conn( TearDown -> Unconnected )
  125. Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: receiver terminated
  126. Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: Restarting receiver thread
  127. Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: receiver (re)started
  128. Jul 2 21:29:54 cluster1 kernel: drbd firewall_vm: conn( Unconnected -> WFConnection )
  129. Jul 2 21:29:54 cluster1 cib[16338]: warning: cib_process_diff: Diff 0.23.14 -> 0.23.15 from cluster2.verolengo.privatelan not applied to 0.23.14: Failed application of an update diff
  130. Jul 2 21:29:54 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.23.14 -> 0.23.15 from local not applied to 0.23.14: Failed application of an update diff
  131. Jul 2 21:29:54 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  132. Jul 2 21:29:54 cluster1 crm_node[8166]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  133. Jul 2 21:29:54 cluster1 crm_attribute[8167]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  134. Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (1000)
  135. Jul 2 21:29:54 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 201: master-FirewallVMDisk=1000
  136. Jul 2 21:29:54 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=88, rc=0, cib-update=0, confirmed=true) ok
  137. Jul 2 21:29:55 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=91, rc=0, cib-update=0, confirmed=true) ok
  138. Jul 2 21:29:55 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=94, rc=0, cib-update=0, confirmed=true) ok
  139. Jul 2 21:29:55 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=97, rc=0, cib-update=0, confirmed=true) ok
  140. Jul 2 21:29:55 cluster1 kernel: drbd firewall_vm: helper command: /sbin/drbdadm fence-peer firewall_vm
  141. Jul 2 21:29:55 cluster1 rhcs_fence: Attempting to fence peer using RHCS from DRBD...
  142. Jul 2 21:29:55 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  143. Jul 2 21:29:55 cluster1 kernel: drbd firewall_vm: Handshake successful: Agreed network protocol version 101
  144. Jul 2 21:29:55 cluster1 kernel: drbd firewall_vm: Agreed to support TRIM on protocol level
  145. Jul 2 21:29:55 cluster1 fence_pcmk[8330]: Requesting Pacemaker fence cluster2.verolengo.privatelan (reset)
  146. Jul 2 21:29:55 cluster1 stonith_admin[8331]: notice: crm_log_args: Invoked: stonith_admin --reboot cluster2.verolengo.privatelan --tolerance 5s --tag cman
  147. Jul 2 21:29:55 cluster1 stonith-ng[16339]: notice: handle_request: Client stonith_admin.cman.8331.09c220bd wants to fence (reboot) 'cluster2.verolengo.privatelan' with device '(any)'
  148. Jul 2 21:29:55 cluster1 stonith-ng[16339]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for cluster2.verolengo.privatelan: 431c73e1-feab-4f4a-b34d-5da097144e67 (0)
  149. Jul 2 21:29:58 cluster1 rsyslogd-2177: imuxsock lost 143 messages from pid 14829 due to rate-limiting
  150. Jul 2 21:30:04 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  151. Jul 2 21:30:15 cluster1 lrmd[16340]: warning: child_timeout_callback: FirewallVMDisk_promote_0 process (PID 8278) timed out
  152. Jul 2 21:30:15 cluster1 lrmd[16340]: warning: operation_finished: FirewallVMDisk_promote_0:8278 - timed out after 20000ms
  153. Jul 2 21:30:15 cluster1 crmd[16343]: error: process_lrm_event: LRM operation FirewallVMDisk_promote_0 (100) Timed Out (timeout=20000ms)
  154. Jul 2 21:30:25 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  155. Jul 2 21:30:45 cluster1 crmd[16343]: warning: cib_rsc_callback: Resource update 39 failed: (rc=-62) Timer expired
  156. Jul 2 21:30:46 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  157. Jul 2 21:31:07 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  158. Jul 2 21:31:26 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  159. Jul 2 21:31:35 cluster1 corosync[14829]: [TOTEM ] A processor failed, forming new configuration.
  160. Jul 2 21:31:47 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  161. Jul 2 21:32:09 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  162. Jul 2 21:32:30 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  163. Jul 2 21:32:35 cluster1 corosync[14829]: [QUORUM] Members[1]: 1
  164. Jul 2 21:32:35 cluster1 corosync[14829]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
  165. Jul 2 21:32:35 cluster1 kernel: dlm: closing connection to node 2
  166. Jul 2 21:32:35 cluster1 crmd[16343]: notice: crm_update_peer_state: cman_event_callback: Node cluster2.verolengo.privatelan[2] - state is now lost (was member)
  167. Jul 2 21:32:35 cluster1 crmd[16343]: warning: reap_dead_nodes: Our DC node (cluster2.verolengo.privatelan) left the cluster
  168. Jul 2 21:32:35 cluster1 corosync[14829]: [CPG ] chosen downlist: sender r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) ; members(old:2 left:1)
  169. Jul 2 21:32:35 cluster1 corosync[14829]: [MAIN ] Completed service synchronization, ready to provide service.
  170. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
  171. Jul 2 21:32:35 cluster1 crmd[16343]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=reap_dead_nodes ]
  172. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
  173. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
  174. Jul 2 21:32:35 cluster1 stonith-ng[16339]: error: remote_op_done: Operation reboot of cluster2.verolengo.privatelan by cluster1.verolengo.privatelan for stonith_admin.cman.8331@cluster1.verolengo.privatelan.431c73e1: Timer expired
  175. Jul 2 21:32:35 cluster1 crmd[16343]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
  176. Jul 2 21:32:35 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  177. Jul 2 21:32:35 cluster1 crmd[16343]: notice: tengine_stonith_notify: Peer cluster2.verolengo.privatelan was not terminated (reboot) by cluster1.verolengo.privatelan for cluster1.verolengo.privatelan: Timer expired (ref=431c73e1-feab-4f4a-b34d-5da097144e67) by client stonith_admin.cman.8331
  178. Jul 2 21:32:35 cluster1 fence_pcmk[8330]: Call to fence cluster2.verolengo.privatelan (reset) failed with rc=194
  179. Jul 2 21:32:35 cluster1 fence_node[8314]: fence cluster2.verolengo.privatelan failed
  180. Jul 2 21:32:35 cluster1 kernel: drbd firewall_vm: helper command: /sbin/drbdadm fence-peer firewall_vm exit code 1 (0x100)
  181. Jul 2 21:32:35 cluster1 kernel: drbd firewall_vm: fence-peer helper broken, returned 1
  182. Jul 2 21:32:35 cluster1 kernel: drbd firewall_vm: helper command: /sbin/drbdadm fence-peer firewall_vm
  183. Jul 2 21:32:35 cluster1 rhcs_fence: Attempting to fence peer using RHCS from DRBD...
  184. Jul 2 21:32:35 cluster1 fence_pcmk[14463]: Requesting Pacemaker fence cluster2.verolengo.privatelan (reset)
  185. Jul 2 21:32:35 cluster1 stonith_admin[14464]: notice: crm_log_args: Invoked: stonith_admin --reboot cluster2.verolengo.privatelan --tolerance 5s --tag cman
  186. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: handle_request: Client stonith_admin.cman.14464.c3d3c0ba wants to fence (reboot) 'cluster2.verolengo.privatelan' with device '(any)'
  187. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for cluster2.verolengo.privatelan: f8cd3a1a-31eb-4f72-b4bc-e5d3ff2df420 (0)
  188. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
  189. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
  190. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
  191. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
  192. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
  193. Jul 2 21:32:35 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
  194. Jul 2 21:32:37 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.23.19 -> 0.23.20 from local not applied to 0.23.19: Failed application of an update diff
  195. Jul 2 21:32:37 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  196. Jul 2 21:32:37 cluster1 attrd[16341]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  197. Jul 2 21:32:37 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (1000)
  198. Jul 2 21:32:37 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  199. Jul 2 21:32:37 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  200. Jul 2 21:32:37 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  201. Jul 2 21:32:37 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  202. Jul 2 21:32:38 cluster1 rsyslogd-2177: imuxsock lost 300 messages from pid 14829 due to rate-limiting
  203. Jul 2 21:32:38 cluster1 pengine[16342]: notice: unpack_config: On loss of CCM Quorum: Ignore
  204. Jul 2 21:32:38 cluster1 pengine[16342]: warning: pe_fence_node: Node cluster2.verolengo.privatelan will be fenced because the node is no longer part of the cluster
  205. Jul 2 21:32:38 cluster1 pengine[16342]: warning: determine_online_status: Node cluster2.verolengo.privatelan is unclean
  206. Jul 2 21:32:38 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:0 on cluster2.verolengo.privatelan: unknown error (1)
  207. Jul 2 21:32:38 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:1 on cluster1.verolengo.privatelan: unknown error (1)
  208. Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action ilocluster1_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
  209. Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action FirewallVMDisk:0_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
  210. Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action FirewallVMDisk:0_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
  211. Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action FirewallVMDisk:0_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
  212. Jul 2 21:32:38 cluster1 pengine[16342]: warning: custom_action: Action FirewallVMDisk:0_stop_0 on cluster2.verolengo.privatelan is unrunnable (offline)
  213. Jul 2 21:32:38 cluster1 pengine[16342]: warning: stage6: Scheduling Node cluster2.verolengo.privatelan for STONITH
  214. Jul 2 21:32:38 cluster1 pengine[16342]: notice: LogActions: Move ilocluster1#011(Started cluster2.verolengo.privatelan -> cluster1.verolengo.privatelan)
  215. Jul 2 21:32:38 cluster1 pengine[16342]: notice: LogActions: Stop FirewallVMDisk:0#011(cluster2.verolengo.privatelan)
  216. Jul 2 21:32:38 cluster1 pengine[16342]: notice: LogActions: Demote FirewallVMDisk:1#011(Master -> Slave cluster1.verolengo.privatelan)
  217. Jul 2 21:32:38 cluster1 pengine[16342]: notice: LogActions: Recover FirewallVMDisk:1#011(Master cluster1.verolengo.privatelan)
  218. Jul 2 21:32:38 cluster1 pengine[16342]: warning: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-warn-0.bz2
  219. Jul 2 21:32:38 cluster1 crmd[16343]: notice: te_fence_node: Executing off fencing operation (43) on cluster2.verolengo.privatelan (timeout=60000)
  220. Jul 2 21:32:38 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 52: notify FirewallVMDisk_pre_notify_demote_0 on cluster1.verolengo.privatelan (local)
  221. Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: handle_request: Client crmd.16343.9d15f38a wants to fence (off) 'cluster2.verolengo.privatelan' with device '(any)'
  222. Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: initiate_remote_stonith_op: Initiating remote operation off for cluster2.verolengo.privatelan: 01c5ed1f-c015-4109-879b-ac87bf1efe8c (0)
  223. Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
  224. Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
  225. Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
  226. Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
  227. Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
  228. Jul 2 21:32:38 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
  229. Jul 2 21:32:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=103, rc=0, cib-update=0, confirmed=true) ok
  230. Jul 2 21:32:38 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 15: demote FirewallVMDisk_demote_0 on cluster1.verolengo.privatelan (local)
  231. Jul 2 21:32:38 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_demote_0 (call=106, rc=0, cib-update=57, confirmed=true) ok
  232. Jul 2 21:32:38 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 53: notify FirewallVMDisk_post_notify_demote_0 on cluster1.verolengo.privatelan (local)
  233. Jul 2 21:32:39 cluster1 crm_node[14545]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  234. Jul 2 21:32:39 cluster1 crm_attribute[14546]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  235. Jul 2 21:32:39 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=109, rc=0, cib-update=0, confirmed=true) ok
  236. Jul 2 21:32:39 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 51: notify FirewallVMDisk_pre_notify_stop_0 on cluster1.verolengo.privatelan (local)
  237. Jul 2 21:32:39 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=112, rc=0, cib-update=0, confirmed=true) ok
  238. Jul 2 21:32:39 cluster1 stonith-ng[16339]: error: log_operation: Operation 'reboot' [14469] (call 2 from stonith_admin.cman.14464) for host 'cluster2.verolengo.privatelan' with device 'pdu1' returned: 1 (Operation not permitted). Trying: ilocluster2
  239. Jul 2 21:32:39 cluster1 stonith-ng[16339]: warning: log_operation: pdu1:14469 [ Parse error: Ignoring unknown option 'nodename=cluster2.verolengo.privatelan' ]
  240. Jul 2 21:32:39 cluster1 stonith-ng[16339]: warning: log_operation: pdu1:14469 [ Failed: Unable to obtain correct plug status or plug is not available ]
  241. Jul 2 21:32:42 cluster1 stonith-ng[16339]: error: log_operation: Operation 'off' [14581] (call 2 from crmd.16343) for host 'cluster2.verolengo.privatelan' with device 'pdu1' returned: 1 (Operation not permitted). Trying: ilocluster2
  242. Jul 2 21:32:42 cluster1 stonith-ng[16339]: warning: log_operation: pdu1:14581 [ Parse error: Ignoring unknown option 'nodename=cluster2.verolengo.privatelan' ]
  243. Jul 2 21:32:42 cluster1 stonith-ng[16339]: warning: log_operation: pdu1:14581 [ Failed: Unable to obtain correct plug status or plug is not available ]
  244. Jul 2 21:32:42 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  245. Jul 2 21:32:44 cluster1 rsyslogd-2177: imuxsock lost 42 messages from pid 14829 due to rate-limiting
  246. Jul 2 21:32:48 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  247. Jul 2 21:32:49 cluster1 kernel: netxen_nic: eth1 NIC Link is down
  248. Jul 2 21:32:49 cluster1 kernel: bonding: bond2: link status definitely down for interface eth1, disabling it
  249. Jul 2 21:32:50 cluster1 kernel: e1000e: eth3 NIC Link is Down
  250. Jul 2 21:32:50 cluster1 kernel: e1000e: eth4 NIC Link is Down
  251. Jul 2 21:32:50 cluster1 kernel: bonding: bond1: link status definitely down for interface eth3, disabling it
  252. Jul 2 21:32:50 cluster1 kernel: bonding: bond1: now running without any active interface !
  253. Jul 2 21:32:50 cluster1 kernel: bonding: bond1: link status definitely down for interface eth4, disabling it
  254. Jul 2 21:32:50 cluster1 stonith-ng[16339]: notice: log_operation: Operation 'reboot' [14578] (call 2 from stonith_admin.cman.14464) for host 'cluster2.verolengo.privatelan' with device 'ilocluster2' returned: 0 (OK)
  255. Jul 2 21:32:50 cluster1 stonith-ng[16339]: notice: remote_op_done: Operation reboot of cluster2.verolengo.privatelan by cluster1.verolengo.privatelan for stonith_admin.cman.14464@cluster1.verolengo.privatelan.f8cd3a1a: OK
  256. Jul 2 21:32:50 cluster1 crmd[16343]: notice: tengine_stonith_notify: Peer cluster2.verolengo.privatelan was terminated (reboot) by cluster1.verolengo.privatelan for cluster1.verolengo.privatelan: OK (ref=f8cd3a1a-31eb-4f72-b4bc-e5d3ff2df420) by client stonith_admin.cman.14464
  257. Jul 2 21:32:50 cluster1 crmd[16343]: notice: tengine_stonith_notify: Notified CMAN that 'cluster2.verolengo.privatelan' is now fenced
  258. Jul 2 21:32:50 cluster1 rsyslogd-2177: imuxsock lost 27 messages from pid 14829 due to rate-limiting
  259. Jul 2 21:32:50 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.23.25 -> 0.23.26 from local not applied to 0.23.25: Failed application of an update diff
  260. Jul 2 21:32:50 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  261. Jul 2 21:32:50 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  262. Jul 2 21:32:50 cluster1 fence_node[14447]: fence cluster2.verolengo.privatelan success
  263. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: helper command: /sbin/drbdadm fence-peer firewall_vm exit code 7 (0x700)
  264. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: fence-peer helper returned 7 (peer was stonithed)
  265. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: pdsk( DUnknown -> Outdated )
  266. Jul 2 21:32:50 cluster1 kernel: block drbd0: role( Secondary -> Primary )
  267. Jul 2 21:32:50 cluster1 kernel: block drbd0: new current UUID 0BCECDF42C97B34F:C8E866487C11E78E:B67C3AAE2BFFED00:0000000000000004
  268. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: conn( WFConnection -> WFReportParams )
  269. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: Starting asender thread (from drbd_r_firewall [7677])
  270. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: meta connection shut down by peer.
  271. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: conn( WFReportParams -> NetworkFailure )
  272. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: asender terminated
  273. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: Terminating drbd_a_firewall
  274. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: Connection closed
  275. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: conn( NetworkFailure -> Unconnected )
  276. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: receiver terminated
  277. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: Restarting receiver thread
  278. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: receiver (re)started
  279. Jul 2 21:32:50 cluster1 kernel: drbd firewall_vm: conn( Unconnected -> WFConnection )
  280. Jul 2 21:32:50 cluster1 kernel: netxen_nic: eth0 NIC Link is down
  281. Jul 2 21:32:50 cluster1 snmp-ups[3334]: dstate_setflags: base variable (battery.runtime.low) is immutable
  282. Jul 2 21:32:51 cluster1 kernel: bonding: bond2: link status definitely down for interface eth0, disabling it
  283. Jul 2 21:32:51 cluster1 kernel: bonding: bond2: now running without any active interface !
  284. Jul 2 21:32:53 cluster1 cmanicd: Entering iml_log_link_down(slot: 5, port: 2)
  285. Jul 2 21:32:53 cluster1 cmanicd: Entering log_iml_event(slot: 5, port: 2, code: (Down,2))
  286. Jul 2 21:32:53 cluster1 cmanicd: Entering get_event_id(slot: 5, port: 2
  287. Jul 2 21:32:53 cluster1 cmanicd: Existing event id(29) found for the slot and port.
  288. Jul 2 21:32:53 cluster1 cmanicd: Entering read_iml_event(slot: 5, port: 2, eventid: 29)
  289. Jul 2 21:32:53 cluster1 cmanicd: Calling ioctl() to read event id: 29)
  290. Jul 2 21:32:53 cluster1 cmanicd: Successfully read the event id: 29)
  291. Jul 2 21:32:53 cluster1 cmanicd: Trying to modify the existing IML Event.
  292. Jul 2 21:32:53 cluster1 cmanicd: Successfully updated the existing IML Event.
  293. Jul 2 21:32:53 cluster1 cmanicd: Returning from log_iml_event().
  294. Jul 2 21:32:54 cluster1 cmanicd: Entering iml_log_link_down(slot: 6, port: 2)
  295. Jul 2 21:32:54 cluster1 cmanicd: Entering log_iml_event(slot: 6, port: 2, code: (Down,2))
  296. Jul 2 21:32:54 cluster1 cmanicd: Entering get_event_id(slot: 6, port: 2
  297. Jul 2 21:32:54 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  298. Jul 2 21:32:54 cluster1 cmanicd: Existing event id(27) found for the slot and port.
  299. Jul 2 21:32:54 cluster1 cmanicd: Entering read_iml_event(slot: 6, port: 2, eventid: 27)
  300. Jul 2 21:32:54 cluster1 cmanicd: Calling ioctl() to read event id: 27)
  301. Jul 2 21:32:54 cluster1 cmanicd: Successfully read the event id: 27)
  302. Jul 2 21:32:54 cluster1 cmanicd: Trying to modify the existing IML Event.
  303. Jul 2 21:32:54 cluster1 cmanicd: Successfully updated the existing IML Event.
  304. Jul 2 21:32:54 cluster1 cmanicd: Returning from log_iml_event().
  305. Jul 2 21:32:54 cluster1 stonith-ng[16339]: notice: log_operation: Operation 'off' [15566] (call 2 from crmd.16343) for host 'cluster2.verolengo.privatelan' with device 'ilocluster2' returned: 0 (OK)
  306. Jul 2 21:32:54 cluster1 stonith-ng[16339]: notice: remote_op_done: Operation off of cluster2.verolengo.privatelan by cluster1.verolengo.privatelan for crmd.16343@cluster1.verolengo.privatelan.01c5ed1f: OK
  307. Jul 2 21:32:55 cluster1 crmd[16343]: notice: tengine_stonith_callback: Stonith operation 2/43:0:0:e00f9cce-2413-4314-aa71-d67a9a71ebc8: OK (0)
  308. Jul 2 21:32:55 cluster1 crmd[16343]: notice: tengine_stonith_notify: Peer cluster2.verolengo.privatelan was terminated (off) by cluster1.verolengo.privatelan for cluster1.verolengo.privatelan: OK (ref=01c5ed1f-c015-4109-879b-ac87bf1efe8c) by client crmd.16343
  309. Jul 2 21:32:55 cluster1 crmd[16343]: notice: tengine_stonith_notify: Notified CMAN that 'cluster2.verolengo.privatelan' is now fenced
  310. Jul 2 21:32:55 cluster1 crmd[16343]: notice: run_graph: Transition 0 (Complete=15, Pending=0, Fired=0, Skipped=13, Incomplete=7, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
  311. Jul 2 21:32:55 cluster1 pengine[16342]: notice: unpack_config: On loss of CCM Quorum: Ignore
  312. Jul 2 21:32:55 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:0 on cluster1.verolengo.privatelan: unknown error (1)
  313. Jul 2 21:32:55 cluster1 pengine[16342]: notice: LogActions: Start ilocluster1#011(cluster1.verolengo.privatelan)
  314. Jul 2 21:32:55 cluster1 pengine[16342]: notice: LogActions: Recover FirewallVMDisk:0#011(Slave cluster1.verolengo.privatelan)
  315. Jul 2 21:32:55 cluster1 pengine[16342]: notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-0.bz2
  316. Jul 2 21:32:55 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 9: start ilocluster1_start_0 on cluster1.verolengo.privatelan (local)
  317. Jul 2 21:32:55 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 46: notify FirewallVMDisk_pre_notify_stop_0 on cluster1.verolengo.privatelan (local)
  318. Jul 2 21:32:55 cluster1 stonith-ng[16339]: notice: stonith_device_register: Device 'ilocluster1' already existed in device list (3 active devices)
  319. Jul 2 21:32:55 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=117, rc=0, cib-update=0, confirmed=true) ok
  320. Jul 2 21:32:55 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 1: stop FirewallVMDisk_stop_0 on cluster1.verolengo.privatelan (local)
  321. Jul 2 21:32:55 cluster1 drbd(FirewallVMDisk)[17894]: WARNING: firewall_vm still Primary, demoting.
  322. Jul 2 21:32:55 cluster1 kernel: block drbd0: role( Primary -> Secondary )
  323. Jul 2 21:32:55 cluster1 kernel: block drbd0: bitmap WRITE of 0 pages took 0 jiffies
  324. Jul 2 21:32:55 cluster1 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
  325. Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: conn( WFConnection -> Disconnecting )
  326. Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: Discarding network configuration.
  327. Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: Connection closed
  328. Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: conn( Disconnecting -> StandAlone )
  329. Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: receiver terminated
  330. Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: Terminating drbd_r_firewall
  331. Jul 2 21:32:55 cluster1 kernel: block drbd0: disk( UpToDate -> Failed )
  332. Jul 2 21:32:55 cluster1 kernel: block drbd0: bitmap WRITE of 0 pages took 0 jiffies
  333. Jul 2 21:32:55 cluster1 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
  334. Jul 2 21:32:55 cluster1 kernel: block drbd0: disk( Failed -> Diskless )
  335. Jul 2 21:32:55 cluster1 kernel: block drbd0: drbd_bm_resize called with capacity == 0
  336. Jul 2 21:32:55 cluster1 kernel: drbd firewall_vm: Terminating drbd_w_firewall
  337. Jul 2 21:32:55 cluster1 udevd-work[3677]: error opening ATTR{/sys/devices/virtual/block/drbd0/queue/iosched/slice_idle} for writing: No such file or directory
  338. Jul 2 21:32:55 cluster1 udevd-work[3677]: error opening ATTR{/sys/devices/virtual/block/drbd0/queue/iosched/quantum} for writing: No such file or directory
  339. Jul 2 21:32:56 cluster1 cmanicd: Entering iml_log_link_down(slot: 6, port: 3)
  340. Jul 2 21:32:56 cluster1 cmanicd: Entering log_iml_event(slot: 6, port: 3, code: (Down,2))
  341. Jul 2 21:32:56 cluster1 cmanicd: Entering get_event_id(slot: 6, port: 3
  342. Jul 2 21:32:56 cluster1 crm_node[18699]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  343. Jul 2 21:32:56 cluster1 rsyslogd-2177: imuxsock lost 102 messages from pid 14829 due to rate-limiting
  344. Jul 2 21:32:56 cluster1 crm_attribute[18700]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  345. Jul 2 21:32:56 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (<null>)
  346. Jul 2 21:32:56 cluster1 attrd[16341]: notice: attrd_perform_update: Sent delete 209: node=cluster1.verolengo.privatelan, attr=master-FirewallVMDisk, id=<n/a>, set=(null), section=status
  347. Jul 2 21:32:56 cluster1 stonith-ng[16339]: warning: cib_process_diff: Diff 0.23.27 -> 0.23.28 from local not applied to 0.23.27: Failed application of an update diff
  348. Jul 2 21:32:56 cluster1 stonith-ng[16339]: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  349. Jul 2 21:32:56 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_stop_0 (call=120, rc=0, cib-update=68, confirmed=true) ok
  350. Jul 2 21:32:56 cluster1 cmanicd: Existing event id(28) found for the slot and port.
  351. Jul 2 21:32:56 cluster1 cmanicd: Entering read_iml_event(slot: 6, port: 3, eventid: 28)
  352. Jul 2 21:32:56 cluster1 cmanicd: Calling ioctl() to read event id: 28)
  353. Jul 2 21:32:56 cluster1 cmanicd: Successfully read the event id: 28)
  354. Jul 2 21:32:56 cluster1 cmanicd: Trying to modify the existing IML Event.
  355. Jul 2 21:32:56 cluster1 cmanicd: Successfully updated the existing IML Event.
  356. Jul 2 21:32:56 cluster1 cmanicd: Returning from log_iml_event().
  357. Jul 2 21:33:00 cluster1 cmanicd: Entering iml_log_link_down(slot: 5, port: 1)
  358. Jul 2 21:33:00 cluster1 cmanicd: Entering log_iml_event(slot: 5, port: 1, code: (Down,2))
  359. Jul 2 21:33:00 cluster1 cmanicd: Entering get_event_id(slot: 5, port: 1
  360. Jul 2 21:33:00 cluster1 cmanicd: Existing event id(30) found for the slot and port.
  361. Jul 2 21:33:00 cluster1 cmanicd: Entering read_iml_event(slot: 5, port: 1, eventid: 30)
  362. Jul 2 21:33:00 cluster1 cmanicd: Calling ioctl() to read event id: 30)
  363. Jul 2 21:33:00 cluster1 cmanicd: Successfully read the event id: 30)
  364. Jul 2 21:33:00 cluster1 cmanicd: Trying to modify the existing IML Event.
  365. Jul 2 21:33:00 cluster1 cmanicd: Successfully updated the existing IML Event.
  366. Jul 2 21:33:00 cluster1 cmanicd: Returning from log_iml_event().
  367. Jul 2 21:33:00 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  368. Jul 2 21:33:01 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation ilocluster1_start_0 (call=115, rc=0, cib-update=69, confirmed=true) ok
  369. Jul 2 21:33:01 cluster1 crmd[16343]: notice: run_graph: Transition 1 (Complete=9, Pending=0, Fired=0, Skipped=7, Incomplete=4, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Stopped
  370. Jul 2 21:33:01 cluster1 pengine[16342]: notice: unpack_config: On loss of CCM Quorum: Ignore
  371. Jul 2 21:33:01 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:0 on cluster1.verolengo.privatelan: unknown error (1)
  372. Jul 2 21:33:01 cluster1 pengine[16342]: notice: LogActions: Start FirewallVMDisk:0#011(cluster1.verolengo.privatelan)
  373. Jul 2 21:33:01 cluster1 pengine[16342]: notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-1.bz2
  374. Jul 2 21:33:01 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 10: monitor ilocluster1_monitor_60000 on cluster1.verolengo.privatelan (local)
  375. Jul 2 21:33:01 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 13: start FirewallVMDisk_start_0 on cluster1.verolengo.privatelan (local)
  376. Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: Starting worker thread (from drbdsetup-84 [19730])
  377. Jul 2 21:33:01 cluster1 kernel: block drbd0: disk( Diskless -> Attaching )
  378. Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: Method to ensure write ordering: drain
  379. Jul 2 21:33:01 cluster1 kernel: block drbd0: max BIO size = 1048576
  380. Jul 2 21:33:01 cluster1 kernel: block drbd0: drbd_bm_resize called with capacity == 104854328
  381. Jul 2 21:33:01 cluster1 kernel: block drbd0: resync bitmap: bits=13106791 words=204794 pages=400
  382. Jul 2 21:33:01 cluster1 kernel: block drbd0: size = 50 GB (52427164 KB)
  383. Jul 2 21:33:01 cluster1 kernel: block drbd0: recounting of set bits took additional 3 jiffies
  384. Jul 2 21:33:01 cluster1 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
  385. Jul 2 21:33:01 cluster1 kernel: block drbd0: disk( Attaching -> UpToDate ) pdsk( DUnknown -> Outdated )
  386. Jul 2 21:33:01 cluster1 kernel: block drbd0: attached to UUIDs 0BCECDF42C97B34F:C8E866487C11E78E:B67C3AAE2BFFED00:0000000000000004
  387. Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: conn( StandAlone -> Unconnected )
  388. Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: Starting receiver thread (from drbd_w_firewall [19732])
  389. Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: receiver (re)started
  390. Jul 2 21:33:01 cluster1 kernel: drbd firewall_vm: conn( Unconnected -> WFConnection )
  391. Jul 2 21:33:01 cluster1 crm_node[19771]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  392. Jul 2 21:33:01 cluster1 crm_attribute[19772]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  393. Jul 2 21:33:01 cluster1 attrd[16341]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-FirewallVMDisk (10000)
  394. Jul 2 21:33:01 cluster1 attrd[16341]: notice: attrd_perform_update: Sent update 213: master-FirewallVMDisk=10000
  395. Jul 2 21:33:01 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_start_0 (call=126, rc=0, cib-update=71, confirmed=true) ok
  396. Jul 2 21:33:01 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 45: notify FirewallVMDisk_post_notify_start_0 on cluster1.verolengo.privatelan (local)
  397. Jul 2 21:33:01 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=129, rc=0, cib-update=0, confirmed=true) ok
  398. Jul 2 21:33:02 cluster1 rsyslogd-2177: imuxsock lost 57 messages from pid 14829 due to rate-limiting
  399. Jul 2 21:33:05 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation ilocluster1_monitor_60000 (call=124, rc=0, cib-update=72, confirmed=false) ok
  400. Jul 2 21:33:05 cluster1 crmd[16343]: notice: run_graph: Transition 2 (Complete=9, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Stopped
  401. Jul 2 21:33:05 cluster1 pengine[16342]: notice: unpack_config: On loss of CCM Quorum: Ignore
  402. Jul 2 21:33:05 cluster1 pengine[16342]: warning: unpack_rsc_op: Processing failed op promote for FirewallVMDisk:0 on cluster1.verolengo.privatelan: unknown error (1)
  403. Jul 2 21:33:05 cluster1 fenced[14883]: fencing node cluster2.verolengo.privatelan
  404. Jul 2 21:33:05 cluster1 pengine[16342]: notice: LogActions: Promote FirewallVMDisk:0#011(Slave -> Master cluster1.verolengo.privatelan)
  405. Jul 2 21:33:05 cluster1 pengine[16342]: notice: LogActions: Start FirewallVM#011(cluster1.verolengo.privatelan)
  406. Jul 2 21:33:05 cluster1 pengine[16342]: notice: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-2.bz2
  407. Jul 2 21:33:05 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 51: notify FirewallVMDisk_pre_notify_promote_0 on cluster1.verolengo.privatelan (local)
  408. Jul 2 21:33:05 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=133, rc=0, cib-update=0, confirmed=true) ok
  409. Jul 2 21:33:05 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 15: promote FirewallVMDisk_promote_0 on cluster1.verolengo.privatelan (local)
  410. Jul 2 21:33:05 cluster1 fence_pcmk[19867]: Requesting Pacemaker fence cluster2.verolengo.privatelan (reset)
  411. Jul 2 21:33:05 cluster1 stonith_admin[19882]: notice: crm_log_args: Invoked: stonith_admin --reboot cluster2.verolengo.privatelan --tolerance 5s --tag cman
  412. Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: handle_request: Client stonith_admin.cman.19882.f3cc2071 wants to fence (reboot) 'cluster2.verolengo.privatelan' with device '(any)'
  413. Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for cluster2.verolengo.privatelan: 33333650-fe63-4c6e-9752-9436019a3f34 (0)
  414. Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
  415. Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
  416. Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
  417. Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
  418. Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
  419. Jul 2 21:33:05 cluster1 stonith-ng[16339]: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
  420. Jul 2 21:33:05 cluster1 kernel: block drbd0: role( Secondary -> Primary )
  421. Jul 2 21:33:05 cluster1 rsyslogd-2177: imuxsock begins to drop messages from pid 14829 due to rate-limiting
  422. Jul 2 21:33:06 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_promote_0 (call=136, rc=0, cib-update=74, confirmed=true) ok
  423. Jul 2 21:33:06 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 52: notify FirewallVMDisk_post_notify_promote_0 on cluster1.verolengo.privatelan (local)
  424. Jul 2 21:33:06 cluster1 crm_node[19943]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  425. Jul 2 21:33:06 cluster1 crm_attribute[19944]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
  426. Jul 2 21:33:06 cluster1 crmd[16343]: notice: process_lrm_event: LRM operation FirewallVMDisk_notify_0 (call=139, rc=0, cib-update=0, confirmed=true) ok
  427. Jul 2 21:33:06 cluster1 crmd[16343]: notice: te_rsc_command: Initiating action 40: start FirewallVM_start_0 on cluster1.verolengo.privatelan (local)
  428. Jul 2 21:33:06 cluster1 VirtualDomain(FirewallVM)[19953]: INFO: Domain name "firewall" saved to /var/run/resource-agents/VirtualDomain-FirewallVM.state.
  429. Jul 2 21:33:06 cluster1 auditd[1791]: Audit daemon rotating log files
  430. Jul 2 21:33:06 cluster1 kernel: device vnet0 entered promiscuous mode
  431. Jul 2 21:33:06 cluster1 kernel: lan: port 2(vnet0) entering forwarding state
  432.  
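The /var/log/messages excerpt above shows DRBD invoking its fence-peer helper ("helper command: /sbin/drbdadm fence-peer firewall_vm"), which runs rhcs_fence and, through fence_pcmk / stonith_admin, asks Pacemaker to fence the peer. A minimal sketch of the DRBD resource options that would produce this call path follows; the helper path and the omission of the device/disk/address statements are assumptions, since they are not visible in the logs:

    resource firewall_vm {
        disk {
            fencing resource-and-stonith;          # suspend I/O and require the peer to be fenced before promotion
        }
        handlers {
            fence-peer "/usr/lib/drbd/rhcs_fence"; # assumed helper path; matches the rhcs_fence lines above
        }
        # device, disk, address and net statements omitted (not shown in the logs)
    }

With "fencing resource-and-stonith", DRBD waits for the fence-peer helper to return before allowing promotion, which is consistent with the FirewallVMDisk_promote_0 timeout at 21:30:15 above: the promote operation expires after 20000 ms while the helper is still waiting on the pending STONITH request.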
  433. /var/log/cluster/fenced.log:
  434.  
  435. Jul 02 21:32:35 fenced cluster node 2 removed seq 12
  436. Jul 02 21:32:35 fenced fenced:daemon conf 1 0 1 memb 1 join left 2
  437. Jul 02 21:32:35 fenced fenced:daemon ring 1:12 1 memb 1
  438. Jul 02 21:32:35 fenced fenced:default conf 1 0 1 memb 1 join left 2
  439. Jul 02 21:32:35 fenced add_change cg 3 remove nodeid 2 reason 3
  440. Jul 02 21:32:35 fenced add_change cg 3 m 1 j 0 r 1 f 1
  441. Jul 02 21:32:35 fenced add_victims node 2
  442. Jul 02 21:32:35 fenced check_ringid cluster 12 cpg 1:8
  443. Jul 02 21:32:35 fenced fenced:default ring 1:12 1 memb 1
  444. Jul 02 21:32:35 fenced check_ringid done cluster 12 cpg 1:12
  445. Jul 02 21:32:35 fenced check_quorum done
  446. Jul 02 21:32:35 fenced send_start 1:3 flags 2 started 2 m 1 j 0 r 1 f 1
  447. Jul 02 21:32:35 fenced receive_start 1:3 len 152
  448. Jul 02 21:32:35 fenced match_change 1:3 matches cg 3
  449. Jul 02 21:32:35 fenced wait_messages cg 3 got all 1
  450. Jul 02 21:32:35 fenced set_master from 1 to complete node 1
  451. Jul 02 21:32:35 fenced delay post_fail_delay 30 quorate_from_last_update 0
  452. Jul 02 21:33:05 fenced delay of 30s leaves 1 victims
  453. Jul 02 21:33:05 fenced cluster2.verolengo.privatelan not a cluster member after 30 sec post_fail_delay
  454. Jul 02 21:33:05 fenced fencing node cluster2.verolengo.privatelan
  455. Jul 02 21:33:10 fenced fence cluster2.verolengo.privatelan dev 0.0 agent fence_pcmk result: success
  456. Jul 02 21:33:10 fenced fence cluster2.verolengo.privatelan success
  457. Jul 02 21:33:10 fenced send_victim_done cg 3 flags 2 victim nodeid 2
  458. Jul 02 21:33:10 fenced send_complete 1:3 flags 2 started 2 m 1 j 0 r 1 f 1
  459. Jul 02 21:33:10 fenced receive_victim_done 1:3 flags 2 len 80
  460. Jul 02 21:33:10 fenced receive_victim_done 1:3 remove victim 2 time 1404329590 how 1
  461. Jul 02 21:33:10 fenced receive_complete 1:3 len 152
  462. Jul 02 21:33:10 fenced client connection 3 fd 18
  463. Jul 02 21:33:10 fenced client connection 5 fd 19
  464. Jul 02 21:33:10 fenced send_external victim nodeid 2
  465. Jul 02 21:33:10 fenced client connection 3 fd 18
  466. Jul 02 21:33:10 fenced receive_external from 1 len 40 victim nodeid 2
  467. Jul 02 21:33:10 fenced send_external victim nodeid 2
  468. Jul 02 21:33:10 fenced client connection 5 fd 19
  469. Jul 02 21:33:10 fenced send_external victim nodeid 2
  470. Jul 02 21:33:10 fenced receive_external from 1 len 40 victim nodeid 2
  471. Jul 02 21:33:10 fenced receive_external from 1 len 40 victim nodeid 2
  472. Jul 02 21:33:10 fenced send_external victim nodeid 2
  473. Jul 02 21:33:10 fenced receive_external from 1 len 40 victim nodeid 2
  474. Jul 02 21:54:43 fenced cluster node 2 added seq 16
  475. Jul 02 21:54:43 fenced fenced:daemon conf 2 1 0 memb 1 2 join 2 left
  476. Jul 02 21:54:43 fenced cpg_mcast_joined retried 1 protocol
  477. Jul 02 21:54:43 fenced fenced:daemon ring 1:16 2 memb 1 2
  478. Jul 02 21:54:43 fenced fenced:default ring 1:16 2 memb 1 2
  479. Jul 02 21:54:43 fenced receive_protocol from 2 max 1.1.1.0 run 1.1.1.0
  480. Jul 02 21:54:43 fenced daemon node 2 max 0.0.0.0 run 0.0.0.0
  481. Jul 02 21:54:43 fenced daemon node 2 join 1404330883 left 1404329555 local quorum 1404318281
  483. Jul 02 21:54:43 fenced receive_protocol from 1 max 1.1.1.0 run 1.1.1.1
  484. Jul 02 21:54:43 fenced daemon node 1 max 1.1.1.0 run 1.1.1.0
  485. Jul 02 21:54:43 fenced daemon node 1 join 1404318281 left 0 local quorum 1404318281
  487. Jul 02 21:54:43 fenced fenced:default conf 2 1 0 memb 1 2 join 2 left
  488. Jul 02 21:54:43 fenced add_change cg 4 joined nodeid 2
  489. Jul 02 21:54:43 fenced add_change cg 4 m 2 j 1 r 0 f 0
  490. Jul 02 21:54:43 fenced check_ringid done cluster 16 cpg 1:16
  491. Jul 02 21:54:43 fenced check_quorum done
  492. Jul 02 21:54:43 fenced send_start 1:4 flags 2 started 3 m 2 j 1 r 0 f 0
  493. Jul 02 21:54:43 fenced receive_start 2:1 len 152
  494. Jul 02 21:54:43 fenced match_change 2:1 matches cg 4
  495. Jul 02 21:54:43 fenced wait_messages cg 4 need 1 of 2
  496. Jul 02 21:54:43 fenced receive_start 1:4 len 152
  497. Jul 02 21:54:43 fenced match_change 1:4 matches cg 4
  498. Jul 02 21:54:43 fenced wait_messages cg 4 got all 2
  499. Jul 02 21:54:43 fenced set_master from 1 to complete node 1
  500. Jul 02 21:54:43 fenced send_complete 1:4 flags 2 started 3 m 2 j 1 r 0 f 0
  501. Jul 02 21:54:43 fenced receive_complete 1:4 len 152
  502.  
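The 30-second grace period and the hand-off to Pacemaker seen in this excerpt ("delay post_fail_delay 30", "agent fence_pcmk result: success") come from the cman side of the stack. A minimal sketch of how that is usually wired up with ccs follows; the cluster name is a placeholder, and only the settings actually visible in the log (post_fail_delay=30, fence_pcmk redirection, node ids 1 and 2) are taken from it.

# sketch only -- cluster name is a placeholder; node names/ids and the
# post_fail_delay / fence_pcmk redirection mirror what fenced.log reports
ccs -f /etc/cluster/cluster.conf --createcluster verolengo
ccs -f /etc/cluster/cluster.conf --addnode cluster1.verolengo.privatelan --nodeid 1
ccs -f /etc/cluster/cluster.conf --addnode cluster2.verolengo.privatelan --nodeid 2
ccs -f /etc/cluster/cluster.conf --addfencedev pcmk agent=fence_pcmk
ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect cluster1.verolengo.privatelan
ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect cluster2.verolengo.privatelan
ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk cluster1.verolengo.privatelan pcmk-redirect port=cluster1.verolengo.privatelan
ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk cluster2.verolengo.privatelan pcmk-redirect port=cluster2.verolengo.privatelan
ccs -f /etc/cluster/cluster.conf --setfencedaemon post_fail_delay=30
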
  503. /var/log/cluster/corosync.log:
  504.  
  505. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7437 id=7f2061a5-3caa-4744-a9b0-fc2533bee520
  506. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7437-14)
  507. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7437]
  508. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  509. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  510. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  511. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_replace operation for section 'all' to master (origin=local/cibadmin/2)
  512. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: crm_uptime: Current CPU usage is: 0s, 217966us
  513. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: crm_compare_age: Loose: 0.217966 vs 0.384941 (usec)
  514. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: do_election_count_vote: Election 7 (owner: cluster2.verolengo.privatelan) lost: vote from cluster2.verolengo.privatelan (Uptime)
  515. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: update_dc: Unset DC. Was cluster2.verolengo.privatelan
  516. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_check: Ignore election check: we not in an election
  517. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_PENDING: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  518. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
  519. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  520. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: crm_timer_start: Started Election Trigger (I_DC_TIMEOUT:20000ms), src=43
  521. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: apply_xml_diff: Digest mis-match: expected b42d1e4a9ade0eb56291c5740d365ab5, calculated 35caf72644b4a1832f5814efb7073232
  522. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: warning: cib_process_diff: Diff 0.14.5 -> 0.21.1 from cluster2.verolengo.privatelan not applied to 0.14.5: Failed application of an update diff
  523. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_server_process_diff: Requesting re-sync from peer
  524. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/34, version=0.14.5)
  525. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7437-14)
  526. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7437-14) state:2
  527. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
  528. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7437-14-header
  529. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7437-14-header
  530. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7437-14-header
  531. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: notice: cib_server_process_diff: Not applying diff 0.21.1 -> 0.21.2 (sync in progress)
  532. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Digest matched on replace from cluster2.verolengo.privatelan: 05e4243a540acefb01413901a436766c
  533. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Replaced 0.14.5 with 0.21.2 from cluster2.verolengo.privatelan
  534. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_replace op
  535. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_replace_notify: Replaced: 0.14.5 -> 0.21.2 from cluster2.verolengo.privatelan
  536. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=cluster2.verolengo.privatelan/cluster1.verolengo.privatelan/(null), version=0.21.2)
  537. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
  538. Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: info: do_cib_replaced: Updating all attributes after cib_refresh_notify event
  539. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.14.5
  540. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.2 05e4243a540acefb01413901a436766c
  541. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib epoch="14" num_updates="5" admin_epoch="0">
  542. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
  543. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_update_resource" id="cluster2.verolengo.privatelan"/>
  544. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_update_resource" id="cluster1.verolengo.privatelan"/>
  545. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
  546. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
  547. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:34 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="2" update-client="cluster1.verolengo.privatelan" update-origin="cluster2.verolengo.privatelan" validate-with="pacemaker-1.2">
  548. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <configuration>
  549. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <resources>
  550. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <master id="FirewallVMDiskClone">
  551. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <primitive class="ocf" id="FirewallVMDisk" provider="linbit" type="drbd">
  552. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <instance_attributes id="FirewallVMDisk-instance_attributes">
  553. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDisk-instance_attributes-drbd_resource" name="drbd_resource" value="firewall_vm"/>
  554. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </instance_attributes>
  555. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <operations>
  556. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <op id="FirewallVMDisk-monitor-interval-60s" interval="60s" name="monitor"/>
  557. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </operations>
  558. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </primitive>
  559. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <meta_attributes id="FirewallVMDiskClone-meta_attributes">
  560. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-master-max" name="master-max" value="1"/>
  561. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-master-node-max" name="master-node-max" value="1"/>
  562. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-clone-max" name="clone-max" value="2"/>
  563. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-clone-node-max" name="clone-node-max" value="1"/>
  564. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-notify" name="notify" value="true"/>
  565. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-target-role" name="target-role" value="Started"/>
  566. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVMDiskClone-meta_attributes-is-managed" name="is-managed" value="true"/>
  567. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </meta_attributes>
  568. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </master>
  569. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <primitive class="ocf" id="FirewallVM" provider="heartbeat" type="VirtualDomain">
  570. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <instance_attributes id="FirewallVM-instance_attributes">
  571. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-instance_attributes-config" name="config" value="/etc/libvirt/qemu/firewall.xml"/>
  572. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-instance_attributes-migration_transport" name="migration_transport" value="tcp"/>
  573. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-instance_attributes-migration_network_suffix" name="migration_network_suffix" value="-10g"/>
  574. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-instance_attributes-hypervisor" name="hypervisor" value="qemu:///system"/>
  575. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </instance_attributes>
  576. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <operations>
  577. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <op id="FirewallVM-monitor-interval-60s" interval="60s" name="monitor" timeout="120s"/>
  578. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </operations>
  579. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <meta_attributes id="FirewallVM-meta_attributes">
  580. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-meta_attributes-allow-migrate" name="allow-migrate" value="false"/>
  581. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-meta_attributes-target-role" name="target-role" value="Started"/>
  582. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-meta_attributes-is-managed" name="is-managed" value="true"/>
  583. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </meta_attributes>
  584. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </primitive>
  585. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </resources>
  586. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <constraints>
  587. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <rsc_colocation id="colocation-FirewallVM-FirewallVMDiskClone-INFINITY" rsc="FirewallVM" score="INFINITY" with-rsc="FirewallVMDiskClone" with-rsc-role="Master"/>
  588. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <rsc_order first="FirewallVMDiskClone" first-action="promote" id="order-FirewallVMDiskClone-FirewallVM-mandatory" then="FirewallVM" then-action="start"/>
  589. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <rsc_location id="location-FirewallVM" rsc="FirewallVM">
  590. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <rule id="location-FirewallVM-rule" role="master" score="50">
  591. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <expression attribute="#uname" id="location-FirewallVM-rule-expr" operation="eq" value="cluster2.verolengo.privatelan"/>
  592. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </rule>
  593. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </rsc_location>
  594. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </constraints>
  595. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </configuration>
  596. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
  597. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_cib_replaced" crmd="online" expected="member" id="cluster2.verolengo.privatelan" in_ccm="true" join="member" uname="cluster2.verolengo.privatelan"/>
  598. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_cib_replaced" crmd="online" expected="member" id="cluster1.verolengo.privatelan" in_ccm="true" join="member" uname="cluster1.verolengo.privatelan"/>
  599. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
  600. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
  601. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: apply_xml_diff: Digest mis-match: expected 05e4243a540acefb01413901a436766c, calculated 7ecb35e3adfb1dfe6e0b89973326c17d
  602. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: warning: cib_process_diff: Diff 0.14.5 -> 0.21.2 from local not applied to 0.14.5: Failed application of an update diff
  603. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: cib_apply_patch_event: Update didn't apply: Application of an update diff failed (-206) 0xc8f6a0
  604. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  605. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: handle_request: Raising I_JOIN_OFFER: join-6
  606. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
  607. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: update_dc: Set DC to cluster2.verolengo.privatelan (3.0.7)
  608. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
  609. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/4, version=0.21.2)
  610. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/35, version=0.21.2)
  611. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: join_query_callback: Respond to join offer join-6
  612. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: join_query_callback: Acknowledging cluster2.verolengo.privatelan as our DC
  613. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown'] does not exist
  614. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown']: No such device or address (rc=-6, origin=local/attrd/146, version=0.21.2)
  615. Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for shutdown=(null) passed
  616. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2'] does not exist
  617. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/147, version=0.21.2)
  618. Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster2=(null) passed
  619. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate'] does not exist
  620. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate']: No such device or address (rc=-6, origin=local/attrd/148, version=0.21.2)
  621. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: update_cib_stonith_devices: Updating device list from the cib: new resource
  622. Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for terminate=(null) passed
  623. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1'] does not exist
  624. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/149, version=0.21.2)
  625. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: STONITH timeout: 60000
  626. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: STONITH of failed nodes is enabled
  627. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Stop all active resources: false
  628. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
  629. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Default stickiness: 100
  630. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: unpack_config: On loss of CCM Quorum: Ignore
  631. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
  632. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_domains: Unpacking domains
  633. Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster1=(null) passed
  634. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1'] does not exist
  635. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/150, version=0.21.2)
  636. Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster1=(null) passed
  637. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-18.raw
  638. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
  639. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: native_rsc_location: Constraint (location-FirewallVM-rule) is not active (role : Master vs. Unknown)
  640. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device pdu1 is allowed on cluster1.verolengo.privatelan: score=0
  641. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
  642. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/151, version=0.21.2)
  643. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: build_port_aliases: Adding alias '6,cluster2.verolengo.privatelan'='7'
  644. Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
  645. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/152)
  646. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2'] does not exist
  647. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/153, version=0.21.2)
  648. Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster2=(null) passed
  649. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.21.0 of the CIB to disk (digest: 128e3a63756a46bf854111d9cd00dfa0)
  650. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_apc (target=(null))
  651. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
  652. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 128e3a63756a46bf854111d9cd00dfa0 to disk
  653. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.IolctF (digest: /var/lib/pacemaker/cib/cib.IFvyyX)
  654. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
  655. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.IolctF
  656. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Digest matched on replace from cluster2.verolengo.privatelan: 05e4243a540acefb01413901a436766c
  657. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Replaced 0.21.2 with 0.21.2 from cluster2.verolengo.privatelan
  658. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_replace op
  659. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=cluster2.verolengo.privatelan/crmd/192, version=0.21.2)
  660. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: handle_request: Raising I_JOIN_RESULT: join-6
  661. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_PENDING cause=C_HA_MESSAGE origin=route_message ]
  662. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_finalize_respond: Confirming join join-6: join_ack_nack
  663. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster1 after monitor op complete (interval=0)
  664. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster2 after start op complete (interval=0)
  665. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
  666. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: forking
  667. Jul 02 21:29:34 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: sending args
  668. Jul 02 21:29:34 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 152 for probe_complete=true passed
  669. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-19.raw
  670. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
  671. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.21.0 of the CIB to disk (digest: ae291173e3889308a47af0b4e483e71e)
  672. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest ae291173e3889308a47af0b4e483e71e to disk
  673. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.MfJ2tF (digest: /var/lib/pacemaker/cib/cib.4MLfAX)
  674. Jul 02 21:29:34 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.MfJ2tF
  675. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: result = 0
  676. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
  677. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_device_remove: Removed 'ilocluster1' from the device list (2 active devices)
  678. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device ilocluster1 is allowed on cluster1.verolengo.privatelan: score=0
  679. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
  680. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
  681. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
  682. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: result = 0
  683. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: stonith_device_register: Added 'ilocluster1' to the device list (3 active devices)
  684. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device ilocluster2 is allowed on cluster1.verolengo.privatelan: score=0
  685. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
  686. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
  687. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
  688. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: result = 0
  689. Jul 02 21:29:34 [16339] cluster1.verolengo.privatelan stonith-ng: notice: stonith_device_register: Device 'ilocluster2' already existed in device list (3 active devices)
  690. Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: result = 0
  691. Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster2 after monitor op complete (interval=60000)
  692. Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource pdu1 after start op complete (interval=0)
  693. Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: info: stonith_action_create: Initiating action metadata for agent fence_apc (target=(null))
  694. Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: forking
  695. Jul 02 21:29:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: sending args
  696. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: apply_xml_diff: Digest mis-match: expected c907b826d5889d153c0bf9c87584e575, calculated ebb1cd7477ccf2125b2b70c0a4d4c7c4
  697. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: warning: cib_process_diff: Diff 0.21.2 -> 0.21.3 from cluster2.verolengo.privatelan not applied to 0.21.2: Failed application of an update diff
  698. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_server_process_diff: Requesting re-sync from peer
  699. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: notice: cib_server_process_diff: Not applying diff 0.21.3 -> 0.21.4 (sync in progress)
  700. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Digest matched on replace from cluster2.verolengo.privatelan: 5ff8babd23b657f0fc9c197dd9785eee
  701. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Replaced 0.21.2 with 0.21.4 from cluster2.verolengo.privatelan
  702. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_replace op
  703. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_replace_notify: Replaced: 0.21.2 -> 0.21.4 from cluster2.verolengo.privatelan
  704. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=cluster2.verolengo.privatelan/cluster1.verolengo.privatelan/(null), version=0.21.4)
  705. Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: info: do_cib_replaced: Updating all attributes after cib_refresh_notify event
  706. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown'] does not exist
  707. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown']: No such device or address (rc=-6, origin=local/attrd/154, version=0.21.4)
  708. Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for shutdown=(null) passed
  709. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.2
  710. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.4 5ff8babd23b657f0fc9c197dd9785eee
  711. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="2">
  712. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
  713. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_cib_replaced" id="cluster2.verolengo.privatelan"/>
  714. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
  715. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
  716. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="4" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
  717. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
  718. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_lrm_query_internal" crmd="online" expected="member" id="cluster2.verolengo.privatelan" in_ccm="true" join="member" uname="cluster2.verolengo.privatelan"/>
  719. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
  720. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
  721. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2'] does not exist
  722. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/155, version=0.21.4)
  723. Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster2=(null) passed
  724. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate'] does not exist
  725. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate']: No such device or address (rc=-6, origin=local/attrd/156, version=0.21.4)
  726. Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for terminate=(null) passed
  727. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1'] does not exist
  728. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/157, version=0.21.4)
  729. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: apply_xml_diff: Digest mis-match: expected 5ff8babd23b657f0fc9c197dd9785eee, calculated b016b615a6579482c48b722d79c29a94
  730. Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster1=(null) passed
  731. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: warning: cib_process_diff: Diff 0.21.2 -> 0.21.4 from local not applied to 0.21.2: Failed application of an update diff
  732. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: cib_apply_patch_event: Update didn't apply: Application of an update diff failed (-206) 0xbff710
  733. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1'] does not exist
  734. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/158, version=0.21.4)
  735. Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster1=(null) passed
  736. Jul 02 21:29:35 [16339] cluster1.verolengo.privatelan stonith-ng: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  737. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
  738. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/159, version=0.21.4)
  739. Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
  740. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.21.4)
  741. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/160)
  742. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-20.raw
  743. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
  744. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2'] does not exist
  745. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/161, version=0.21.4)
  746. Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster2=(null) passed
  747. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.21.0 of the CIB to disk (digest: 5bc617c273b258d284f9094db00da144)
  748. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 5bc617c273b258d284f9094db00da144 to disk
  749. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.9ykBDH (digest: /var/lib/pacemaker/cib/cib.Y4pmT1)
  750. Jul 02 21:29:35 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.9ykBDH
  751. Jul 02 21:29:35 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 160 for probe_complete=true passed
  752. Jul 02 21:29:36 corosync [CMAN ] daemon: read 20 bytes from fd 42
  753. Jul 02 21:29:36 corosync [CMAN ] daemon: client command is 92
  754. Jul 02 21:29:36 corosync [CMAN ] daemon: About to process command
  755. Jul 02 21:29:36 corosync [CMAN ] memb: command to process is 92
  756. Jul 02 21:29:36 corosync [CMAN ] memb: get_extrainfo: allocated new buffer
  757. Jul 02 21:29:36 corosync [CMAN ] memb: command return code is 0
  758. Jul 02 21:29:36 corosync [CMAN ] daemon: Returning command data. length = 576
  759. Jul 02 21:29:36 corosync [CMAN ] daemon: sending reply 40000092 to fd 42
  760. Jul 02 21:29:36 corosync [CMAN ] daemon: read 0 bytes from fd 42
  761. Jul 02 21:29:36 corosync [CMAN ] daemon: Freed 0 queued messages
  762. Jul 02 21:29:36 corosync [CMAN ] daemon: read 20 bytes from fd 42
  763. Jul 02 21:29:36 corosync [CMAN ] daemon: client command is 92
  764. Jul 02 21:29:36 corosync [CMAN ] daemon: About to process command
  765. Jul 02 21:29:36 corosync [CMAN ] memb: command to process is 92
  766. Jul 02 21:29:36 corosync [CMAN ] memb: get_extrainfo: allocated new buffer
  767. Jul 02 21:29:36 corosync [CMAN ] memb: command return code is 0
  768. Jul 02 21:29:36 corosync [CMAN ] daemon: Returning command data. length = 576
  769. Jul 02 21:29:36 corosync [CMAN ] daemon: sending reply 40000092 to fd 42
  770. Jul 02 21:29:36 corosync [CMAN ] daemon: read 0 bytes from fd 42
  771. Jul 02 21:29:36 corosync [CMAN ] daemon: Freed 0 queued messages
  772. Jul 02 21:29:36 corosync [CMAN ] daemon: read 20 bytes from fd 42
  773. Jul 02 21:29:36 corosync [CMAN ] daemon: client command is 5
  774. Jul 02 21:29:36 corosync [CMAN ] daemon: About to process command
  775. Jul 02 21:29:36 corosync [CMAN ] memb: command to process is 5
  776. Jul 02 21:29:36 corosync [CMAN ] daemon: Returning command data. length = 0
  777. Jul 02 21:29:36 corosync [CMAN ] daemon: sending reply 40000005 to fd 42
  778. Jul 02 21:29:36 corosync [CMAN ] daemon: read 0 bytes from fd 42
  779. Jul 02 21:29:36 corosync [CMAN ] daemon: Freed 0 queued messages
  780. Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: result = 0
  781. Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource pdu1 after monitor op complete (interval=60000)
  782. Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_finalize_respond: join-6: Join complete. Sending local LRM status to cluster2.verolengo.privatelan
  783. Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: attrd_update_delegate: Sent update: (null)=(null) for localhost
  784. Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_NOT_DC: [ state=S_PENDING cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
  785. Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
  786. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  787. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  788. Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
  789. Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
  790. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
  791. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/162, version=0.21.4)
  792. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
  793. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/163)
  794. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: apply_xml_diff: Digest mis-match: expected 8ba4bd1fcf884c4d284d1e0c5cfeaf8a, calculated eae8139bf51884dc2056be813b817c73
  795. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: warning: cib_process_diff: Diff 0.21.4 -> 0.21.5 from cluster2.verolengo.privatelan not applied to 0.21.4: Failed application of an update diff
  796. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_server_process_diff: Requesting re-sync from peer
  797. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 163 for probe_complete=true passed
  798. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: notice: cib_server_process_diff: Not applying diff 0.21.5 -> 0.21.6 (sync in progress)
  799. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Digest matched on replace from cluster2.verolengo.privatelan: 95ac07c19ee48d454e3be3a2037a97d2
  800. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_replace: Replaced 0.21.4 with 0.21.6 from cluster2.verolengo.privatelan
  801. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_replace op
  802. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_replace_notify: Replaced: 0.21.4 -> 0.21.6 from cluster2.verolengo.privatelan
  803. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=cluster2.verolengo.privatelan/cluster1.verolengo.privatelan/(null), version=0.21.6)
  804. Jul 02 21:29:36 [16343] cluster1.verolengo.privatelan crmd: debug: do_cib_replaced: Updating the CIB after a replace: DC=false
  805. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: info: do_cib_replaced: Updating all attributes after cib_refresh_notify event
  806. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.4
  807. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.6 95ac07c19ee48d454e3be3a2037a97d2
  808. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="4">
  809. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
  810. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_cib_replaced" id="cluster1.verolengo.privatelan"/>
  811. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
  812. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
  813. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="6" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
  814. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
  815. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_lrm_query_internal" crmd="online" expected="member" id="cluster1.verolengo.privatelan" in_ccm="true" join="member" uname="cluster1.verolengo.privatelan"/>
  816. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
  817. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
  818. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: info: apply_xml_diff: Digest mis-match: expected 95ac07c19ee48d454e3be3a2037a97d2, calculated 0ab4e21fca19df035231f64325d55068
  819. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: warning: cib_process_diff: Diff 0.21.4 -> 0.21.6 from local not applied to 0.21.4: Failed application of an update diff
  820. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: cib_apply_patch_event: Update didn't apply: Application of an update diff failed (-206) 0xc692d0
  821. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  822. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=cluster2.verolengo.privatelan/crmd/200, version=0.21.7)
  823. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.21.7)
  824. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown'] does not exist
  825. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='shutdown']: No such device or address (rc=-6, origin=local/attrd/164, version=0.21.7)
  826. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for shutdown=(null) passed
  827. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2'] does not exist
  828. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/165, version=0.21.7)
  829. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster2=(null) passed
  830. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate'] does not exist
  831. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='terminate']: No such device or address (rc=-6, origin=local/attrd/166, version=0.21.7)
  832. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for terminate=(null) passed
  833. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1'] does not exist
  834. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/167, version=0.21.7)
  835. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.6
  836. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.7 aed76545c3632e0c41161416395800bf
  837. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="6">
  838. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
  839. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_lrm_query_internal" id="cluster2.verolengo.privatelan"/>
  840. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_lrm_query_internal" id="cluster1.verolengo.privatelan"/>
  841. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
  842. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
  843. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster1=(null) passed
  844. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="7" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
  845. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
  846. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_state_transition" crmd="online" expected="member" id="cluster2.verolengo.privatelan" in_ccm="true" join="member" uname="cluster2.verolengo.privatelan"/>
  847. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <node_state crm-debug-origin="do_state_transition" crmd="online" expected="member" id="cluster1.verolengo.privatelan" in_ccm="true" join="member" uname="cluster1.verolengo.privatelan"/>
  848. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
  849. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
  850. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1'] does not exist
  851. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_process_diff: Diff 0.21.6 -> 0.21.7 from local not applied to 0.21.7: current "num_updates" is greater than required
  852. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: debug: cib_apply_patch_event: Update didn't apply: Application of an update diff failed (-206) (nil)
  853. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='last-failure-ilocluster1']: No such device or address (rc=-6, origin=local/attrd/168, version=0.21.7)
  854. Jul 02 21:29:36 [16339] cluster1.verolengo.privatelan stonith-ng: notice: update_cib_cache_cb: [cib_diff_notify] Patch aborted: Application of an update diff failed (-206)
  855. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for last-failure-ilocluster1=(null) passed
  856. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/7, version=0.21.7)
  857. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-21.raw
  858. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
  859. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
  860. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/169, version=0.21.7)
  861. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
  862. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/170)
  863. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2'] does not exist
  865. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.21.0 of the CIB to disk (digest: 53c9f22b43e74eb8824fa08c90f1c438)
  867. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='fail-count-ilocluster2']: No such device or address (rc=-6, origin=local/attrd/171, version=0.21.7)
  870. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update -6 for fail-count-ilocluster2=(null) passed
  871. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 53c9f22b43e74eb8824fa08c90f1c438 to disk
  872. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.6cnWLJ (digest: /var/lib/pacemaker/cib/cib.aew295)
  874. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair)
  876. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='cluster1.verolengo.privatelan']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/172, version=0.21.7)
  878. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: find_nvpair_attr_delegate: Match <nvpair id="status-cluster1.verolengo.privatelan-probe_complete" name="probe_complete" value="true"/>
  880. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/173)
  882. Jul 02 21:29:36 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.6cnWLJ
  883. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 170 for probe_complete=true passed
  884. Jul 02 21:29:36 [16341] cluster1.verolengo.privatelan attrd: debug: attrd_cib_callback: Update 173 for probe_complete=true passed
  885. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_get_rsc_info: Resource 'FirewallVMDisk' not found (3 active resources)
  886. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
  888. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_get_rsc_info: Resource 'FirewallVMDisk:1' not found (3 active resources)
  889. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
  891. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_rsc_register: Added 'FirewallVMDisk' to the rsc list (4 active resources)
  892. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_register operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=1, notify=1, exit=4201792
  894. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
  896. Jul 02 21:29:37 [16343] cluster1.verolengo.privatelan crmd: info: do_lrm_rsc_op: Performing key=7:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf op=FirewallVMDisk_monitor_0
  898. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_exec operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=59, reply=1, notify=0, exit=4201792
  900. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: log_execute: executing - rsc:FirewallVMDisk action:monitor call_id:59
  901. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_get_rsc_info: Resource 'FirewallVM' not found (4 active resources)
  902. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
  904. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: info: process_lrmd_rsc_register: Added 'FirewallVM' to the rsc list (5 active resources)
  905. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_register operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=1, notify=1, exit=4201792
  906. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
  907. Jul 02 21:29:37 [16343] cluster1.verolengo.privatelan crmd: info: do_lrm_rsc_op: Performing key=8:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf op=FirewallVM_monitor_0
  908. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_exec operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=63, reply=1, notify=0, exit=4201792
  909. Jul 02 21:29:37 [16340] cluster1.verolengo.privatelan lrmd: debug: log_execute: executing - rsc:FirewallVM action:monitor call_id:63
  910. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVMDisk_monitor_0:7447 - exited with rc=7
  911. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVMDisk_monitor_0:7447:stderr [ -- empty -- ]
  912. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVMDisk_monitor_0:7447:stdout [ -- empty -- ]
  913. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: log_finished: finished - rsc:FirewallVMDisk action:monitor call_id:59 pid:7447 exit-code:7 exec-time:107ms queue-time:0ms
  914. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
  915. Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: do_update_resource: Updating resource FirewallVMDisk after monitor op complete (interval=0)
  916. Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: info: services_os_action_execute: Managed drbd_meta-data_0 process 7482 exited with rc=0
  917. Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: notice: process_lrm_event: LRM operation FirewallVMDisk_monitor_0 (call=59, rc=7, cib-update=36, confirmed=true) not running
  918. Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: debug: update_history_cache: Updating history for 'FirewallVMDisk' with monitor op
  919. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/36)
  920. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=cluster2.verolengo.privatelan/crmd/36, version=0.21.8)
  921. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.7
  922. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.8 51eb1f7a384b4bdfc7122f1fa91340f7
  923. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="7">
  924. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
  925. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_state_transition" id="cluster1.verolengo.privatelan"/>
  926. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
  927. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
  928. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="8" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
  929. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
  930. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <node_state crm-debug-origin="do_update_resource" crmd="online" expected="member" id="cluster1.verolengo.privatelan" in_ccm="true" join="member" uname="cluster1.verolengo.privatelan">
  931. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <lrm id="cluster1.verolengo.privatelan">
  932. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <lrm_resources>
  933. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <lrm_resource id="FirewallVMDisk" type="drbd" class="ocf" provider="linbit">
  934. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <lrm_rsc_op id="FirewallVMDisk_last_0" operation_key="FirewallVMDisk_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="7:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf" transition-magic="0:7;7:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf" call-id="59" rc-code="7" op-status="0" interval="0" last-run="1404329377" last-rc-change="1404329377" exec-time="1
  935. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </lrm_resource>
  936. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </lrm_resources>
  937. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </lrm>
  938. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </node_state>
  939. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
  940. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
  941. VirtualDomain(FirewallVM)[7448]: 2014/07/02_21:29:38 DEBUG: Virtual domain firewall is currently shut off.
  942. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=cluster2.verolengo.privatelan/crmd/204, version=0.21.9)
  943. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.8
  944. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.21.9 6f2eb1952277bd4cf8fe08b23d60e602
  945. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <cib num_updates="8">
  946. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - <status>
  947. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <node_state crm-debug-origin="do_state_transition" id="cluster2.verolengo.privatelan"/>
  948. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </status>
  949. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: - </cib>
  950. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:33 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="21" have-quorum="1" num_updates="9" update-client="cibadmin" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2">
  951. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <status>
  952. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <node_state crm-debug-origin="do_update_resource" crmd="online" expected="member" id="cluster2.verolengo.privatelan" in_ccm="true" join="member" uname="cluster2.verolengo.privatelan">
  953. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <lrm id="cluster2.verolengo.privatelan">
  954. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <lrm_resources>
  955. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <lrm_resource id="FirewallVMDisk" type="drbd" class="ocf" provider="linbit">
  956. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <lrm_rsc_op id="FirewallVMDisk_last_0" operation_key="FirewallVMDisk_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="10:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf" transition-magic="0:7;10:40:7:198872dc-c6f8-4f02-ac5b-f38f4afe0bbf" call-id="53" rc-code="7" op-status="0" interval="0" last-run="1404329376" last-rc-change="1404329376" exec-time=
  957. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </lrm_resource>
  958. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </lrm_resources>
  959. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </lrm>
  960. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </node_state>
  961. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </status>
  962. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
  963. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7519 id=d8299391-583e-4188-9823-579b7cf339ca
  964. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7519-14)
  965. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7519]
  966. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  967. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  968. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  969. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  970. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  971. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  972. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signon_raw: Connection to CIB successful
  973. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.21.9)
  974. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH timeout: 60000
  975. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH of failed nodes is enabled
  976. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Stop all active resources: false
  977. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
  978. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Default stickiness: 100
  979. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: notice: unpack_config: On loss of CCM Quorum: Ignore
  980. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
  981. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: unpack_domains: Unpacking domains
  982. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster2.verolengo.privatelan is active
  983. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster2.verolengo.privatelan is online
  984. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster1.verolengo.privatelan is active
  985. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster1.verolengo.privatelan is online
  986. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster2.verolengo.privatelan to FirewallVMDisk:0
  987. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster1.verolengo.privatelan to FirewallVMDisk:0
  988. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: dump_resource_attr: Looking up cpu in FirewallVM
  989. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signoff: Signing out of the CIB Service
  990. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
  991. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7519-14-header
  992. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7519-14-header
  993. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7519-14)
  994. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7519-14-header
  995. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7519-14) state:2
  996. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
  997. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: warning: main: Error performing operation: No such device or address
  998. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7519-14-header
  999. Jul 02 21:29:38 [7519] cluster1.verolengo.privatelan crm_resource: info: crm_xml_cleanup: Cleaning up memory from libxml2
  1000. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7519-14-header
  1001. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7519-14-header
  1002. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7521 id=2eea3180-c533-4911-871b-7203ebbb7ea7
  1003. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7521-14)
  1004. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7521]
  1005. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1006. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1007. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1008. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1009. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1010. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1011. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signon_raw: Connection to CIB successful
  1012. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.21.9)
  1013. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH timeout: 60000
  1014. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH of failed nodes is enabled
  1015. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Stop all active resources: false
  1016. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
  1017. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Default stickiness: 100
  1018. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: notice: unpack_config: On loss of CCM Quorum: Ignore
  1019. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
  1020. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: unpack_domains: Unpacking domains
  1021. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster2.verolengo.privatelan is active
  1022. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster2.verolengo.privatelan is online
  1023. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster1.verolengo.privatelan is active
  1024. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster1.verolengo.privatelan is online
  1025. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster2.verolengo.privatelan to FirewallVMDisk:0
  1026. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster1.verolengo.privatelan to FirewallVMDisk:0
  1027. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/configuration/resources//*[@id="FirewallVM"]/utilization//nvpair[@name="cpu"] does not exist
  1028. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/configuration/resources//*[@id="FirewallVM"]/utilization//nvpair[@name="cpu"]: No such device or address (rc=-6, origin=local/crm_resource/3, version=0.21.9)
  1029. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
  1030. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/crm_resource/4, version=0.21.9)
  1031. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <primitive id="FirewallVM">
  1032. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <utilization id="FirewallVM-utilization">
  1033. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <nvpair id="FirewallVM-utilization-cpu" name="cpu" value="1"/>
  1034. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update </utilization>
  1035. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update </primitive>
  1036. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section resources to master (origin=local/crm_resource/5)
  1037. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
  1038. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section resources: OK (rc=0, origin=cluster2.verolengo.privatelan/crm_resource/5, version=0.22.1)
  1039. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: --- 0.21.9
  1040. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signoff: Signing out of the CIB Service
  1041. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: Diff: +++ 0.22.1 02f47d524d7d6e1940f81bbceb545514
  1042. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
  1043. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: -- <cib epoch="21" num_updates="9" admin_epoch="0"/>
  1044. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7521-14-header
  1045. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <cib admin_epoch="0" cib-last-written="Wed Jul 2 21:29:38 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="22" have-quorum="1" num_updates="1" update-client="crm_resource" update-origin="cluster2.verolengo.privatelan" validate-with="pacemaker-1.2">
  1046. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <configuration>
  1047. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <resources>
  1048. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7521-14-header
  1049. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + <primitive class="ocf" id="FirewallVM" provider="heartbeat" type="VirtualDomain">
  1050. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7521-14-header
  1051. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <utilization id="FirewallVM-utilization">
  1052. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ <nvpair id="FirewallVM-utilization-cpu" name="cpu" value="1"/>
  1053. Jul 02 21:29:38 [7521] cluster1.verolengo.privatelan crm_resource: info: crm_xml_cleanup: Cleaning up memory from libxml2
  1054. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: ++ </utilization>
  1055. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </primitive>
  1056. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7521-14)
  1057. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </resources>
  1058. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7521-14) state:2
  1059. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </configuration>
  1060. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: Config update: + </cib>
  1061. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
  1062. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7521-14-header
  1063. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7521-14-header
  1064. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7521-14-header
  1065. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: update_cib_stonith_devices: Updating device list from the cib: new resource
  1066. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: STONITH timeout: 60000
  1067. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: STONITH of failed nodes is enabled
  1068. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Stop all active resources: false
  1069. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
  1070. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Default stickiness: 100
  1071. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: notice: unpack_config: On loss of CCM Quorum: Ignore
  1072. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
  1073. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: unpack_domains: Unpacking domains
  1074. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: native_rsc_location: Constraint (location-FirewallVM-rule) is not active (role : Master vs. Unknown)
  1075. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-22.raw
  1076. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device pdu1 is allowed on cluster1.verolengo.privatelan: score=0
  1077. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
  1078. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: build_port_aliases: Adding alias '6,cluster2.verolengo.privatelan'='7'
  1079. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_apc (target=(null))
  1080. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
  1081. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
  1082. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.22.0 of the CIB to disk (digest: 984a57adfc11c3f9f0bfc89feab27302)
  1083. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 984a57adfc11c3f9f0bfc89feab27302 to disk
  1084. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.ZuNewM (digest: /var/lib/pacemaker/cib/cib.HGeHEb)
  1085. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.ZuNewM
  1086. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7529 id=3e2870c9-d691-4390-a960-a8554f85ea72
  1087. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7529-14)
  1088. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7529]
  1089. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1090. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1091. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1092. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1093. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1094. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1095. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signon_raw: Connection to CIB successful
  1096. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.22.1)
  1097. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH timeout: 60000
  1098. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH of failed nodes is enabled
  1099. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Stop all active resources: false
  1100. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
  1101. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Default stickiness: 100
  1102. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: notice: unpack_config: On loss of CCM Quorum: Ignore
  1103. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
  1104. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: unpack_domains: Unpacking domains
  1105. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster2.verolengo.privatelan is active
  1106. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster2.verolengo.privatelan is online
  1107. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster1.verolengo.privatelan is active
  1108. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster1.verolengo.privatelan is online
  1109. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster2.verolengo.privatelan to FirewallVMDisk:0
  1110. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster1.verolengo.privatelan to FirewallVMDisk:0
  1111. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: dump_resource_attr: Looking up hv_memory in FirewallVM
  1112. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signoff: Signing out of the CIB Service
  1113. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
  1114. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7529-14-header
  1115. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7529-14-header
  1116. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7529-14-header
  1117. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7529-14)
  1118. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7529-14) state:2
  1119. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: warning: main: Error performing operation: No such device or address
  1120. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
  1121. Jul 02 21:29:38 [7529] cluster1.verolengo.privatelan crm_resource: info: crm_xml_cleanup: Cleaning up memory from libxml2
  1122. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7529-14-header
  1123. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7529-14-header
  1124. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7529-14-header
  1125. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_new: Connecting 0x16d7890 for uid=0 gid=0 pid=7531 id=eb8a4d5e-3947-4b58-a051-619f4ae3d652
  1126. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: handle_new_connection: IPC credentials authenticated (16338-7531-14)
  1127. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_shm_connect: connecting to client [7531]
  1128. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1129. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1130. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1131. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1132. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1133. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_open_2: shm size:524301; real_size:528384; rb->word_size:132096
  1134. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signon_raw: Connection to CIB successful
  1135. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.22.1)
  1136. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: result = 0
  1137. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: notice: stonith_device_register: Device 'pdu1' already existed in device list (3 active devices)
  1138. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_device_remove: Removed 'ilocluster1' from the device list (2 active devices)
  1139. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: cib_device_update: Device ilocluster1 is allowed on cluster1.verolengo.privatelan: score=0
  1140. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
  1141. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: forking
  1142. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH timeout: 60000
  1143. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: STONITH of failed nodes is enabled
  1144. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Stop all active resources: false
  1145. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
  1146. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Default stickiness: 100
  1147. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: notice: unpack_config: On loss of CCM Quorum: Ignore
  1148. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
  1149. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: unpack_domains: Unpacking domains
  1150. Jul 02 21:29:38 [16339] cluster1.verolengo.privatelan stonith-ng: debug: internal_stonith_action_execute: sending args
  1151. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster2.verolengo.privatelan is active
  1152. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster2.verolengo.privatelan is online
  1153. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: determine_online_status_fencing: Node cluster1.verolengo.privatelan is active
  1154. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: determine_online_status: Node cluster1.verolengo.privatelan is online
  1155. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster2.verolengo.privatelan to FirewallVMDisk:0
  1156. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: find_anonymous_clone: Internally renamed FirewallVMDisk on cluster1.verolengo.privatelan to FirewallVMDisk:0
  1157. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: cib_query: //cib/configuration/resources//*[@id="FirewallVM"]/utilization//nvpair[@name="hv_memory"] does not exist
  1158. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/configuration/resources//*[@id="FirewallVM"]/utilization//nvpair[@name="hv_memory"]: No such device or address (rc=-6, origin=local/crm_resource/3, version=0.22.1)
  1159. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for /cib (/cib)
  1160. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/crm_resource/4, version=0.22.1)
  1161. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <primitive id="FirewallVM">
  1162. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <utilization id="FirewallVM-utilization">
  1163. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update <nvpair id="FirewallVM-utilization-hv_memory" name="hv_memory" value="1024"/>
  1164. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update </utilization>
  1165. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: set_resource_attr: Update </primitive>
  1166. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Forwarding cib_modify operation for section resources to master (origin=local/crm_resource/5)
  1167. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: activateCibXml: Triggering CIB write for cib_apply_diff op
  1168. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_apply_diff operation for section resources: OK (rc=0, origin=cluster2.verolengo.privatelan/crm_resource/5, version=0.23.1)
  1169. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: cib_native_signoff: Signing out of the CIB Service
  1170. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_ipcc_disconnect: qb_ipcc_disconnect()
  1171. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7531-14-header
  1172. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7531-14-header
  1173. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: debug: qb_rb_close: Closing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7531-14-header
  1174. Jul 02 21:29:38 [7531] cluster1.verolengo.privatelan crm_resource: info: crm_xml_cleanup: Cleaning up memory from libxml2
  1175. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_dispatch_connection_request: HUP conn (16338-7531-14)
  1176. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16338-7531-14) state:2
  1177. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: crm_client_destroy: Destroying 0 events
  1178. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-16338-7531-14-header
  1179. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-16338-7531-14-header
  1180. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-16338-7531-14-header
  1181. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVM_monitor_0:7448 - exited with rc=7
  1182. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVM_monitor_0:7448:stderr [ -- empty -- ]
  1183. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: operation_finished: FirewallVM_monitor_0:7448:stdout [ -- empty -- ]
  1184. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: log_finished: finished - rsc:FirewallVM action:monitor call_id:63 pid:7448 exit-code:7 exec-time:347ms queue-time:0ms
  1185. Jul 02 21:29:38 [16340] cluster1.verolengo.privatelan lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 4791ee58-b80b-4b8b-aba3-9c1bf565c6b3: rc=0, reply=0, notify=0, exit=4201792
  1186. Jul 02 21:29:38 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: do_update_resource: Updating resource FirewallVM after monitor op complete (interval=0)
  1187. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-23.raw
  1188. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Writing CIB to disk
  1189. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: write_cib_contents: Wrote version 0.23.0 of the CIB to disk (digest: 44fe45fef61c86411eb51e4314d1e883)
  1190. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Wrote digest 44fe45fef61c86411eb51e4314d1e883 to disk
  1191. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.jrhLEM (digest: /var/lib/pacemaker/cib/cib.eIQIVb)
  1192. Jul 02 21:29:38 [16338] cluster1.verolengo.privatelan cib: debug: write_cib_contents: Activating /var/lib/pacemaker/cib/cib.jrhLEM
  1193. Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1194. Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 90
  1195. Jul 02 21:29:38 corosync [CMAN ] daemon: About to process command
  1196. Jul 02 21:29:38 corosync [CMAN ] memb: command to process is 90
  1197. Jul 02 21:29:38 corosync [CMAN ] memb: command return code is 0
  1198. Jul 02 21:29:38 corosync [CMAN ] daemon: Returning command data. length = 440
  1199. Jul 02 21:29:38 corosync [CMAN ] daemon: sending reply 40000090 to fd 42
  1200. Jul 02 21:29:38 corosync [CMAN ] daemon: read 0 bytes from fd 42
  1201. Jul 02 21:29:38 corosync [CMAN ] daemon: Freed 0 queued messages
  1202. Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1203. Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 5
  1204. Jul 02 21:29:38 corosync [CMAN ] daemon: About to process command
  1205. Jul 02 21:29:38 corosync [CMAN ] memb: command to process is 5
  1206. Jul 02 21:29:38 corosync [CMAN ] daemon: Returning command data. length = 0
  1207. Jul 02 21:29:38 corosync [CMAN ] daemon: sending reply 40000005 to fd 42
  1208. Jul 02 21:29:38 corosync [CMAN ] daemon: read 0 bytes from fd 42
  1209. Jul 02 21:29:38 corosync [CMAN ] daemon: Freed 0 queued messages
  1210. Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1211. Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 7
  1212. Jul 02 21:29:38 corosync [CMAN ] daemon: About to process command
  1213. Jul 02 21:29:38 corosync [CMAN ] memb: command to process is 7
  1214. Jul 02 21:29:38 corosync [CMAN ] memb: get_all_members: retlen = 880
  1215. Jul 02 21:29:38 corosync [CMAN ] memb: command return code is 2
  1216. Jul 02 21:29:38 corosync [CMAN ] daemon: Returning command data. length = 880
  1217. Jul 02 21:29:38 corosync [CMAN ] daemon: sending reply 40000007 to fd 42
  1218. Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1219. Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 7
  1220. Jul 02 21:29:38 corosync [CMAN ] daemon: About to process command
  1221. Jul 02 21:29:38 corosync [CMAN ] memb: command to process is 7
  1222. Jul 02 21:29:38 corosync [CMAN ] memb: get_all_members: retlen = 880
  1223. Jul 02 21:29:38 corosync [CMAN ] memb: command return code is 2
  1224. Jul 02 21:29:38 corosync [CMAN ] daemon: Returning command data. length = 880
  1225. Jul 02 21:29:38 corosync [CMAN ] daemon: sending reply 40000007 to fd 42
  1226. Jul 02 21:29:38 corosync [CMAN ] daemon: read 0 bytes from fd 42
  1227. Jul 02 21:29:38 corosync [CMAN ] daemon: Freed 0 queued messages
  1228. Jul 02 21:29:38 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1229. Jul 02 21:29:38 corosync [CMAN ] daemon: client command is 91
  1230. ...
  1231. ...
  1232. Jul 02 21:32:35 corosync [TOTEM ] entering GATHER state from 0.
  1233. Jul 02 21:32:35 corosync [TOTEM ] Creating commit token because I am the rep.
  1234. Jul 02 21:32:35 corosync [TOTEM ] Saving state aru 4f4 high seq received 4f4
  1235. Jul 02 21:32:35 corosync [TOTEM ] Storing new sequence id for ring c
  1236. Jul 02 21:32:35 corosync [TOTEM ] entering COMMIT state.
  1237. Jul 02 21:32:35 corosync [TOTEM ] got commit token
  1238. Jul 02 21:32:35 corosync [TOTEM ] entering RECOVERY state.
  1239. Jul 02 21:32:35 corosync [TOTEM ] TRANS [0] member 172.16.100.1:
  1240. Jul 02 21:32:35 corosync [TOTEM ] position [0] member 172.16.100.1:
  1241. Jul 02 21:32:35 corosync [TOTEM ] previous ring seq 8 rep 172.16.100.1
  1242. Jul 02 21:32:35 corosync [TOTEM ] aru 4f4 high delivered 4f4 received flag 1
  1243. Jul 02 21:32:35 corosync [TOTEM ] Did not need to originate any messages in recovery.
  1244. Jul 02 21:32:35 corosync [TOTEM ] got commit token
  1245. Jul 02 21:32:35 corosync [TOTEM ] Sending initial ORF token
  1246. Jul 02 21:32:35 corosync [TOTEM ] got commit token
  1247. Jul 02 21:32:35 corosync [TOTEM ] Sending initial ORF token
  1248. Jul 02 21:32:35 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
  1249. Jul 02 21:32:35 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
  1250. Jul 02 21:32:35 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
  1251. Jul 02 21:32:35 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
  1252. Jul 02 21:32:35 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
  1253. Jul 02 21:32:35 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
  1254. Jul 02 21:32:35 corosync [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
  1255. Jul 02 21:32:35 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
  1256. Jul 02 21:32:35 corosync [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0
  1257. Jul 02 21:32:35 corosync [TOTEM ] Resetting old ring state
  1258. Jul 02 21:32:35 corosync [TOTEM ] recovery to regular 1-0
  1259. Jul 02 21:32:35 corosync [CMAN ] ais: confchg_fn called type = 1, seq=12
  1260. Jul 02 21:32:35 corosync [CMAN ] memb: del_ais_node 2
  1261. Jul 02 21:32:35 corosync [CMAN ] memb: del_ais_node cluster2.verolengo.privatelan, leave_reason=1
  1262. Jul 02 21:32:35 corosync [QUORUM] Members[1]: 1
  1263. Jul 02 21:32:35 corosync [QUORUM] sending quorum notification to (nil), length = 52
  1264. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 102 to fd 23
  1265. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 102 to fd 25
  1266. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 102 to fd 30
  1267. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 102 to fd 39
  1268. Jul 02 21:32:35 corosync [TOTEM ] waiting_trans_ack changed to 1
  1269. Jul 02 21:32:35 corosync [CMAN ] ais: confchg_fn called type = 0, seq=12
  1270. Jul 02 21:32:35 corosync [CMAN ] ais: last memb_count = 2, current = 1
  1271. Jul 02 21:32:35 corosync [CMAN ] memb: sending TRANSITION message. cluster_name = vclu
  1272. Jul 02 21:32:35 corosync [CMAN ] ais: comms send message 0x7fffb697d0c0 len = 65
  1273. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 103 to fd 39
  1274. Jul 02 21:32:35 corosync [SYNC ] This node is within the primary component and will provide service.
  1275. Jul 02 21:32:35 corosync [TOTEM ] entering OPERATIONAL state.
  1276. Jul 02 21:32:35 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
  1277. Jul 02 21:32:35 corosync [CMAN ] ais: deliver_fn source nodeid = 1, len=81, endian_conv=0
  1278. Jul 02 21:32:35 corosync [CMAN ] memb: Message on port 0 is 5
  1279. Jul 02 21:32:35 corosync [CMAN ] memb: got TRANSITION from node 1
  1280. Jul 02 21:32:35 corosync [CMAN ] memb: Got TRANSITION message. msg->flags=20, node->flags=20, first_trans=0
  1281. Jul 02 21:32:35 corosync [CMAN ] memb: add_ais_node ID=1, incarnation = 12
  1282. Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
  1283. Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
  1284. Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
  1285. Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
  1286. Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (dummy CLM service)
  1287. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 25
  1288. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
  1289. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1290. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
  1291. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1292. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
  1293. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 25
  1294. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 30
  1295. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
  1296. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1297. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
  1298. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1299. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
  1300. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 30
  1301. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 25
  1302. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 5
  1303. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1304. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 5
  1305. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 0
  1306. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000005 to fd 25
  1307. Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
  1308. Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
  1309. Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
  1310. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: pcmk_cpg_membership: Left[2.0] stonith-ng.2
  1311. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node cluster2.verolengo.privatelan[2] - corosync-cpg is now offline
  1312. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: st_peer_update_callback: Broadcasting our uname because of node 2
  1313. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: pcmk_cpg_membership: Left[2.0] cib.2
  1314. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node cluster2.verolengo.privatelan[2] - corosync-cpg is now offline
  1315. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: pcmk_cpg_membership: Member[2.0] cib.1
  1316. Jul 02 21:32:35 [16341] cluster1.verolengo.privatelan attrd: info: pcmk_cpg_membership: Left[2.0] attrd.2
  1317. Jul 02 21:32:35 [16341] cluster1.verolengo.privatelan attrd: info: crm_update_peer_proc: pcmk_cpg_membership: Node cluster2.verolengo.privatelan[2] - corosync-cpg is now offline
  1318. Jul 02 21:32:35 [16341] cluster1.verolengo.privatelan attrd: info: pcmk_cpg_membership: Member[2.0] attrd.1
  1319. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: cman_event_callback: Membership 12: quorum retained
  1320. Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
  1321. Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (dummy CLM service)
  1322. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: notice: crm_update_peer_state: cman_event_callback: Node cluster2.verolengo.privatelan[2] - state is now lost (was member)
  1323. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: peer_update_callback: cluster2.verolengo.privatelan is now lost (was member)
  1324. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: post_cache_update: Updated cache after membership event 12.
  1325. Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (dummy AMF service)
  1326. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 30
  1327. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: warning: reap_dead_nodes: Our DC node (cluster2.verolengo.privatelan) left the cluster
  1328. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
  1329. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA
  1330. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1331. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
  1332. Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
  1333. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
  1334. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
  1335. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 30
  1336. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 25
  1337. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
  1338. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1339. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
  1340. Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
  1341. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
  1342. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
  1343. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 25
  1344. Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
  1345. Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
  1346. Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
  1347. Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
  1348. Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (dummy AMF service)
  1349. Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (openais checkpoint service B.01.01)
  1350. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 39
  1351. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
  1352. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1353. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
  1354. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1355. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
  1356. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 39
  1357. Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
  1358. Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
  1359. Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
  1360. Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
  1361. Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (openais checkpoint service B.01.01)
  1362. Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (dummy EVT service)
  1363. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 23
  1364. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
  1365. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1366. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
  1367. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1368. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
  1369. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 23
  1370. Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
  1371. Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
  1372. Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
  1373. Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
  1374. Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (dummy EVT service)
  1375. Jul 02 21:32:35 corosync [SYNC ] Synchronization actions starting for (corosync cluster closed process group service v1.01)
  1376. Jul 02 21:32:35 corosync [CPG ] comparing: sender r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) ; members(old:2 left:1)
  1377. Jul 02 21:32:35 corosync [CPG ] chosen downlist: sender r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) ; members(old:2 left:1)
  1378. Jul 02 21:32:35 corosync [CPG ] got joinlist message from node 0x1
  1379. Jul 02 21:32:35 corosync [SYNC ] confchg entries 1
  1380. Jul 02 21:32:35 corosync [SYNC ] Barrier Start Received From 1
  1381. Jul 02 21:32:35 corosync [SYNC ] Barrier completion status for nodeid 1 = 1.
  1382. Jul 02 21:32:35 corosync [SYNC ] Synchronization barrier completed
  1383. Jul 02 21:32:35 corosync [CPG ] joinlist_messages[0] group:crmd\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16343
  1384. Jul 02 21:32:35 corosync [CPG ] joinlist_messages[1] group:attrd\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16341
  1385. Jul 02 21:32:35 corosync [CPG ] joinlist_messages[2] group:stonith-ng\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16339
  1386. Jul 02 21:32:35 corosync [CPG ] joinlist_messages[3] group:cib\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16338
  1387. Jul 02 21:32:35 corosync [CPG ] joinlist_messages[4] group:pacemakerd\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:16332
  1388. Jul 02 21:32:35 corosync [CPG ] joinlist_messages[5] group:gfs:controld\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:14955
  1389. Jul 02 21:32:35 corosync [CPG ] joinlist_messages[6] group:dlm:controld\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:14904
  1390. Jul 02 21:32:35 corosync [CPG ] joinlist_messages[7] group:fenced:default\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:14883
  1391. Jul 02 21:32:35 corosync [CPG ] joinlist_messages[8] group:fenced:daemon\x00, ip:r(0) ip(172.16.100.1) r(1) ip(172.16.1.211) , pid:14883
  1392. Jul 02 21:32:35 corosync [SYNC ] Committing synchronization for (corosync cluster closed process group service v1.01)
  1393. Jul 02 21:32:35 corosync [MAIN ] Completed service synchronization, ready to provide service.
  1394. Jul 02 21:32:35 corosync [TOTEM ] waiting_trans_ack changed to 0
  1395. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 23
  1396. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 5
  1397. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: pcmk_cpg_membership: Member[2.0] stonith-ng.1
  1398. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_ELECTION: [ state=S_NOT_DC cause=C_FSA_INTERNAL origin=reap_dead_nodes ]
  1399. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_command: Processing st_query 0 from cluster1.verolengo.privatelan ( 0)
  1400. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: create_remote_stonith_op: 431c73e1-feab-4f4a-b34d-5da097144e67 already exists
  1401. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_query: Query <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="431c73e1-feab-4f4a-b34d-5da097144e67" st_op="st_query" st_callid="2" st_callopt="0" st_remote_op="431c73e1-feab-4f4a-b34d-5da097144e67" st_target="cluster2.verolengo.privatelan" st_device_action="reboot" st_origin="cluster1.verolengo.privatelan" st_clientid="09c220bd-a4e4-4321-bb72-9c60c6d14bc3" st_clientname="stonith_admin.cman.8331" st
  1402. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: get_capable_devices: Searching through 3 devices to see what is capable of action (reboot) for target cluster2.verolengo.privatelan
  1403. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1404. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 5
  1405. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 0
  1406. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000005 to fd 23
  1407. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=reap_dead_nodes ]
  1408. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: notice: can_fence_host_with_device: ilocluster1 can not fence cluster2.verolengo.privatelan: static-list
  1409. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: update_dc: Unset DC. Was cluster2.verolengo.privatelan
  1410. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crm_uptime: Current CPU usage is: 0s, 253961us
  1411. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 39
  1412. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
  1413. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: notice: can_fence_host_with_device: ilocluster2 can fence cluster2.verolengo.privatelan: static-list
  1414. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: notice: can_fence_host_with_device: pdu1 can fence cluster2.verolengo.privatelan: static-list
  1415. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1416. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_vote: Started election 3
  1417. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: search_devices_record_result: Finished Search. 2 devices can perform action (reboot) on node cluster2.verolengo.privatelan
  1418. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
  1419. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crm_timer_start: Started Election Timeout (I_ELECTION_DC:120000ms), src=49
  1420. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_query_capable_device_cb: Found 2 matching devices for 'cluster2.verolengo.privatelan'
  1421. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_check: Still waiting on 1 non-votes (1 total)
  1422. Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
  1423. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: pcmk_cpg_membership: Left[2.0] crmd.2
  1424. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
  1425. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node cluster2.verolengo.privatelan[2] - corosync-cpg is now offline
  1426. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
  1427. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: peer_update_callback: Client cluster2.verolengo.privatelan/peer now has status [offline] (DC=<null>)
  1428. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_command: Processed st_query from cluster1.verolengo.privatelan: OK (0)
  1429. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 39
  1430. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: pcmk_cpg_membership: Member[2.0] crmd.1
  1431. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 23
  1432. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
  1433. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1434. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
  1435. Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
  1436. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_command: Processing st_notify reply 0 from cluster1.verolengo.privatelan ( 0)
  1437. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_count_vote: Created voted hash
  1438. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
  1439. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_count_vote: Election 3 (current: 3, owner: cluster1.verolengo.privatelan): Processed vote from cluster1.verolengo.privatelan (Recorded)
  1440. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_election_check: Destroying voted hash
  1441. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
  1442. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: process_remote_stonith_exec: Marking call to reboot for cluster2.verolengo.privatelan on behalf of stonith_admin.cman.8331@431c73e1-feab-4f4a-b34d-5da097144e67.cluster1: Timer expired (-62)
  1443. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_check ]
  1444. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
  1445. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 23
  1446. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: error: remote_op_done: Operation reboot of cluster2.verolengo.privatelan by cluster1.verolengo.privatelan for stonith_admin.cman.8331@cluster1.verolengo.privatelan.431c73e1: Timer expired
  1447. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
  1448. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_te_control: Registering TE UUID: e00f9cce-2413-4314-aa71-d67a9a71ebc8
  1449. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 23
  1450. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_command: Processed st_notify reply from cluster1.verolengo.privatelan: OK (0)
  1451. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 90
  1452. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1453. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 90
  1454. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: stonith_command: Processing st_query reply 0 from cluster1.verolengo.privatelan ( 0)
  1455. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1456. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: process_remote_stonith_query: Query result 1 of 2 from cluster1.verolengo.privatelan (2 devices)
  1457. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 440
  1458. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: stonith_command: Processed st_query reply from cluster1.verolengo.privatelan: OK (0)
  1459. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000090 to fd 23
  1460. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for crmd (a8651380-7c7e-4b12-b48a-f07f70ea8b06): on
  1461. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: set_graph_functions: Setting custom graph functions
  1462. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_te_control: Transitioner is now active
  1463. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
  1464. Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: info: crm_client_new: Connecting 0x1167ad0 for uid=0 gid=0 pid=16343 id=93e77802-a7e6-4839-aa81-477d6cfd0bce
  1465. Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: handle_new_connection: IPC credentials authenticated (16342-16343-6)
  1466. Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: qb_ipcs_shm_connect: connecting to client [16343]
  1467. Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
  1468. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_ipcs_dispatch_connection_request: HUP conn (16339-8331-12)
  1469. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(16339-8331-12) state:2
  1470. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: info: crm_client_destroy: Destroying 0 events
  1471. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-stonith-ng-response-16339-8331-12-header
  1472. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-stonith-ng-event-16339-8331-12-header
  1473. Jul 02 21:32:35 [16339] cluster1.verolengo.privatelan stonith-ng: debug: qb_rb_close: Free'ing ringbuffer: /dev/shm/qb-stonith-ng-request-16339-8331-12-header
  1474. Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
  1475. Jul 02 21:32:35 [16342] cluster1.verolengo.privatelan pengine: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
  1476. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
  1477. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
  1478. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: qb_rb_open_2: shm size:5242893; real_size:5246976; rb->word_size:1311744
  1479. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crm_timer_start: Started Integration Timer (I_INTEGRATED:180000ms), src=51
  1480. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_dc_takeover: Taking over DC status for this partition
  1481. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_readwrite: We are now in R/W mode
  1482. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/40, version=0.23.19)
  1483. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/41, version=0.23.19)
  1484. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] (/cib/configuration/crm_config/cluster_property_set/nvpair[1])
  1485. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/42, version=0.23.19)
  1486. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.10-14.el6_5.3-368c726"/>
  1487. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/43, version=0.23.19)
  1488. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
  1489. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/44, version=0.23.19)
  1490. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman"/>
  1491. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: initialize_join: join-1: Initializing join data (flag=true)
  1492. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: join_make_offer: Making join offers based on membership 12
  1493. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: join_make_offer: join-1: Sending offer to cluster1.verolengo.privatelan
  1494. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_join: join_make_offer: Node cluster1.verolengo.privatelan[1] - join-1 phase 0 -> 1
  1495. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
  1496. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: handle_request: Raising I_JOIN_OFFER: join-1
  1497. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: notice: tengine_stonith_notify: Peer cluster2.verolengo.privatelan was not terminated (reboot) by cluster1.verolengo.privatelan for cluster1.verolengo.privatelan: Timer expired (ref=431c73e1-feab-4f4a-b34d-5da097144e67) by client stonith_admin.cman.8331
  1498. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
  1499. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: update_dc: Set DC to cluster1.verolengo.privatelan (3.0.7)
  1500. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_offer_respond: do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
  1501. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/45, version=0.23.19)
  1502. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/46, version=0.23.19)
  1503. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/47, version=0.23.19)
  1504. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: config_query_callback: Call 46 : Parsing CIB options
  1505. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
  1506. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: config_query_callback: Checking for expired actions every 900000ms
  1507. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: join_query_callback: Respond to join offer join-1
  1508. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: join_query_callback: Acknowledging cluster1.verolengo.privatelan as our DC
  1509. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
  1510. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_filter_offer: Processing req from cluster1.verolengo.privatelan
  1511. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_filter_offer: join-1: Welcoming node cluster1.verolengo.privatelan (ref join_request-crmd-1404329555-36)
  1512. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_join: do_dc_join_filter_offer: Node cluster1.verolengo.privatelan[1] - join-1 phase 1 -> 2
  1513. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node cluster1.verolengo.privatelan[1] - expected state is now member
  1514. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_filter_offer: 1 nodes have been integrated into join-1
  1515. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: check_join_state: Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
  1516. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: check_join_state: join-1: Integration of 1 peers complete: do_dc_join_filter_offer
  1517. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
  1518. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  1519. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_state_transition: All 1 cluster nodes responded to the join offer.
  1520. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crm_timer_start: Started Finalization Timer (I_ELECTION:1800000ms), src=55
  1521. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_finalize: Finializing join-1 for 1 clients
  1522. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crmd_join_phase_log: join-1: cluster2.verolengo.privatelan=none
  1523. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crmd_join_phase_log: join-1: cluster1.verolengo.privatelan=integrated
  1524. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
  1525. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_dc_join_finalize: Requested version <generation_tuple admin_epoch="0" cib-last-written="Wed Jul 2 21:29:37 2014" crm_feature_set="3.0.7" dc-uuid="cluster2.verolengo.privatelan" epoch="23" have-quorum="1" num_updates="19" update-client="crm_resource" update-origin="cluster1.verolengo.privatelan" validate-with="pacemaker-1.2"/>
  1526. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: debug: sync_our_cib: Syncing CIB to all peers
  1527. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/48, version=0.23.19)
  1528. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: check_join_state: Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
  1529. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: check_join_state: join-1: Still waiting on 1 integrated nodes
  1530. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crmd_join_phase_log: join-1: cluster2.verolengo.privatelan=none
  1531. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: crmd_join_phase_log: join-1: cluster1.verolengo.privatelan=integrated
  1532. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: finalize_sync_callback: Notifying 1 clients of join-1 results
  1533. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: finalize_join_for: join-1: ACK'ing join request from cluster1.verolengo.privatelan
  1534. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: crm_update_peer_join: finalize_join_for: Node cluster1.verolengo.privatelan[1] - join-1 phase 2 -> 3
  1535. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: handle_request: Raising I_JOIN_RESULT: join-1
  1536. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: s_crmd_fsa: Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
  1537. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: do_cl_join_finalize_respond: Confirming join join-1: join_ack_nack
  1538. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource FirewallVM after monitor op complete (interval=0)
  1539. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource FirewallVMDisk after start op complete (interval=0)
  1540. Jul 02 21:32:35 [16338] cluster1.verolengo.privatelan cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/49, version=0.23.19)
  1541. Jul 02 21:32:35 corosync [CONFDB] lib_init_fn: conn=0x13c4f20
  1542. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 43
  1543. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 90
  1544. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1545. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 90
  1546. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1547. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 440
  1548. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000090 to fd 43
  1549. Jul 02 21:32:35 corosync [CMAN ] daemon: read 0 bytes from fd 43
  1550. Jul 02 21:32:35 corosync [CMAN ] daemon: Freed 0 queued messages
  1551. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 43
  1552. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 5
  1553. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1554. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 5
  1555. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 0
  1556. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000005 to fd 43
  1557. Jul 02 21:32:35 corosync [CMAN ] daemon: read 0 bytes from fd 43
  1558. Jul 02 21:32:35 corosync [CMAN ] daemon: Freed 0 queued messages
  1559. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 43
  1560. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
  1561. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1562. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
  1563. Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
  1564. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
  1565. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
  1566. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 43
  1567. Jul 02 21:32:35 corosync [CONFDB] exit_fn for conn=0x13c4f20
  1568. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 43
  1569. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
  1570. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1571. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
  1572. Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
  1573. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
  1574. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
  1575. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 43
  1576. Jul 02 21:32:35 corosync [CMAN ] daemon: read 0 bytes from fd 43
  1577. Jul 02 21:32:35 corosync [CMAN ] daemon: Freed 0 queued messages
  1578. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1579. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
  1580. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1581. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
  1582. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1583. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
  1584. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 42
  1585. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1586. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 5
  1587. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1588. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 5
  1589. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 0
  1590. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000005 to fd 42
  1591. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1592. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
  1593. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1594. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
  1595. Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
  1596. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
  1597. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
  1598. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 42
  1599. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1600. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 7
  1601. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1602. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 7
  1603. Jul 02 21:32:35 corosync [CMAN ] memb: get_all_members: retlen = 880
  1604. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 2
  1605. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 880
  1606. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000007 to fd 42
  1607. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1608. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 90
  1609. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1610. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 90
  1611. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1612. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 440
  1613. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000090 to fd 42
  1614. Jul 02 21:32:35 corosync [CMAN ] daemon: read 0 bytes from fd 42
  1615. Jul 02 21:32:35 corosync [CMAN ] daemon: Freed 0 queued messages
  1616. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: services_os_action_execute: Managed drbd_meta-data_0 process 14431 exited with rc=0
  1617. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource FirewallVMDisk after promote op Timed Out (interval=0)
  1618. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster1 after monitor op complete (interval=0)
  1619. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: create_operation_update: build_active_RAs: Updating resource ilocluster2 after start op complete (interval=0)
  1620. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: info: stonith_action_create: Initiating action metadata for agent fence_ilo2 (target=(null))
  1621. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: forking
  1622. Jul 02 21:32:35 [16343] cluster1.verolengo.privatelan crmd: debug: internal_stonith_action_execute: sending args
  1623. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1624. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 91
  1625. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1626. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 91
  1627. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1628. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 24
  1629. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000091 to fd 42
  1630. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1631. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 9
  1632. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1633. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 9
  1634. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1635. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 16
  1636. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000009 to fd 42
  1637. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42
  1638. Jul 02 21:32:35 corosync [CMAN ] daemon: client command is 92
  1639. Jul 02 21:32:35 corosync [CMAN ] daemon: About to process command
  1640. Jul 02 21:32:35 corosync [CMAN ] memb: command to process is 92
  1641. Jul 02 21:32:35 corosync [CMAN ] memb: get_extrainfo: allocated new buffer
  1642. Jul 02 21:32:35 corosync [CMAN ] memb: command return code is 0
  1643. Jul 02 21:32:35 corosync [CMAN ] daemon: Returning command data. length = 576
  1644. Jul 02 21:32:35 corosync [CMAN ] daemon: sending reply 40000092 to fd 42
  1645. Jul 02 21:32:35 corosync [CMAN ] daemon: read 20 bytes from fd 42