node2 heartbeat start

node2 heartbeat: [10682]: info: No log entry found in ha.cf -- use logd
node2 heartbeat: [10682]: info: Enabling logging daemon
node2 heartbeat: [10682]: info: logfile and debug file are those specified in logd config file (default /etc/logd.cf)
node2 heartbeat: [10682]: info: **************************
node2 heartbeat: [10682]: info: Configuration validated. Starting heartbeat 3.0.5
node2 heartbeat: [10683]: info: heartbeat: version 3.0.5
node2 heartbeat: [10683]: info: Heartbeat generation: 1328556158
node2 heartbeat: [10683]: info: glib: UDP multicast heartbeat started for group 239.0.0.43 port 694 interface br0 (ttl=1 loop=0)
node2 heartbeat: [10683]: info: glib: UDP Broadcast heartbeat started on port 694 (694) interface br1
node2 heartbeat: [10683]: info: glib: UDP Broadcast heartbeat closed on port 694 interface br1 - Status: 1
node2 heartbeat: [10683]: info: Local status now set to: 'up'
node2 heartbeat: [10683]: info: Link node2:br1 up.
node2 heartbeat: [10683]: info: Link node1:br0 up.
node2 heartbeat: [10683]: info: Link node1:br1 up.
node2 heartbeat: [10683]: info: Link quorumnode:br0 up.
node2 heartbeat: [10683]: info: Status update for node quorumnode: status active
node2 heartbeat: [10683]: info: Comm_now_up(): updating status to active
node2 heartbeat: [10683]: info: Local status now set to: 'active'
node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/ccm" (113,122)
node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/cib" (113,122)
node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/lrmd -r" (0,0)
node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/stonithd" (0,0)
node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/attrd" (113,122)
node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/crmd" (113,122)
node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/dopd" (113,122)
node2 heartbeat: [10701]: info: Starting "/usr/lib/heartbeat/cib" as uid 113 gid 122 (pid 10701)
node2 heartbeat: [10703]: info: Starting "/usr/lib/heartbeat/stonithd" as uid 0 gid 0 (pid 10703)
node2 heartbeat: [10706]: info: Starting "/usr/lib/heartbeat/dopd" as uid 113 gid 122 (pid 10706)
node2 heartbeat: [10704]: info: Starting "/usr/lib/heartbeat/attrd" as uid 113 gid 122 (pid 10704)
node2 heartbeat: [10702]: info: Starting "/usr/lib/heartbeat/lrmd -r" as uid 0 gid 0 (pid 10702)
node2 heartbeat: [10700]: info: Starting "/usr/lib/heartbeat/ccm" as uid 113 gid 122 (pid 10700)
node2 heartbeat: [10705]: info: Starting "/usr/lib/heartbeat/crmd" as uid 113 gid 122 (pid 10705)
node2 /usr/lib/heartbeat/dopd: [10706]: debug: PID=10706
node2 /usr/lib/heartbeat/dopd: [10706]: debug: Signing in with heartbeat
node2 attrd: [10704]: info: Invoked: /usr/lib/heartbeat/attrd
node2 /usr/lib/heartbeat/dopd: [10706]: debug: [We are node2]
node2 attrd: [10704]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
node2 cib: [10701]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
node2 cib: [10701]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
node2 crmd: [10705]: info: Invoked: /usr/lib/heartbeat/crmd
node2 /usr/lib/heartbeat/dopd: [10706]: debug: Setting message filter mode
node2 /usr/lib/heartbeat/dopd: [10706]: debug: Setting message signal
node2 crmd: [10705]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
node2 /usr/lib/heartbeat/dopd: [10706]: debug: Waiting for messages...
node2 crmd: [10705]: info: main: CRM Hg Version: 9971ebba4494012a93c03b40a2c58ec0eb60f50c
node2 attrd: [10704]: notice: main: Starting mainloop...
node2 crmd: [10705]: info: crmd_init: Starting crmd
node2 ccm: [10700]: info: Hostname: node2
node2 stonith-ng: [10703]: info: Invoked: /usr/lib/heartbeat/stonithd
node2 lrmd: [10702]: info: enabling coredumps
node2 cib: [10701]: info: validate_with_relaxng: Creating RNG parser context
node2 lrmd: [10702]: WARN: Core dumps could be lost if multiple dumps occur.
node2 lrmd: [10702]: WARN: Consider setting non-default value in /proc/sys/kernel/core_pattern (or equivalent) for maximum supportability
node2 lrmd: [10702]: WARN: Consider setting /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum supportability
node2 lrmd: [10702]: info: Started.
node2 stonith-ng: [10703]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
node2 heartbeat: [10683]: info: the send queue length from heartbeat to client attrd is set to 1024
node2 stonith-ng: [10703]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
node2 stonith-ng: [10703]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
node2 heartbeat: [10683]: info: the send queue length from heartbeat to client ccm is set to 1024
node2 stonith-ng: [10703]: info: register_heartbeat_conn: Hostname: node2
node2 stonith-ng: [10703]: info: register_heartbeat_conn: UUID: 9100538b-7a1f-41fd-9c1a-c6b4b1c32b18
node2 stonith-ng: [10703]: info: main: Starting stonith-ng mainloop
node2 cib: [10701]: info: startCib: CIB Initialization completed successfully
node2 cib: [10701]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
node2 cib: [10701]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
node2 heartbeat: [10683]: info: the send queue length from heartbeat to client stonith-ng is set to 1024
node2 cib: [10701]: info: register_heartbeat_conn: Hostname: node2
node2 cib: [10701]: info: register_heartbeat_conn: UUID: 9100538b-7a1f-41fd-9c1a-c6b4b1c32b18
node2 cib: [10701]: info: ccm_connect: Registering with CCM...
node2 cib: [10701]: WARN: ccm_connect: CCM Activation failed
node2 cib: [10701]: WARN: ccm_connect: CCM Connection failed 1 times (30 max)
node2 heartbeat: [10683]: info: the send queue length from heartbeat to client cib is set to 1024
node2 crmd: [10705]: info: do_cib_control: Could not connect to the CIB service: connection failed
node2 crmd: [10705]: WARN: do_cib_control: Couldn't complete CIB registration 1 times... pause and retry
node2 crmd: [10705]: info: crmd_init: Starting crmd's mainloop
node2 heartbeat: [10683]: info: Status update for node node1: status active
node2 crmd: [10705]: info: crm_timer_popped: Wait Timer (I_NULL) just popped (2000ms)
node2 cib: [10701]: info: ccm_connect: Registering with CCM...
node2 cib: [10701]: WARN: ccm_connect: CCM Activation failed
node2 cib: [10701]: WARN: ccm_connect: CCM Connection failed 2 times (30 max)
node2 crmd: [10705]: info: do_cib_control: Could not connect to the CIB service: connection failed
node2 crmd: [10705]: WARN: do_cib_control: Couldn't complete CIB registration 2 times... pause and retry
node2 crmd: [10705]: info: crm_timer_popped: Wait Timer (I_NULL) just popped (2000ms)
node2 cib: [10701]: info: ccm_connect: Registering with CCM...
node2 cib: [10701]: info: cib_init: Requesting the list of configured nodes
node2 cib: [10701]: info: cib_init: Starting cib mainloop
node2 cib: [10701]: info: cib_client_status_callback: Status update: Client node2/cib now has status [join]
node2 cib: [10701]: info: crm_new_peer: Node 0 is now known as node2
node2 cib: [10701]: info: crm_update_peer_proc: node2.cib is now online
node2 cib: [10701]: WARN: cib_peer_callback: Discarding cib_sync_one message (16f) from quorumnode: not in our membership
node2 cib: [10701]: WARN: cib_peer_callback: Discarding cib_apply_diff message (a778) from node1: not in our membership
node2 cib: [10701]: info: cib_client_status_callback: Status update: Client node2/cib now has status [online]
node2 crmd: [10705]: info: do_cib_control: CIB connection established
node2 crmd: [10705]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
node2 crmd: [10705]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
node2 cib: [10701]: info: cib_client_status_callback: Status update: Client node1/cib now has status [online]
node2 cib: [10701]: info: crm_new_peer: Node 0 is now known as node1
node2 crmd: [10705]: info: register_heartbeat_conn: Hostname: node2
node2 cib: [10701]: info: crm_update_peer_proc: node1.cib is now online
node2 crmd: [10705]: info: register_heartbeat_conn: UUID: 9100538b-7a1f-41fd-9c1a-c6b4b1c32b18
node2 cib: [10701]: info: cib_client_status_callback: Status update: Client quorumnode/cib now has status [online]
node2 heartbeat: [10683]: info: the send queue length from heartbeat to client crmd is set to 1024
node2 cib: [10701]: info: crm_new_peer: Node 0 is now known as quorumnode
node2 cib: [10701]: info: crm_update_peer_proc: quorumnode.cib is now online
node2 crmd: [10705]: info: do_ha_control: Connected to the cluster
node2 crmd: [10705]: info: do_ccm_control: CCM connection established... waiting for first callback
node2 crmd: [10705]: info: do_started: Delaying start, no membership data (0000000000100000)
node2 crmd: [10705]: info: config_query_callback: Shutdown escalation occurs after: 300000ms
node2 crmd: [10705]: info: config_query_callback: Checking for expired actions every 300000ms
node2 crmd: [10705]: notice: crmd_client_status_callback: Status update: Client node2/crmd now has status [online] (DC=false)
node2 cib: [10701]: info: cib_process_diff: Diff 11.360.211 -> 11.360.212 not applied to 11.360.0: current "num_updates" is less than required
node2 cib: [10701]: info: cib_server_process_diff: Requesting re-sync from peer
node2 cib: [10701]: notice: cib_server_process_diff: Not applying diff 11.360.212 -> 11.360.213 (sync in progress)
node2 crmd: [10705]: info: crm_new_peer: Node 0 is now known as node2
node2 crmd: [10705]: info: ais_status_callback: status: node2 is now unknown
node2 crmd: [10705]: info: crm_update_peer_proc: node2.crmd is now online
node2 crmd: [10705]: notice: crmd_peer_update: Status update: Client node2/crmd now has status [online] (DC=<null>)
node2 crmd: [10705]: notice: crmd_client_status_callback: Status update: Client node2/crmd now has status [online] (DC=false)
node2 crmd: [10705]: notice: crmd_client_status_callback: Status update: Client node1/crmd now has status [online] (DC=false)
node2 crmd: [10705]: info: crm_new_peer: Node 0 is now known as node1
node2 crmd: [10705]: info: ais_status_callback: status: node1 is now unknown
node2 crmd: [10705]: info: crm_update_peer_proc: node1.crmd is now online
node2 crmd: [10705]: notice: crmd_peer_update: Status update: Client node1/crmd now has status [online] (DC=<null>)
node2 crmd: [10705]: notice: crmd_client_status_callback: Status update: Client quorumnode/crmd now has status [offline] (DC=false)
node2 crmd: [10705]: info: crm_new_peer: Node 0 is now known as quorumnode
node2 crmd: [10705]: info: ais_status_callback: status: quorumnode is now unknown
node2 crmd: [10705]: info: do_started: Delaying start, no membership data (0000000000100000)
node2 cib: [10701]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 11.360.213 from node1
node2 crmd: [10705]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
node2 crmd: [10705]: info: mem_handle_event: instance=29, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
node2 crmd: [10705]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=29)
node2 crmd: [10705]: info: ccm_event_detail: NEW MEMBERSHIP: trans=29, nodes=3, new=3, lost=0 n_idx=0, new_idx=0, old_idx=6
node2 crmd: [10705]: info: ccm_event_detail: #011CURRENT: node1 [nodeid=1, born=24]
node2 crmd: [10705]: info: ccm_event_detail: #011CURRENT: quorumnode [nodeid=0, born=27]
node2 crmd: [10705]: info: ccm_event_detail: #011CURRENT: node2 [nodeid=2, born=29]
node2 crmd: [10705]: info: ccm_event_detail: #011NEW: node1 [nodeid=1, born=24]
node2 crmd: [10705]: info: ccm_event_detail: #011NEW: quorumnode [nodeid=0, born=27]
node2 crmd: [10705]: info: ccm_event_detail: #011NEW: node2 [nodeid=2, born=29]
node2 crmd: [10705]: info: crm_get_peer: Node node1 now has id: 1
node2 crmd: [10705]: info: ais_status_callback: status: node1 is now member (was unknown)
node2 crmd: [10705]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=24 seen=29 proc=00000000000000000000000000000200
node2 crmd: [10705]: info: crm_update_peer_proc: node1.ais is now online
node2 crmd: [10705]: info: ais_status_callback: status: quorumnode is now member (was unknown)
node2 crmd: [10705]: info: crm_update_peer: Node quorumnode: id=0 state=member (new) addr=(null) votes=-1 born=27 seen=29 proc=00000000000000000000000000000000
node2 crmd: [10705]: info: crm_update_peer_proc: quorumnode.ais is now online
node2 crmd: [10705]: info: crm_update_peer_proc: quorumnode.crmd is now online
node2 crmd: [10705]: notice: crmd_peer_update: Status update: Client quorumnode/crmd now has status [online] (DC=<null>)
node2 crmd: [10705]: info: crm_get_peer: Node node2 now has id: 2
node2 crmd: [10705]: info: ais_status_callback: status: node2 is now member (was unknown)
node2 crmd: [10705]: info: crm_update_peer: Node node2: id=2 state=member (new) addr=(null) votes=-1 born=29 seen=29 proc=00000000000000000000000000000200
node2 crmd: [10705]: info: crm_update_peer_proc: node2.ais is now online
node2 crmd: [10705]: info: do_started: The local CRM is operational
node2 crmd: [10705]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
node2 cib: [10701]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
node2 cib: [10701]: info: mem_handle_event: instance=29, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
node2 cib: [10701]: info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=29)
node2 cib: [10701]: info: crm_get_peer: Node node1 now has id: 1
node2 cib: [10701]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=24 seen=29 proc=00000000000000000000000000000100
node2 cib: [10701]: info: crm_update_peer_proc: node1.ais is now online
node2 cib: [10701]: info: crm_update_peer_proc: node1.crmd is now online
node2 cib: [10701]: info: crm_update_peer: Node quorumnode: id=0 state=member (new) addr=(null) votes=-1 born=27 seen=29 proc=00000000000000000000000000000100
node2 cib: [10701]: info: crm_update_peer_proc: quorumnode.ais is now online
node2 cib: [10701]: info: crm_update_peer_proc: quorumnode.crmd is now online
node2 cib: [10701]: info: crm_get_peer: Node node2 now has id: 2
node2 cib: [10701]: info: crm_update_peer: Node node2: id=2 state=member (new) addr=(null) votes=-1 born=29 seen=29 proc=00000000000000000000000000000100
node2 cib: [10701]: info: crm_update_peer_proc: node2.ais is now online
node2 cib: [10701]: info: crm_update_peer_proc: node2.crmd is now online
node2 crmd: [10705]: info: update_dc: Set DC to node1 (3.0.5)
node2 crmd: [10705]: info: te_connect_stonith: Attempting connection to fencing daemon...
node2 crmd: [10705]: info: te_connect_stonith: Connected
node2 attrd: [10704]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
node2 crmd: [10705]: info: update_attrd: Connecting to attrd...
node2 crmd: [10705]: info: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
node2 crmd: [10705]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/transient_attributes": ok (rc=0)
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=12:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_drbd_vmstore:1_monitor_0 )
node2 lrmd: [10702]: info: rsc:p_drbd_vmstore:1 probe[2] (pid 11065)
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=13:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_drbd_mount1:1_monitor_0 )
node2 lrmd: [10702]: info: rsc:p_drbd_mount1:1 probe[3] (pid 11066)
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=14:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_drbd_mount2:1_monitor_0 )
node2 lrmd: [10702]: info: rsc:p_drbd_mount2:1 probe[4] (pid 11069)
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=15:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_fs_vmstore_monitor_0 )
node2 lrmd: [10702]: info: rsc:p_fs_vmstore probe[5] (pid 11070)
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=16:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_vm_webapps_monitor_0 )
node2 lrmd: [10702]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=17:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_libvirt-bin:0_monitor_0 )
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=18:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_sysadmin_notify:0_monitor_0 )
node2 lrmd: [10702]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=19:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=stonith-node1_monitor_0 )
node2 lrmd: [10702]: info: rsc:stonith-node1 probe[9] (pid 11077)
node2 lrmd: [10702]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=20:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=stonith-node2_monitor_0 )
node2 lrmd: [10702]: info: rsc:stonith-node2 probe[10] (pid 11079)
node2 stonith-ng: [10703]: notice: stonith_device_action: Device stonith-node1 not found
node2 stonith-ng: [10703]: info: stonith_command: Processed st_execute from lrmd: rc=-12
node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=21:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_ping:0_monitor_0 )
node2 lrmd: [10702]: info: operation monitor[9] on stonith-node1 for client 10705: pid 11077 exited with return code 7
node2 stonith-ng: [10703]: notice: stonith_device_action: Device stonith-node2 not found
node2 stonith-ng: [10703]: info: stonith_command: Processed st_execute from lrmd: rc=-12
node2 crmd: [10705]: info: process_lrm_event: LRM operation stonith-node1_monitor_0 (call=9, rc=7, cib-update=8, confirmed=true) not running
node2 lrmd: [10702]: info: operation monitor[10] on stonith-node2 for client 10705: pid 11079 exited with return code 7
node2 crmd: [10705]: info: process_lrm_event: LRM operation stonith-node2_monitor_0 (call=10, rc=7, cib-update=9, confirmed=true) not running
node2 lrmd: [10702]: info: operation monitor[5] on p_fs_vmstore for client 10705: pid 11070 exited with return code 7
node2 crmd: [10705]: info: process_lrm_event: LRM operation p_fs_vmstore_monitor_0 (call=5, rc=7, cib-update=10, confirmed=true) not running
node2 crm_attribute: [11176]: info: Invoked: crm_attribute -N node2 -n master-p_drbd_vmstore:1 -l reboot -D
node2 crm_attribute: [11180]: info: Invoked: crm_attribute -N node2 -n master-p_drbd_mount2:1 -l reboot -D
node2 crm_attribute: [11182]: info: Invoked: crm_attribute -N node2 -n master-p_drbd_mount1:1 -l reboot -D
node2 lrmd: [10702]: info: operation monitor[2] on p_drbd_vmstore:1 for client 10705: pid 11065 exited with return code 7
node2 crmd: [10705]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_0 (call=2, rc=7, cib-update=11, confirmed=true) not running
node2 lrmd: [10702]: info: operation monitor[4] on p_drbd_mount2:1 for client 10705: pid 11069 exited with return code 7
node2 crmd: [10705]: info: process_lrm_event: LRM operation p_drbd_mount2:1_monitor_0 (call=4, rc=7, cib-update=12, confirmed=true) not running
node2 lrmd: [10702]: info: operation monitor[3] on p_drbd_mount1:1 for client 10705: pid 11066 exited with return code 7
node2 crmd: [10705]: info: process_lrm_event: LRM operation p_drbd_mount1:1_monitor_0 (call=3, rc=7, cib-update=13, confirmed=true) not running
node2 lrmd: [10702]: info: rsc:p_vm_webapps probe[6] (pid 11183)
node2 lrmd: [10702]: info: rsc:p_libvirt-bin:0 probe[7] (pid 11184)
node2 lrmd: [10702]: info: rsc:p_sysadmin_notify:0 probe[8] (pid 11185)
node2 lrmd: [10702]: info: RA output: (p_libvirt-bin:0:probe:stderr) process 11184: The last reference on a connection was dropped without closing the connection. This is a bug in an application. See dbus_connection_unref() documentation for details.#012Most likely, the application was supposed to call dbus_connection_close(), since this is a private connection.
node2 lrmd: [10702]: info: operation monitor[7] on p_libvirt-bin:0 for client 10705: pid 11184 exited with return code 7
node2 lrmd: [10702]: info: rsc:p_ping:0 probe[11] (pid 11189)
node2 crmd: [10705]: info: process_lrm_event: LRM operation p_libvirt-bin:0_monitor_0 (call=7, rc=7, cib-update=14, confirmed=true) not running
node2 lrmd: [10702]: info: RA output: (p_vm_webapps:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock': Connection refused#012error: failed to connect to the hypervisor
node2 lrmd: [10702]: info: operation monitor[8] on p_sysadmin_notify:0 for client 10705: pid 11185 exited with return code 7
node2 crmd: [10705]: info: process_lrm_event: LRM operation p_sysadmin_notify:0_monitor_0 (call=8, rc=7, cib-update=15, confirmed=true) not running
node2 lrmd: [10702]: info: operation monitor[11] on p_ping:0 for client 10705: pid 11189 exited with return code 7
node2 crmd: [10705]: info: process_lrm_event: LRM operation p_ping:0_monitor_0 (call=11, rc=7, cib-update=16, confirmed=true) not running
node2 VirtualDomain[11183]: [11211]: INFO: Configuration file /mnt/storage/vmstore/config/webapps.xml not readable during probe.
node2 lrmd: [10702]: info: operation monitor[6] on p_vm_webapps for client 10705: pid 11183 exited with return code 7
node2 crmd: [10705]: info: process_lrm_event: LRM operation p_vm_webapps_monitor_0 (call=6, rc=7, cib-update=17, confirmed=true) not running
node2 attrd: [10704]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
node2 attrd: [10704]: notice: attrd_perform_update: Sent update 9: probe_complete=true
node2 cib: [10701]: info: cib_stats: Processed 134 operations (522.00us average, 0% utilization) in the last 10min
node2 cib: [10701]: info: cib_stats: Processed 30 operations (1333.00us average, 0% utilization) in the last 10min
node2 cib: [10701]: info: cib_stats: Processed 30 operations (1000.00us average, 0% utilization) in the last 10min