europaNode

[root@europa ~]# service vdsmd status -l
Redirecting to /bin/systemctl status -l vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-01-31 11:07:52 CET; 12h ago
Process: 30293 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 30397 (vdsmd)
Tasks: 46
CGroup: /system.slice/vdsmd.service
├─30397 /usr/bin/python2 /usr/share/vdsm/vdsmd
└─30547 /usr/libexec/ioprocess --read-pipe-fd 64 --write-pipe-fd 63 --max-threads 10 --max-queued-requests 10

Jan 31 23:03:33 europa.planet.bn vdsm[30397]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:03:33 europa.planet.bn vdsm[30397]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 23:03:33 europa.planet.bn vdsm[30397]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 23:03:36 europa.planet.bn vdsm[30397]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.ovirt-guest-agent.0 already removed
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN Attempting to remove a non existing network: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 23:14:10 europa.planet.bn vdsm[30397]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
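
Note: the vdsm warnings above all reference the same guest (58be1383-8247-4759-86af-29347c52606d) being torn down and re-created on this host. A quick sanity check from europa, as a sketch, assuming the vdsm-client CLI that normally ships with vdsm on this oVirt version is installed:

# Does the vdsm API answer at all, and which VMs does it currently track?
vdsm-client Host getVMList
# Host-level stats; if this call hangs, it matches the Host.getStats timeouts
# reported by ovirt-ha-broker further down.
vdsm-client Host getStats
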
##################################################################################################################
[root@europa ~]# service libvirtd status -l
Redirecting to /bin/systemctl status -l libvirtd.service
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/libvirtd.service.d
└─unlimited-core.conf
Active: active (running) since Fri 2020-01-31 10:08:43 CET; 13h ago
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 4540 (libvirtd)
Tasks: 18 (limit: 32768)
CGroup: /system.slice/libvirtd.service
└─4540 /usr/sbin/libvirtd --listen

Jan 31 22:52:52 europa.planet.bn libvirtd[4540]: 2020-01-31T21:52:52.891410Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 22:52:52 europa.planet.bn libvirtd[4540]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
Jan 31 23:03:38 europa.planet.bn libvirtd[4540]: 2020-01-31 22:03:38.919+0000: 4540: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Jan 31 23:03:38 europa.planet.bn libvirtd[4540]: 2020-01-31 22:03:38.920+0000: 4540: error : qemuProcessReportLogError:1923 : internal error: qemu unexpectedly closed the monitor: 2020-01-31T22:03:38.848199Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Jan 31 23:03:38 europa.planet.bn libvirtd[4540]: 2020-01-31T22:03:38.908211Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 23:03:38 europa.planet.bn libvirtd[4540]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
Jan 31 23:14:13 europa.planet.bn libvirtd[4540]: 2020-01-31 22:14:13.645+0000: 4540: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Jan 31 23:14:13 europa.planet.bn libvirtd[4540]: 2020-01-31 22:14:13.646+0000: 4540: error : qemuProcessReportLogError:1923 : internal error: qemu unexpectedly closed the monitor: 2020-01-31T22:14:13.565981Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Jan 31 23:14:13 europa.planet.bn libvirtd[4540]: 2020-01-31T22:14:13.629819Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 23:14:13 europa.planet.bn libvirtd[4540]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
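
Note: the Failed to get "write" lock lines mean qemu's image locking found this disk already open for writing somewhere, most likely a qemu-kvm process on another host, since the image sits on shared Gluster storage. A sketch of how to look for the holder; lsof only sees local processes, so it has to be run on every host in the cluster (the image path is copied from the log lines above):

# Any local qemu still holding this disk image open?
lsof /var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b
# Read-only view of the domains libvirt knows about on this host (-r needs no SASL auth)
virsh -r list --all
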
##################################################################################################################
[root@europa ~]# service ovirt-ha-agent status -l
Redirecting to /bin/systemctl status -l ovirt-ha-agent.service
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-01-31 11:04:05 CET; 12h ago
Main PID: 27892 (ovirt-ha-agent)
Tasks: 4
CGroup: /system.slice/ovirt-ha-agent.service
└─27892 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Jan 31 11:04:05 europa.planet.bn systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
##################################################################################################################
[root@europa ~]# service ovirt-ha-broker status -l
Redirecting to /bin/systemctl status -l ovirt-ha-broker.service
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications Broker
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-01-31 10:34:26 CET; 12h ago
Main PID: 16065 (ovirt-ha-broker)
Tasks: 12
CGroup: /system.slice/ovirt-ha-broker.service
└─16065 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker

Jan 31 10:34:26 europa.planet.bn systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker.
Jan 31 10:38:48 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR Failed to start monitoring domain (sd_uuid=fb9878a0-f641-4eac-a4c6-cea21a2502c5, host_id=2): timeout during domain acquisition
Jan 31 10:38:48 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.listener.Action.start_domain_monitor ERROR Error in RPC call: Failed to start monitoring domain (sd_uuid=fb9878a0-f641-4eac-a4c6-cea21a2502c5, host_id=2): timeout during domain acquisition
Jan 31 10:49:43 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.submonitor_base.SubmonitorBase ERROR Error executing submonitor mgmt-bridge, args {'use_ssl': 'true', 'bridge_name': 'ovirtmgmt', 'address': '0'}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitor_base.py", line 115, in _worker
    self.action(self._options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/mgmt_bridge.py", line 47, in action
    stats = cli.Host.getStats()
  File "/usr/lib/python2.7/site-packages/vdsm/client.py", line 294, in _call
    raise TimeoutError(method, kwargs, timeout)
TimeoutError: Request Host.getStats with args {} timed out after 900 seconds
Jan 31 10:49:44 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.submonitor_base.SubmonitorBase ERROR Error executing submonitor mem-free, args {'use_ssl': 'true', 'address': '0'}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitor_base.py", line 115, in _worker
    self.action(self._options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/mem_free.py", line 43, in action
    stats = cli.Host.getStats()
  File "/usr/lib/python2.7/site-packages/vdsm/client.py", line 294, in _call
    raise TimeoutError(method, kwargs, timeout)
TimeoutError: Request Host.getStats with args {} timed out after 900 seconds
Jan 31 11:07:57 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
Jan 31 11:08:07 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
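
Note: both tracebacks are Host.getStats calls timing out after 900 seconds, i.e. the broker waiting on a vdsm that is stuck, which fits the storage-domain acquisition timeout above. If these errors keep appearing after vdsm has recovered, a sketch of the usual remedy is simply to restart the HA services so they reconnect:

systemctl restart ovirt-ha-broker ovirt-ha-agent
# then watch them settle
journalctl -fu ovirt-ha-agent
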
##################################################################################################################
[root@europa ~]# service supervdsmd status -l
Redirecting to /bin/systemctl status -l supervdsmd.service
● supervdsmd.service - Auxiliary vdsm service for running helper functions as root
Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static; vendor preset: enabled)
Active: active (running) since Fri 2020-01-31 10:08:43 CET; 13h ago
Main PID: 4719 (supervdsmd)
Tasks: 12
CGroup: /system.slice/supervdsmd.service
└─4719 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock

Jan 31 10:08:43 europa.planet.bn systemd[1]: Started Auxiliary vdsm service for running helper functions as root.
Jan 31 10:08:43 europa.planet.bn supervdsmd[4719]: failed to load module nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No such file or directory
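
Note: the nvdimm message only says the libblockdev NVDIMM plugin is not installed and is harmless unless NVDIMM devices are actually in use. If the warning is unwanted, a sketch of the fix (package name assumed for CentOS/RHEL 7):

yum install libblockdev-nvdimm
systemctl restart supervdsmd
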
##################################################################################################################
[root@europa ~]# service glusterd status -l
Redirecting to /bin/systemctl status -l glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/glusterd.service.d
└─99-cpu.conf
Active: active (running) since Fri 2020-01-31 10:08:46 CET; 13h ago
Docs: man:glusterd(8)
Process: 4436 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 4525 (glusterd)
CGroup: /glusterfs.slice/glusterd.service
├─ 4525 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
├─15868 /usr/sbin/glusterfsd -s germany.planet.bn --volfile-id data.germany.planet.bn.gluster_bricks-data-data -p /var/run/gluster/vols/data/germany.planet.bn-gluster_bricks-data-data.pid -S /var/run/gluster/9413c67344baa043.socket --brick-name /gluster_bricks/data/data -l /var/log/glusterfs/bricks/gluster_bricks-data-data.log --xlator-option *-posix.glusterd-uuid=4de4e0b0-c389-4f95-b41d-35b65e1bf274 --process-name brick --brick-port 49152 --xlator-option data-server.listen-port=49152
├─15880 /usr/sbin/glusterfsd -s germany.planet.bn --volfile-id engine.germany.planet.bn.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/germany.planet.bn-gluster_bricks-engine-engine.pid -S /var/run/gluster/db163ce21a8faee6.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log --xlator-option *-posix.glusterd-uuid=4de4e0b0-c389-4f95-b41d-35b65e1bf274 --process-name brick --brick-port 49153 --xlator-option engine-server.listen-port=49153
├─15892 /usr/sbin/glusterfsd -s germany.planet.bn --volfile-id vmstore.germany.planet.bn.gluster_bricks-vmstore-vmstore -p /var/run/gluster/vols/vmstore/germany.planet.bn-gluster_bricks-vmstore-vmstore.pid -S /var/run/gluster/9df5d3dc57f1e3f8.socket --brick-name /gluster_bricks/vmstore/vmstore -l /var/log/glusterfs/bricks/gluster_bricks-vmstore-vmstore.log --xlator-option *-posix.glusterd-uuid=4de4e0b0-c389-4f95-b41d-35b65e1bf274 --process-name brick --brick-port 49154 --xlator-option vmstore-server.listen-port=49154
└─17926 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/eed6216d151e3b54.socket --xlator-option *replicate*.node-uuid=4de4e0b0-c389-4f95-b41d-35b65e1bf274 --process-name glustershd --client-pid=-6

Jan 31 10:08:46 europa.planet.bn systemd[1]: Started GlusterFS, a clustered file-system server.
Jan 31 10:08:46 europa.planet.bn glusterd[4525]: [2020-01-31 09:08:46.889947] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume data. Starting local bricks.
Jan 31 10:08:47 europa.planet.bn glusterd[4525]: [2020-01-31 09:08:47.812224] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume engine. Starting local bricks.
Jan 31 10:08:48 europa.planet.bn glusterd[4525]: [2020-01-31 09:08:48.182653] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmstore. Starting local bricks.
Jan 31 10:32:02 europa.planet.bn glusterd[4525]: [2020-01-31 09:32:02.516960] C [MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume data. Stopping local bricks.
Jan 31 10:32:03 europa.planet.bn glusterd[4525]: [2020-01-31 09:32:03.517693] C [MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume engine. Stopping local bricks.
Jan 31 10:32:04 europa.planet.bn glusterd[4525]: [2020-01-31 09:32:04.518347] C [MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume vmstore. Stopping local bricks.
Jan 31 10:34:16 europa.planet.bn glusterd[4525]: [2020-01-31 09:34:16.168824] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume data. Starting local bricks.
Jan 31 10:34:16 europa.planet.bn glusterd[4525]: [2020-01-31 09:34:16.515186] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume engine. Starting local bricks.
Jan 31 10:34:16 europa.planet.bn glusterd[4525]: [2020-01-31 09:34:16.801795] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmstore. Starting local bricks.
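
Note: glusterd lost server quorum at 10:32 and regained it at 10:34 for all three volumes, which lines up with the storage-domain timeouts above. A sketch of the usual follow-up checks on the engine volume (volume name taken from the brick list above):

gluster peer status
gluster volume status engine
# pending self-heals after the quorum flap
gluster volume heal engine info
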
##################################################################################################################
[root@europa ~]# hosted-engine --vm-status

--== Host america.planet.bn (id: 1) status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : america.planet.bn
Host ID : 1
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score : 0
stopped : False
Local maintenance : False
crc32 : 59e77475
local_conf_timestamp : 30629
Host timestamp : 30629
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=30629 (Fri Jan 31 23:17:39 2020)
host-id=1
score=0
vm_conf_refresh_time=30629 (Fri Jan 31 23:17:39 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan 1 09:36:39 1970


--== Host europa.planet.bn (id: 2) status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : europa.planet.bn
Host ID : 2
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score : 0
stopped : False
Local maintenance : False
crc32 : 2d54ca79
local_conf_timestamp : 47384
Host timestamp : 47384
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=47384 (Fri Jan 31 23:17:41 2020)
host-id=2
score=0
vm_conf_refresh_time=47384 (Fri Jan 31 23:17:41 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan 1 14:15:53 1970


--== Host asia.planet.bn (id: 3) status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : asia.planet.bn
Host ID : 3
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Paused"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 8c5502c2
local_conf_timestamp : 45845
Host timestamp : 45845
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=45845 (Fri Jan 31 23:17:37 2020)
host-id=3
score=3400
vm_conf_refresh_time=45845 (Fri Jan 31 23:17:37 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
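
Note: summarising the vm-status output, america (id 1) and europa (id 2) are in EngineUnexpectedlyDown with score 0, so they are temporarily penalized and will not try to start the engine; asia (id 3) has score 3400 and reports the engine VM as up but Paused, in state EngineStarting. A sketch of the next step on asia.planet.bn, assuming the hosted-engine VM has its usual libvirt name HostedEngine:

# Why is the VM paused? (often an I/O error from when the storage domain went away)
virsh -r domstate HostedEngine --reason
# Re-check HA status after the pause reason is addressed; only if the VM ends up
# fully down, start it by hand:
hosted-engine --vm-status
hosted-engine --vm-start
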