Americanode

[root@america ~]# service vdsmd status -l
Redirecting to /bin/systemctl status -l vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-01-31 14:48:03 CET; 8h ago
Process: 3916 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 4285 (vdsmd)
Tasks: 46
CGroup: /system.slice/vdsmd.service
├─4285 /usr/bin/python2 /usr/share/vdsm/vdsmd
└─4837 /usr/libexec/ioprocess --read-pipe-fd 70 --write-pipe-fd 69 --max-threads 10 --max-queued-requests 10

Jan 31 22:31:37 america.planet.bn vdsm[4285]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 22:31:37 america.planet.bn vdsm[4285]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 22:31:37 america.planet.bn vdsm[4285]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 22:31:41 america.planet.bn vdsm[4285]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.ovirt-guest-agent.0 already removed
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN Attempting to remove a non existing network: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 22:42:18 america.planet.bn vdsm[4285]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
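Note: the WARNs above all reference the same VM id (58be1383-...), i.e. vdsm repeatedly tearing the VM down and re-registering it on ovirtmgmt, which is consistent with a VM being restarted in a loop. A quick read-only check of what libvirt currently has running or defined on this host (generic command, not part of the original capture):

    virsh -r list --all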
##################################################################################################################
[root@america ~]# service libvirtd status -l
Redirecting to /bin/systemctl status -l libvirtd.service
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/libvirtd.service.d
└─unlimited-core.conf
Active: active (running) since Fri 2020-01-31 14:47:56 CET; 8h ago
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 3691 (libvirtd)
Tasks: 18 (limit: 32768)
CGroup: /system.slice/libvirtd.service
└─3691 /usr/sbin/libvirtd --listen

Jan 31 22:21:06 america.planet.bn libvirtd[3691]: 2020-01-31T21:21:06.965197Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 22:21:06 america.planet.bn libvirtd[3691]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
Jan 31 22:31:45 america.planet.bn libvirtd[3691]: 2020-01-31 21:31:45.769+0000: 3691: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Jan 31 22:31:45 america.planet.bn libvirtd[3691]: 2020-01-31 21:31:45.770+0000: 3691: error : qemuProcessReportLogError:1923 : internal error: qemu unexpectedly closed the monitor: 2020-01-31T21:31:45.696984Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Jan 31 22:31:45 america.planet.bn libvirtd[3691]: 2020-01-31T21:31:45.757960Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 22:31:45 america.planet.bn libvirtd[3691]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
Jan 31 22:42:22 america.planet.bn libvirtd[3691]: 2020-01-31 21:42:22.823+0000: 3691: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Jan 31 22:42:22 america.planet.bn libvirtd[3691]: 2020-01-31 21:42:22.824+0000: 3691: error : qemuProcessReportLogError:1923 : internal error: qemu unexpectedly closed the monitor: 2020-01-31T21:42:22.753658Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Jan 31 22:42:22 america.planet.bn libvirtd[3691]: 2020-01-31T21:42:22.815861Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 22:42:22 america.planet.bn libvirtd[3691]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
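Note: the 'Failed to get "write" lock' lines come from qemu's image-locking check: the disk image is apparently still open for writing by another process, most likely a qemu-kvm instance on another host, since the image sits under /var/run/vdsm/storage and is presumably backed by the shared Gluster storage. A couple of generic checks one could run on each host (standard tools, not part of the original capture):

    ps aux | grep '[q]emu-kvm'
    fuser -v /var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b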
##################################################################################################################
[root@america ~]# service ovirt-ha-agent status -l
Redirecting to /bin/systemctl status -l ovirt-ha-agent.service
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-01-31 14:48:16 CET; 8h ago
Main PID: 4876 (ovirt-ha-agent)
Tasks: 2
CGroup: /system.slice/ovirt-ha-agent.service
└─4876 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Jan 31 14:48:16 america.planet.bn systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
##################################################################################################################
[root@america ~]# service ovirt-ha-broker status -l
Redirecting to /bin/systemctl status -l ovirt-ha-broker.service
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications Broker
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-01-31 14:47:37 CET; 8h ago
Main PID: 2309 (ovirt-ha-broker)
Tasks: 11
CGroup: /system.slice/ovirt-ha-broker.service
└─2309 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker

Jan 31 14:47:37 america.planet.bn systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker.
Jan 31 14:48:22 america.planet.bn ovirt-ha-broker[2309]: ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
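Note: the single getVdsStats error was logged right after boot (14:48), when the broker queried vdsm before bridge statistics were available, and it does not repeat in the excerpt above. If it did keep recurring, one way to see what vdsm is actually reporting would be the query below (assuming the vdsm-client tool shipped with vdsm 4.x on this host):

    vdsm-client Host getStats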
##################################################################################################################
[root@america ~]# service supervdsmd status -l
Redirecting to /bin/systemctl status -l supervdsmd.service
● supervdsmd.service - Auxiliary vdsm service for running helper functions as root
Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static; vendor preset: enabled)
Active: active (running) since Fri 2020-01-31 14:47:56 CET; 8h ago
Main PID: 3748 (supervdsmd)
Tasks: 12
CGroup: /system.slice/supervdsmd.service
└─3748 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock

Jan 31 14:47:56 america.planet.bn systemd[1]: Started Auxiliary vdsm service for running helper functions as root.
Jan 31 14:47:56 america.planet.bn supervdsmd[3748]: failed to load module nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No such file or directory
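Note: "failed to load module nvdimm: libbd_nvdimm.so.2" just means the libblockdev nvdimm plugin is not installed; it is generally harmless unless NVDIMM hardware is in use. If desired, it can presumably be silenced by installing the plugin (package name assumed for CentOS/RHEL 7):

    yum install libblockdev-nvdimm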
##################################################################################################################
[root@america ~]# service glusterd status -l
Redirecting to /bin/systemctl status -l glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/glusterd.service.d
└─99-cpu.conf
Active: active (running) since Fri 2020-01-31 14:48:01 CET; 8h ago
Docs: man:glusterd(8)
Process: 3620 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 3667 (glusterd)
CGroup: /glusterfs.slice/glusterd.service
├─3667 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
├─4096 /usr/sbin/glusterfsd -s kansas.planet.bn --volfile-id data.kansas.planet.bn.gluster_bricks-data-data -p /var/run/gluster/vols/data/kansas.planet.bn-gluster_bricks-data-data.pid -S /var/run/gluster/033f7e831d979362.socket --brick-name /gluster_bricks/data/data -l /var/log/glusterfs/bricks/gluster_bricks-data-data.log --xlator-option *-posix.glusterd-uuid=c973a79a-7660-4970-aaaf-7b918abb4cb5 --process-name brick --brick-port 49152 --xlator-option data-server.listen-port=49152
├─4118 /usr/sbin/glusterfsd -s kansas.planet.bn --volfile-id engine.kansas.planet.bn.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/kansas.planet.bn-gluster_bricks-engine-engine.pid -S /var/run/gluster/e0af49b6cf29e032.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log --xlator-option *-posix.glusterd-uuid=c973a79a-7660-4970-aaaf-7b918abb4cb5 --process-name brick --brick-port 49153 --xlator-option engine-server.listen-port=49153
├─4303 /usr/sbin/glusterfsd -s kansas.planet.bn --volfile-id vmstore.kansas.planet.bn.gluster_bricks-vmstore-vmstore -p /var/run/gluster/vols/vmstore/kansas.planet.bn-gluster_bricks-vmstore-vmstore.pid -S /var/run/gluster/c7a798189ab9dfcb.socket --brick-name /gluster_bricks/vmstore/vmstore -l /var/log/glusterfs/bricks/gluster_bricks-vmstore-vmstore.log --xlator-option *-posix.glusterd-uuid=c973a79a-7660-4970-aaaf-7b918abb4cb5 --process-name brick --brick-port 49154 --xlator-option vmstore-server.listen-port=49154
└─4653 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8c27e01e5551fb6.socket --xlator-option *replicate*.node-uuid=c973a79a-7660-4970-aaaf-7b918abb4cb5 --process-name glustershd --client-pid=-6

Jan 31 14:47:55 america.planet.bn systemd[1]: Starting GlusterFS, a clustered file-system server...
Jan 31 14:48:01 america.planet.bn systemd[1]: Started GlusterFS, a clustered file-system server.
Jan 31 14:48:02 america.planet.bn glusterd[3667]: [2020-01-31 13:48:02.074045] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume data. Starting local bricks.
Jan 31 14:48:03 america.planet.bn glusterd[3667]: [2020-01-31 13:48:03.036471] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume engine. Starting local bricks.
Jan 31 14:48:03 america.planet.bn glusterd[3667]: [2020-01-31 13:48:03.894293] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmstore. Starting local bricks.
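Note: glusterd reports server quorum regained and local bricks started for all three volumes (data, engine, vmstore). Since the hosted-engine disk lives on the engine volume, it is worth confirming that volume is fully up and has nothing pending heal before retrying the engine VM, e.g. with the standard gluster CLI:

    gluster volume status engine
    gluster volume heal engine info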
##################################################################################################################
[root@america ~]# hosted-engine --vm-status
--== Host america.planet.bn (id: 1) status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : america.planet.bn
Host ID : 1
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score : 0
stopped : False
Local maintenance : False
crc32 : 7a6c1805
local_conf_timestamp : 29626
Host timestamp : 29626
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=29626 (Fri Jan 31 23:00:56 2020)
host-id=1
score=0
vm_conf_refresh_time=29626 (Fri Jan 31 23:00:56 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan 1 09:15:24 1970


--== Host europa.planet.bn (id: 2) status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : europa.planet.bn
Host ID : 2
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score : 0
stopped : False
Local maintenance : False
crc32 : 09036bef
local_conf_timestamp : 46375
Host timestamp : 46375
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=46375 (Fri Jan 31 23:00:52 2020)
host-id=2
score=0
vm_conf_refresh_time=46375 (Fri Jan 31 23:00:52 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan 1 13:54:32 1970


--== Host asia.planet.bn (id: 3) status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : asia.planet.bn
Host ID : 3
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Paused"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 53f62c0d
local_conf_timestamp : 44835
Host timestamp : 44835
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=44835 (Fri Jan 31 23:00:47 2020)
host-id=3
score=3400
vm_conf_refresh_time=44835 (Fri Jan 31 23:00:47 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
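Note: hosts 1 and 2 sit in EngineUnexpectedlyDown with score 0 (apparently penalised after failed restart attempts), while host 3 (asia.planet.bn) reports the engine VM as "up" but "Paused" in state EngineStarting. A paused engine VM on asia would also fit the write-lock failures in the libvirtd log above, since its paused qemu process would still hold the engine disk open. A read-only check on asia.planet.bn (generic libvirt commands, not from the original capture; the hosted-engine VM is normally named HostedEngine):

    virsh -r list --all
    virsh -r domstate HostedEngine --reason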