[root@asia ~]# service vdsmd status -l
Redirecting to /bin/systemctl status -l vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-01-31 10:34:22 CET; 12h ago
  Process: 5140 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 5370 (vdsmd)
    Tasks: 64
   CGroup: /system.slice/vdsmd.service
           ├─ 5370 /usr/bin/python2 /usr/share/vdsm/vdsmd
           ├─ 5731 /usr/libexec/ioprocess --read-pipe-fd 70 --write-pipe-fd 69 --max-threads 10 --max-queued-requests 10
           ├─22155 /usr/libexec/ioprocess --read-pipe-fd 47 --write-pipe-fd 46 --max-threads 10 --max-queued-requests 10
           ├─22162 /usr/libexec/ioprocess --read-pipe-fd 52 --write-pipe-fd 51 --max-threads 10 --max-queued-requests 10
           └─22169 /usr/libexec/ioprocess --read-pipe-fd 58 --write-pipe-fd 57 --max-threads 10 --max-queued-requests 10

Jan 31 23:13:44 asia.planet.bn vdsm[5370]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:13:44 asia.planet.bn vdsm[5370]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 23:13:44 asia.planet.bn vdsm[5370]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 23:13:56 asia.planet.bn vdsm[5370]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:24:20 asia.planet.bn vdsm[5370]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.ovirt-guest-agent.0 already removed
Jan 31 23:24:20 asia.planet.bn vdsm[5370]: WARN Attempting to remove a non existing network: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:24:20 asia.planet.bn vdsm[5370]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:24:20 asia.planet.bn vdsm[5370]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 23:24:20 asia.planet.bn vdsm[5370]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 23:24:32 asia.planet.bn vdsm[5370]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
##################################################################################################################
[root@asia ~]# service libvirtd status -l
Redirecting to /bin/systemctl status -l libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d
           └─unlimited-core.conf
   Active: active (running) since Fri 2020-01-31 10:34:16 CET; 12h ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 4558 (libvirtd)
    Tasks: 17 (limit: 32768)
   CGroup: /system.slice/libvirtd.service
           └─4558 /usr/sbin/libvirtd --listen

Jan 31 10:34:12 asia.planet.bn systemd[1]: Starting Virtualization daemon...
Jan 31 10:34:16 asia.planet.bn systemd[1]: Started Virtualization daemon.
Jan 31 11:16:58 asia.planet.bn libvirtd[4558]: 2020-01-31 10:16:58.350+0000: 4558: info : libvirt version: 4.5.0, package: 23.el7_7.1 (CentOS BuildSystem <http://bugs.centos.org>, 2019-09-13-18:01:52, x86-02.bsys.centos.org)
Jan 31 11:16:58 asia.planet.bn libvirtd[4558]: 2020-01-31 10:16:58.350+0000: 4558: info : hostname: asia.planet.bn
Jan 31 11:16:58 asia.planet.bn libvirtd[4558]: 2020-01-31 10:16:58.350+0000: 4558: error : virNetSocketReadWire:1791 : Cannot recv data: Input/output error
##################################################################################################################
[root@asia ~]# service ovirt-ha-agent status -l
Redirecting to /bin/systemctl status -l ovirt-ha-agent.service
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-31 10:34:45 CET; 12h ago
 Main PID: 5782 (ovirt-ha-agent)
    Tasks: 3
   CGroup: /system.slice/ovirt-ha-agent.service
           ├─ 5782 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
           └─22780 /bin/sh /usr/sbin/service sanlock status

Jan 31 21:48:52 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 31 21:59:28 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 31 22:10:03 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 31 22:20:38 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 31 22:31:13 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 31 22:41:49 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 31 22:52:25 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 31 23:03:09 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 31 23:13:45 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 31 23:24:21 asia.planet.bn ovirt-ha-agent[5782]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
##################################################################################################################
[root@asia ~]# service ovirt-ha-broker status -l
Redirecting to /bin/systemctl status -l ovirt-ha-broker.service
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-31 10:33:55 CET; 12h ago
 Main PID: 3125 (ovirt-ha-broker)
    Tasks: 11
   CGroup: /system.slice/ovirt-ha-broker.service
           └─3125 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker

Jan 31 10:33:55 asia.planet.bn systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker.
##################################################################################################################
[root@asia ~]# service supervdsmd status -l
Redirecting to /bin/systemctl status -l supervdsmd.service
● supervdsmd.service - Auxiliary vdsm service for running helper functions as root
   Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static; vendor preset: enabled)
   Active: active (running) since Fri 2020-01-31 10:34:16 CET; 12h ago
 Main PID: 4743 (supervdsmd)
    Tasks: 12
   CGroup: /system.slice/supervdsmd.service
           └─4743 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock

Jan 31 10:34:16 asia.planet.bn systemd[1]: Started Auxiliary vdsm service for running helper functions as root.
Jan 31 10:34:16 asia.planet.bn supervdsmd[4743]: failed to load module nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No such file or directory
##################################################################################################################
[root@asia ~]# service glusterd status -l
Redirecting to /bin/systemctl status -l glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
           └─99-cpu.conf
   Active: active (running) since Fri 2020-01-31 10:34:16 CET; 12h ago
     Docs: man:glusterd(8)
  Process: 4434 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 4532 (glusterd)
   CGroup: /glusterfs.slice/glusterd.service
           ├─ 4532 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           ├─ 5073 /usr/sbin/glusterfsd -s singapore.planet.bn --volfile-id data.singapore.planet.bn.gluster_bricks-data-data -p /var/run/gluster/vols/data/singapore.planet.bn-gluster_bricks-data-data.pid -S /var/run/gluster/f26fb5a2484c7192.socket --brick-name /gluster_bricks/data/data -l /var/log/glusterfs/bricks/gluster_bricks-data-data.log --xlator-option *-posix.glusterd-uuid=fcf8fa77-af0b-40a9-b8c3-ad01882d13b4 --process-name brick --brick-port 49152 --xlator-option data-server.listen-port=49152
           ├─ 5159 /usr/sbin/glusterfsd -s singapore.planet.bn --volfile-id engine.singapore.planet.bn.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/singapore.planet.bn-gluster_bricks-engine-engine.pid -S /var/run/gluster/c762215157d7b556.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log --xlator-option *-posix.glusterd-uuid=fcf8fa77-af0b-40a9-b8c3-ad01882d13b4 --process-name brick --brick-port 49153 --xlator-option engine-server.listen-port=49153
           ├─ 5240 /usr/sbin/glusterfsd -s singapore.planet.bn --volfile-id vmstore.singapore.planet.bn.gluster_bricks-vmstore-vmstore -p /var/run/gluster/vols/vmstore/singapore.planet.bn-gluster_bricks-vmstore-vmstore.pid -S /var/run/gluster/dc71af0e152f8490.socket --brick-name /gluster_bricks/vmstore/vmstore -l /var/log/glusterfs/bricks/gluster_bricks-vmstore-vmstore.log --xlator-option *-posix.glusterd-uuid=fcf8fa77-af0b-40a9-b8c3-ad01882d13b4 --process-name brick --brick-port 49154 --xlator-option vmstore-server.listen-port=49154
           └─20932 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b0885a52468c68e7.socket --xlator-option *replicate*.node-uuid=fcf8fa77-af0b-40a9-b8c3-ad01882d13b4 --process-name glustershd --client-pid=-6

Jan 31 10:34:11 asia.planet.bn systemd[1]: Starting GlusterFS, a clustered file-system server...
Jan 31 10:34:16 asia.planet.bn systemd[1]: Started GlusterFS, a clustered file-system server.
Jan 31 10:34:20 asia.planet.bn glusterd[4532]: [2020-01-31 09:34:20.652683] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume data. Starting local bricks.
Jan 31 10:34:20 asia.planet.bn glusterd[4532]: [2020-01-31 09:34:20.947406] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume engine. Starting local bricks.
Jan 31 10:34:21 asia.planet.bn glusterd[4532]: [2020-01-31 09:34:21.250014] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmstore. Starting local bricks.
##################################################################################################################
[root@asia ~]# hosted-engine --vm-status


--== Host america.planet.bn (id: 1) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : america.planet.bn
Host ID                            : 1
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 6b3f8ec1
local_conf_timestamp               : 31635
Host timestamp                     : 31635
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=31635 (Fri Jan 31 23:34:26 2020)
    host-id=1
    score=0
    vm_conf_refresh_time=31635 (Fri Jan 31 23:34:26 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUnexpectedlyDown
    stopped=False
    timeout=Thu Jan 1 09:47:24 1970


--== Host europa.planet.bn (id: 2) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : europa.planet.bn
Host ID                            : 2
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : f28dd038
local_conf_timestamp               : 48392
Host timestamp                     : 48391
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=48391 (Fri Jan 31 23:34:29 2020)
    host-id=2
    score=3400
    vm_conf_refresh_time=48392 (Fri Jan 31 23:34:29 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineDown
    stopped=False


--== Host asia.planet.bn (id: 3) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : asia.planet.bn
Host ID                            : 3
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Paused"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : c34f8d17
local_conf_timestamp               : 46861
Host timestamp                     : 46861
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=46861 (Fri Jan 31 23:34:33 2020)
    host-id=3
    score=3400
    vm_conf_refresh_time=46861 (Fri Jan 31 23:34:33 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineStarting
    stopped=False
[root@asia ~]#
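The transcript above runs the same `systemctl status` check once per service by hand. A minimal sketch of that polling loop (service names taken from the transcript; this is not an official oVirt tool, and `systemctl is-active` is used in place of the full `status -l` output for a compact summary) could look like:

```shell
#!/bin/sh
# One-shot health poll over the services inspected in the session above.
# Assumption: the same six units exist on the host being checked.
services="vdsmd libvirtd ovirt-ha-agent ovirt-ha-broker supervdsmd glusterd"

for svc in $services; do
    # is-active prints active/inactive/failed; fall back when systemctl
    # is unavailable or the unit is unknown on this machine.
    state=$(systemctl is-active "$svc" 2>/dev/null) || state="unknown"
    echo "$svc: ${state:-unknown}"
done
```

Running it on a healthy hyperconverged node should print `active` for every unit; anything else points at the service to inspect with `systemctl status -l <unit>` and `journalctl -u <unit>`, as done above.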