[root@europa ~]# service vdsmd status -l
Redirecting to /bin/systemctl status -l vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-01-31 11:07:52 CET; 12h ago
  Process: 30293 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 30397 (vdsmd)
    Tasks: 46
   CGroup: /system.slice/vdsmd.service
           ├─30397 /usr/bin/python2 /usr/share/vdsm/vdsmd
           └─30547 /usr/libexec/ioprocess --read-pipe-fd 64 --write-pipe-fd 63 --max-threads 10 --max-queued-requests 10

Jan 31 23:03:33 europa.planet.bn vdsm[30397]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:03:33 europa.planet.bn vdsm[30397]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 23:03:33 europa.planet.bn vdsm[30397]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 23:03:36 europa.planet.bn vdsm[30397]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.ovirt-guest-agent.0 already removed
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN Attempting to remove a non existing network: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 23:14:08 europa.planet.bn vdsm[30397]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 23:14:10 europa.planet.bn vdsm[30397]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
##################################################################################################################
[root@europa ~]# service libvirtd status -l
Redirecting to /bin/systemctl status -l libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d
           └─unlimited-core.conf
   Active: active (running) since Fri 2020-01-31 10:08:43 CET; 13h ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 4540 (libvirtd)
    Tasks: 18 (limit: 32768)
   CGroup: /system.slice/libvirtd.service
           └─4540 /usr/sbin/libvirtd --listen

Jan 31 22:52:52 europa.planet.bn libvirtd[4540]: 2020-01-31T21:52:52.891410Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 22:52:52 europa.planet.bn libvirtd[4540]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
Jan 31 23:03:38 europa.planet.bn libvirtd[4540]: 2020-01-31 22:03:38.919+0000: 4540: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Jan 31 23:03:38 europa.planet.bn libvirtd[4540]: 2020-01-31 22:03:38.920+0000: 4540: error : qemuProcessReportLogError:1923 : internal error: qemu unexpectedly closed the monitor: 2020-01-31T22:03:38.848199Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Jan 31 23:03:38 europa.planet.bn libvirtd[4540]: 2020-01-31T22:03:38.908211Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 23:03:38 europa.planet.bn libvirtd[4540]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
Jan 31 23:14:13 europa.planet.bn libvirtd[4540]: 2020-01-31 22:14:13.645+0000: 4540: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Jan 31 23:14:13 europa.planet.bn libvirtd[4540]: 2020-01-31 22:14:13.646+0000: 4540: error : qemuProcessReportLogError:1923 : internal error: qemu unexpectedly closed the monitor: 2020-01-31T22:14:13.565981Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Jan 31 23:14:13 europa.planet.bn libvirtd[4540]: 2020-01-31T22:14:13.629819Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 23:14:13 europa.planet.bn libvirtd[4540]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
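Note on the repeated `Failed to get "write" lock` above: recent QEMU takes advisory file locks on disk images, so a second qemu-kvm process (or a stale one that never exited) opening the same image is refused. A minimal sketch of that failure mode, using `fcntl.flock` on a throwaway temp file as an analogy (QEMU itself uses OFD fcntl locks; the file here is hypothetical, not the oVirt image path):

```python
# Sketch: why a second opener gets a "write lock" failure.
# Two independent opens of the same file; the second exclusive,
# non-blocking lock attempt fails while the first is still held.
import fcntl
import tempfile

img = tempfile.NamedTemporaryFile(delete=False)

holder = open(img.name, "r+b")  # first "QEMU": acquires the exclusive lock
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)

second = open(img.name, "r+b")  # second "QEMU": lock is already taken
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked = True
except BlockingIOError:
    locked = False  # the 'Failed to get "write" lock' case

print(locked)  # False while the first handle holds the lock
```

On a live host, `fuser -v` or `lsof` against the image path from the log should reveal which process still holds it open.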
##################################################################################################################
[root@europa ~]# service ovirt-ha-agent status -l
Redirecting to /bin/systemctl status -l ovirt-ha-agent.service
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-31 11:04:05 CET; 12h ago
 Main PID: 27892 (ovirt-ha-agent)
    Tasks: 4
   CGroup: /system.slice/ovirt-ha-agent.service
           └─27892 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Jan 31 11:04:05 europa.planet.bn systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
##################################################################################################################
[root@europa ~]# service ovirt-ha-broker status -l
Redirecting to /bin/systemctl status -l ovirt-ha-broker.service
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-31 10:34:26 CET; 12h ago
 Main PID: 16065 (ovirt-ha-broker)
    Tasks: 12
   CGroup: /system.slice/ovirt-ha-broker.service
           └─16065 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker

Jan 31 10:34:26 europa.planet.bn systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker.
Jan 31 10:38:48 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR Failed to start monitoring domain (sd_uuid=fb9878a0-f641-4eac-a4c6-cea21a2502c5, host_id=2): timeout during domain acquisition
Jan 31 10:38:48 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.listener.Action.start_domain_monitor ERROR Error in RPC call: Failed to start monitoring domain (sd_uuid=fb9878a0-f641-4eac-a4c6-cea21a2502c5, host_id=2): timeout during domain acquisition
Jan 31 10:49:43 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.submonitor_base.SubmonitorBase ERROR Error executing submonitor mgmt-bridge, args {'use_ssl': 'true', 'bridge_name': 'ovirtmgmt', 'address': '0'}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitor_base.py", line 115, in _worker
    self.action(self._options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/mgmt_bridge.py", line 47, in action
    stats = cli.Host.getStats()
  File "/usr/lib/python2.7/site-packages/vdsm/client.py", line 294, in _call
    raise TimeoutError(method, kwargs, timeout)
TimeoutError: Request Host.getStats with args {} timed out after 900 seconds
Jan 31 10:49:44 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.submonitor_base.SubmonitorBase ERROR Error executing submonitor mem-free, args {'use_ssl': 'true', 'address': '0'}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitor_base.py", line 115, in _worker
    self.action(self._options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/mem_free.py", line 43, in action
    stats = cli.Host.getStats()
  File "/usr/lib/python2.7/site-packages/vdsm/client.py", line 294, in _call
    raise TimeoutError(method, kwargs, timeout)
TimeoutError: Request Host.getStats with args {} timed out after 900 seconds
Jan 31 11:07:57 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
Jan 31 11:08:07 europa.planet.bn ovirt-ha-broker[16065]: ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
##################################################################################################################
[root@europa ~]# service supervdsmd status -l
Redirecting to /bin/systemctl status -l supervdsmd.service
● supervdsmd.service - Auxiliary vdsm service for running helper functions as root
   Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static; vendor preset: enabled)
   Active: active (running) since Fri 2020-01-31 10:08:43 CET; 13h ago
 Main PID: 4719 (supervdsmd)
    Tasks: 12
   CGroup: /system.slice/supervdsmd.service
           └─4719 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock

Jan 31 10:08:43 europa.planet.bn systemd[1]: Started Auxiliary vdsm service for running helper functions as root.
Jan 31 10:08:43 europa.planet.bn supervdsmd[4719]: failed to load module nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No such file or directory
##################################################################################################################
[root@europa ~]# service glusterd status -l
Redirecting to /bin/systemctl status -l glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
           └─99-cpu.conf
   Active: active (running) since Fri 2020-01-31 10:08:46 CET; 13h ago
     Docs: man:glusterd(8)
  Process: 4436 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 4525 (glusterd)
   CGroup: /glusterfs.slice/glusterd.service
           ├─ 4525 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           ├─15868 /usr/sbin/glusterfsd -s germany.planet.bn --volfile-id data.germany.planet.bn.gluster_bricks-data-data -p /var/run/gluster/vols/data/germany.planet.bn-gluster_bricks-data-data.pid -S /var/run/gluster/9413c67344baa043.socket --brick-name /gluster_bricks/data/data -l /var/log/glusterfs/bricks/gluster_bricks-data-data.log --xlator-option *-posix.glusterd-uuid=4de4e0b0-c389-4f95-b41d-35b65e1bf274 --process-name brick --brick-port 49152 --xlator-option data-server.listen-port=49152
           ├─15880 /usr/sbin/glusterfsd -s germany.planet.bn --volfile-id engine.germany.planet.bn.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/germany.planet.bn-gluster_bricks-engine-engine.pid -S /var/run/gluster/db163ce21a8faee6.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log --xlator-option *-posix.glusterd-uuid=4de4e0b0-c389-4f95-b41d-35b65e1bf274 --process-name brick --brick-port 49153 --xlator-option engine-server.listen-port=49153
           ├─15892 /usr/sbin/glusterfsd -s germany.planet.bn --volfile-id vmstore.germany.planet.bn.gluster_bricks-vmstore-vmstore -p /var/run/gluster/vols/vmstore/germany.planet.bn-gluster_bricks-vmstore-vmstore.pid -S /var/run/gluster/9df5d3dc57f1e3f8.socket --brick-name /gluster_bricks/vmstore/vmstore -l /var/log/glusterfs/bricks/gluster_bricks-vmstore-vmstore.log --xlator-option *-posix.glusterd-uuid=4de4e0b0-c389-4f95-b41d-35b65e1bf274 --process-name brick --brick-port 49154 --xlator-option vmstore-server.listen-port=49154
           └─17926 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/eed6216d151e3b54.socket --xlator-option *replicate*.node-uuid=4de4e0b0-c389-4f95-b41d-35b65e1bf274 --process-name glustershd --client-pid=-6

Jan 31 10:08:46 europa.planet.bn systemd[1]: Started GlusterFS, a clustered file-system server.
Jan 31 10:08:46 europa.planet.bn glusterd[4525]: [2020-01-31 09:08:46.889947] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume data. Starting local bricks.
Jan 31 10:08:47 europa.planet.bn glusterd[4525]: [2020-01-31 09:08:47.812224] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume engine. Starting local bricks.
Jan 31 10:08:48 europa.planet.bn glusterd[4525]: [2020-01-31 09:08:48.182653] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmstore. Starting local bricks.
Jan 31 10:32:02 europa.planet.bn glusterd[4525]: [2020-01-31 09:32:02.516960] C [MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume data. Stopping local bricks.
Jan 31 10:32:03 europa.planet.bn glusterd[4525]: [2020-01-31 09:32:03.517693] C [MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume engine. Stopping local bricks.
Jan 31 10:32:04 europa.planet.bn glusterd[4525]: [2020-01-31 09:32:04.518347] C [MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume vmstore. Stopping local bricks.
Jan 31 10:34:16 europa.planet.bn glusterd[4525]: [2020-01-31 09:34:16.168824] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume data. Starting local bricks.
Jan 31 10:34:16 europa.planet.bn glusterd[4525]: [2020-01-31 09:34:16.515186] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume engine. Starting local bricks.
Jan 31 10:34:16 europa.planet.bn glusterd[4525]: [2020-01-31 09:34:16.801795] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmstore. Starting local bricks.
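Note on the "Server quorum lost ... Stopping local bricks" lines above: with server-side quorum enabled, glusterd deliberately kills local bricks while too few peers are connected, which takes the engine/vmstore storage away from the VMs. A rough sketch of the quorum rule, under the assumption of the default `cluster.server-quorum-ratio` behaviour (bricks stay up only while more than half of the peers in the pool are reachable):

```python
# Sketch of GlusterFS server-side quorum (assumes the default ratio,
# i.e. strictly more than half of the trusted pool must be connected).
def server_quorum_met(active_peers: int, total_peers: int) -> bool:
    return active_peers > total_peers / 2.0

# 3-node pool (america, europa, asia): losing one peer keeps quorum,
# losing two drops it -- matching the lost/regained transitions in the
# glusterd log above around 10:32 and 10:34.
print(server_quorum_met(2, 3))  # True
print(server_quorum_met(1, 3))  # False
```

So the 10:32 "quorum lost" burst suggests this host briefly saw both other peers as down; checking `gluster peer status` at the time of the event would confirm that.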
##################################################################################################################
[root@europa ~]# hosted-engine --vm-status

--== Host america.planet.bn (id: 1) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : america.planet.bn
Host ID                            : 1
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 59e77475
local_conf_timestamp               : 30629
Host timestamp                     : 30629
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=30629 (Fri Jan 31 23:17:39 2020)
    host-id=1
    score=0
    vm_conf_refresh_time=30629 (Fri Jan 31 23:17:39 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUnexpectedlyDown
    stopped=False
    timeout=Thu Jan 1 09:36:39 1970

--== Host europa.planet.bn (id: 2) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : europa.planet.bn
Host ID                            : 2
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 2d54ca79
local_conf_timestamp               : 47384
Host timestamp                     : 47384
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=47384 (Fri Jan 31 23:17:41 2020)
    host-id=2
    score=0
    vm_conf_refresh_time=47384 (Fri Jan 31 23:17:41 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUnexpectedlyDown
    stopped=False
    timeout=Thu Jan 1 14:15:53 1970

--== Host asia.planet.bn (id: 3) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : asia.planet.bn
Host ID                            : 3
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Paused"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 8c5502c2
local_conf_timestamp               : 45845
Host timestamp                     : 45845
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=45845 (Fri Jan 31 23:17:37 2020)
    host-id=3
    score=3400
    vm_conf_refresh_time=45845 (Fri Jan 31 23:17:37 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineStarting
    stopped=False
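Reading the status above: the "Engine status" field is a JSON blob, and only a host with a non-zero score is a candidate to run the engine VM. A small sketch parsing the three hosts' values as copied from the output (the dict literal below is just a transcription of the log, not an API):

```python
# Sketch: pick the only viable engine host from the vm-status output.
# (score, engine-status JSON) pairs transcribed from the log above.
import json

hosts = {
    "america.planet.bn": (0, '{"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}'),
    "europa.planet.bn": (0, '{"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}'),
    "asia.planet.bn": (3400, '{"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Paused"}'),
}

candidates = [
    name
    for name, (score, status) in hosts.items()
    if score > 0 and json.loads(status)["vm"] != "down_unexpected"
]

print(candidates)  # ['asia.planet.bn']
```

In words: america and europa are penalised to score 0 (state EngineUnexpectedlyDown), so only asia, where the engine VM exists but is Paused, can attempt EngineStarting; the paused VM is consistent with the qemu write-lock and gluster quorum errors in the earlier sections.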