[root@america ~]# service vdsmd status -l
Redirecting to /bin/systemctl status -l vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-01-31 14:48:03 CET; 8h ago
  Process: 3916 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 4285 (vdsmd)
    Tasks: 46
   CGroup: /system.slice/vdsmd.service
           ├─4285 /usr/bin/python2 /usr/share/vdsm/vdsmd
           └─4837 /usr/libexec/ioprocess --read-pipe-fd 70 --write-pipe-fd 69 --max-threads 10 --max-queued-requests 10

Jan 31 22:31:37 america.planet.bn vdsm[4285]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 22:31:37 america.planet.bn vdsm[4285]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 22:31:37 america.planet.bn vdsm[4285]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 22:31:41 america.planet.bn vdsm[4285]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.ovirt-guest-agent.0 already removed
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN Attempting to remove a non existing network: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN Attempting to remove a non existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN File: /var/lib/libvirt/qemu/channels/58be1383-8247-4759-86af-29347c52606d.org.qemu.guest_agent.0 already removed
Jan 31 22:42:14 america.planet.bn vdsm[4285]: WARN File: /var/run/ovirt-vmconsole-console/58be1383-8247-4759-86af-29347c52606d.sock already removed
Jan 31 22:42:18 america.planet.bn vdsm[4285]: WARN Attempting to add an existing net user: ovirtmgmt/58be1383-8247-4759-86af-29347c52606d
##################################################################################################################
[root@america ~]# service libvirtd status -l
Redirecting to /bin/systemctl status -l libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d
           └─unlimited-core.conf
   Active: active (running) since Fri 2020-01-31 14:47:56 CET; 8h ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 3691 (libvirtd)
    Tasks: 18 (limit: 32768)
   CGroup: /system.slice/libvirtd.service
           └─3691 /usr/sbin/libvirtd --listen

Jan 31 22:21:06 america.planet.bn libvirtd[3691]: 2020-01-31T21:21:06.965197Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 22:21:06 america.planet.bn libvirtd[3691]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
Jan 31 22:31:45 america.planet.bn libvirtd[3691]: 2020-01-31 21:31:45.769+0000: 3691: error : qemuMonitorIORead:609 : Unable to read from monitor: Connessione interrotta dal corrispondente (Italian locale: "Connection reset by peer")
Jan 31 22:31:45 america.planet.bn libvirtd[3691]: 2020-01-31 21:31:45.770+0000: 3691: error : qemuProcessReportLogError:1923 : internal error: qemu unexpectedly closed the monitor: 2020-01-31T21:31:45.696984Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Jan 31 22:31:45 america.planet.bn libvirtd[3691]: 2020-01-31T21:31:45.757960Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 22:31:45 america.planet.bn libvirtd[3691]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
Jan 31 22:42:22 america.planet.bn libvirtd[3691]: 2020-01-31 21:42:22.823+0000: 3691: error : qemuMonitorIORead:609 : Unable to read from monitor: Connessione interrotta dal corrispondente (Italian locale: "Connection reset by peer")
Jan 31 22:42:22 america.planet.bn libvirtd[3691]: 2020-01-31 21:42:22.824+0000: 3691: error : qemuProcessReportLogError:1923 : internal error: qemu unexpectedly closed the monitor: 2020-01-31T21:42:22.753658Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Jan 31 22:42:22 america.planet.bn libvirtd[3691]: 2020-01-31T21:42:22.815861Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,id=ua-a466e5c3-1b29-4bd9-9a43-dec3e49a5717,bootindex=1,write-cache=on: Failed to get "write" lock
Jan 31 22:42:22 america.planet.bn libvirtd[3691]: Is another process using the image [/var/run/vdsm/storage/fb9878a0-f641-4eac-a4c6-cea21a2502c5/a466e5c3-1b29-4bd9-9a43-dec3e49a5717/de3056f6-5a71-4711-9366-253fba90981b]?
##################################################################################################################
[root@america ~]# service ovirt-ha-agent status -l
Redirecting to /bin/systemctl status -l ovirt-ha-agent.service
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-31 14:48:16 CET; 8h ago
 Main PID: 4876 (ovirt-ha-agent)
    Tasks: 2
   CGroup: /system.slice/ovirt-ha-agent.service
           └─4876 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Jan 31 14:48:16 america.planet.bn systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
##################################################################################################################
[root@america ~]# service ovirt-ha-broker status -l
Redirecting to /bin/systemctl status -l ovirt-ha-broker.service
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-31 14:47:37 CET; 8h ago
 Main PID: 2309 (ovirt-ha-broker)
    Tasks: 11
   CGroup: /system.slice/ovirt-ha-broker.service
           └─2309 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker

Jan 31 14:47:37 america.planet.bn systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker.
Jan 31 14:48:22 america.planet.bn ovirt-ha-broker[2309]: ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
##################################################################################################################
[root@america ~]# service supervdsmd status -l
Redirecting to /bin/systemctl status -l supervdsmd.service
● supervdsmd.service - Auxiliary vdsm service for running helper functions as root
   Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static; vendor preset: enabled)
   Active: active (running) since Fri 2020-01-31 14:47:56 CET; 8h ago
 Main PID: 3748 (supervdsmd)
    Tasks: 12
   CGroup: /system.slice/supervdsmd.service
           └─3748 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock

Jan 31 14:47:56 america.planet.bn systemd[1]: Started Auxiliary vdsm service for running helper functions as root.
Jan 31 14:47:56 america.planet.bn supervdsmd[3748]: failed to load module nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No such file or directory
##################################################################################################################
[root@america ~]# service glusterd status -l
Redirecting to /bin/systemctl status -l glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
           └─99-cpu.conf
   Active: active (running) since Fri 2020-01-31 14:48:01 CET; 8h ago
     Docs: man:glusterd(8)
  Process: 3620 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 3667 (glusterd)
   CGroup: /glusterfs.slice/glusterd.service
           ├─3667 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           ├─4096 /usr/sbin/glusterfsd -s kansas.planet.bn --volfile-id data.kansas.planet.bn.gluster_bricks-data-data -p /var/run/gluster/vols/data/kansas.planet.bn-gluster_bricks-data-data.pid -S /var/run/gluster/033f7e831d979362.socket --brick-name /gluster_bricks/data/data -l /var/log/glusterfs/bricks/gluster_bricks-data-data.log --xlator-option *-posix.glusterd-uuid=c973a79a-7660-4970-aaaf-7b918abb4cb5 --process-name brick --brick-port 49152 --xlator-option data-server.listen-port=49152
           ├─4118 /usr/sbin/glusterfsd -s kansas.planet.bn --volfile-id engine.kansas.planet.bn.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/kansas.planet.bn-gluster_bricks-engine-engine.pid -S /var/run/gluster/e0af49b6cf29e032.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log --xlator-option *-posix.glusterd-uuid=c973a79a-7660-4970-aaaf-7b918abb4cb5 --process-name brick --brick-port 49153 --xlator-option engine-server.listen-port=49153
           ├─4303 /usr/sbin/glusterfsd -s kansas.planet.bn --volfile-id vmstore.kansas.planet.bn.gluster_bricks-vmstore-vmstore -p /var/run/gluster/vols/vmstore/kansas.planet.bn-gluster_bricks-vmstore-vmstore.pid -S /var/run/gluster/c7a798189ab9dfcb.socket --brick-name /gluster_bricks/vmstore/vmstore -l /var/log/glusterfs/bricks/gluster_bricks-vmstore-vmstore.log --xlator-option *-posix.glusterd-uuid=c973a79a-7660-4970-aaaf-7b918abb4cb5 --process-name brick --brick-port 49154 --xlator-option vmstore-server.listen-port=49154
           └─4653 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8c27e01e5551fb6.socket --xlator-option *replicate*.node-uuid=c973a79a-7660-4970-aaaf-7b918abb4cb5 --process-name glustershd --client-pid=-6

Jan 31 14:47:55 america.planet.bn systemd[1]: Starting GlusterFS, a clustered file-system server...
Jan 31 14:48:01 america.planet.bn systemd[1]: Started GlusterFS, a clustered file-system server.
Jan 31 14:48:02 america.planet.bn glusterd[3667]: [2020-01-31 13:48:02.074045] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume data. Starting local bricks.
Jan 31 14:48:03 america.planet.bn glusterd[3667]: [2020-01-31 13:48:03.036471] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume engine. Starting local bricks.
Jan 31 14:48:03 america.planet.bn glusterd[3667]: [2020-01-31 13:48:03.894293] C [MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmstore. Starting local bricks.
##################################################################################################################
[root@america ~]# hosted-engine --vm-status

--== Host america.planet.bn (id: 1) status ==--

conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : america.planet.bn
Host ID                : 1
Engine status          : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                  : 0
stopped                : False
Local maintenance      : False
crc32                  : 7a6c1805
local_conf_timestamp   : 29626
Host timestamp         : 29626
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=29626 (Fri Jan 31 23:00:56 2020)
    host-id=1
    score=0
    vm_conf_refresh_time=29626 (Fri Jan 31 23:00:56 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUnexpectedlyDown
    stopped=False
    timeout=Thu Jan 1 09:15:24 1970

--== Host europa.planet.bn (id: 2) status ==--

conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : europa.planet.bn
Host ID                : 2
Engine status          : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                  : 0
stopped                : False
Local maintenance      : False
crc32                  : 09036bef
local_conf_timestamp   : 46375
Host timestamp         : 46375
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=46375 (Fri Jan 31 23:00:52 2020)
    host-id=2
    score=0
    vm_conf_refresh_time=46375 (Fri Jan 31 23:00:52 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUnexpectedlyDown
    stopped=False
    timeout=Thu Jan 1 13:54:32 1970

--== Host asia.planet.bn (id: 3) status ==--

conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : asia.planet.bn
Host ID                : 3
Engine status          : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Paused"}
Score                  : 3400
stopped                : False
Local maintenance      : False
crc32                  : 53f62c0d
local_conf_timestamp   : 44835
Host timestamp         : 44835
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=44835 (Fri Jan 31 23:00:47 2020)
    host-id=3
    score=3400
    vm_conf_refresh_time=44835 (Fri Jan 31 23:00:47 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineStarting
    stopped=False
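Reading the status above mechanically: the "Engine status" field of each host is a JSON string, and only asia.planet.bn still has a non-zero HA score, which is why it is the host in EngineStarting state while the engine VM sits paused on it. A small sketch (values copied verbatim from the output above; `viable_hosts` is a hypothetical helper, not part of the ovirt-hosted-engine-ha API) shows this selection:

```python
import json

# ("Engine status" JSON, Score) per host, copied from hosted-engine --vm-status above.
hosts = {
    "america.planet.bn": ('{"reason": "bad vm status", "health": "bad", '
                          '"vm": "down_unexpected", "detail": "Down"}', 0),
    "europa.planet.bn":  ('{"reason": "bad vm status", "health": "bad", '
                          '"vm": "down_unexpected", "detail": "Down"}', 0),
    "asia.planet.bn":    ('{"reason": "bad vm status", "health": "bad", '
                          '"vm": "up", "detail": "Paused"}', 3400),
}


def viable_hosts(hosts):
    """Hosts with a positive HA score whose VM is not hard-down."""
    viable = []
    for name, (status_json, score) in hosts.items():
        status = json.loads(status_json)
        if score > 0 and status["vm"] != "down_unexpected":
            viable.append(name)
    return viable


print(viable_hosts(hosts))  # ['asia.planet.bn']
```

With scores of 0 on hosts 1 and 2 (penalized after EngineUnexpectedlyDown) and the VM paused on host 3, the next step would typically be to investigate why the VM on asia.planet.bn is paused rather than trying to start the engine elsewhere, since the disk lock seen in the libvirtd log belongs to that paused VM.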