[CODE]PVEA1:
Jan 16 23:17:01 pvea1 CRON[3192939]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jan 16 23:17:01 pvea1 CRON[3192940]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jan 16 23:17:01 pvea1 CRON[3192939]: pam_unix(cron:session): session closed for user root
Jan 16 23:17:37 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is down
Jan 16 23:17:37 pvea1 kernel: tg3 0000:01:00.1 eno2: Link is down
Jan 16 23:17:37 pvea1 kernel: vmbr2: port 2(eno3) entered disabled state
Jan 16 23:17:37 pvea1 corosync[4222]: [KNET ] link: host: 2 link: 0 is down
Jan 16 23:17:37 pvea1 corosync[4222]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Jan 16 23:17:37 pvea1 corosync[4222]: [KNET ] host: host: 2 has no active links
Jan 16 23:17:37 pvea1 corosync[4222]: [TOTEM ] Token has not been received in 2250 ms
Jan 16 23:17:38 pvea1 kernel: vmbr1: port 1(eno2) entered disabled state
Jan 16 23:17:38 pvea1 corosync[4222]: [TOTEM ] A processor failed, forming new configuration: token timed out (3000ms), waiting 3600ms for consensus.
Jan 16 23:17:39 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is up at 100 Mbps, full duplex
Jan 16 23:17:39 pvea1 kernel: tg3 0000:02:00.0 eno3: Flow control is on for TX and on for RX
Jan 16 23:17:39 pvea1 kernel: tg3 0000:02:00.0 eno3: EEE is enabled
Jan 16 23:17:39 pvea1 kernel: vmbr2: port 2(eno3) entered blocking state
Jan 16 23:17:39 pvea1 kernel: vmbr2: port 2(eno3) entered forwarding state
Jan 16 23:17:40 pvea1 kernel: tg3 0000:01:00.1 eno2: Link is up at 100 Mbps, full duplex
Jan 16 23:17:40 pvea1 kernel: tg3 0000:01:00.1 eno2: Flow control is on for TX and on for RX
Jan 16 23:17:40 pvea1 kernel: tg3 0000:01:00.1 eno2: EEE is enabled
Jan 16 23:17:40 pvea1 kernel: vmbr1: port 1(eno2) entered blocking state
Jan 16 23:17:40 pvea1 kernel: vmbr1: port 1(eno2) entered forwarding state
Jan 16 23:17:42 pvea1 corosync[4222]: [QUORUM] Sync members[1]: 1
Jan 16 23:17:42 pvea1 corosync[4222]: [QUORUM] Sync left[1]: 2
Jan 16 23:17:42 pvea1 corosync[4222]: [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms)
Jan 16 23:17:42 pvea1 corosync[4222]: [TOTEM ] A new membership (1.103) was formed. Members left: 2
Jan 16 23:17:42 pvea1 corosync[4222]: [TOTEM ] Failed to receive the leave message. failed: 2
Jan 16 23:17:42 pvea1 pmxcfs[4217]: [dcdb] notice: members: 1/4217
Jan 16 23:17:42 pvea1 pmxcfs[4217]: [status] notice: members: 1/4217
Jan 16 23:17:43 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retry 10
Jan 16 23:17:44 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retry 20
Jan 16 23:17:45 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retry 30
Jan 16 23:17:46 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retry 40
Jan 16 23:17:46 pvea1 corosync[4222]: [QUORUM] Members[1]: 1
Jan 16 23:17:46 pvea1 corosync[4222]: [MAIN ] Completed service synchronization, ready to provide service.
Jan 16 23:17:46 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retried 42 times
Jan 16 23:17:46 pvea1 pve-ha-crm[4435]: node 'pvea2': state changed from 'online' => 'unknown'
Jan 16 23:18:13 pvea1 pvescheduler[3787580]: INFO: Finished Backup of VM 100 (00:18:07)
Jan 16 23:18:16 pvea1 pvescheduler[3787580]: INFO: Backup job finished successfully
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: service 'vm:101': state changed from 'started' to 'fence'
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: node 'pvea2': state changed from 'unknown' => 'fence'
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: lost lock 'ha_agent_pvea2_lock - can't get cfs lock
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: successfully acquired lock 'ha_agent_pvea2_lock'
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: fencing: acknowledged - got agent lock for node 'pvea2'
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: node 'pvea2': state changed from 'fence' => 'unknown'
Jan 16 23:18:37 pvea1 postfix/pickup[3784564]: 029391D719: uid=0 from=<root>
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 029391D719: message-id=<20240116221837.029391D719@pvea1.au.lan>
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 029391D719: from=<root@pvea1.au.lan>, size=1225, nrcpt=1 (queue active)
Jan 16 23:18:37 pvea1 postfix/pickup[3784564]: 13C921D71A: uid=0 from=<root>
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 13C921D71A: message-id=<20240116221837.13C921D71A@pvea1.au.lan>
Jan 16 23:18:37 pvea1 pve-ha-crm[4435]: service 'vm:101': state changed from 'fence' to 'recovery'
Jan 16 23:18:37 pvea1 pve-ha-crm[4435]: recover service 'vm:101' from fenced node 'pvea2' to node 'pvea1'
Jan 16 23:18:37 pvea1 pve-ha-crm[4435]: service 'vm:101': state changed from 'recovery' to 'started' (node = pvea1)
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 13C921D71A: from=<root@pvea1.au.lan>, size=1291, nrcpt=1 (queue active)
Jan 16 23:18:37 pvea1 proxmox-mail-fo[3433531]: pvea1 proxmox-mail-forward[3433531]: forward mail to <nope@nope.nope>
Jan 16 23:18:37 pvea1 proxmox-mail-fo[3433533]: pvea1 proxmox-mail-forward[3433533]: forward mail to <nope@nope.nope>
Jan 16 23:18:37 pvea1 postfix/pickup[3784564]: 423031D71B: uid=65534 from=<root>
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 423031D71B: message-id=<20240116221837.13C921D71A@pvea1.au.lan>
Jan 16 23:18:37 pvea1 postfix/local[3433474]: 029391D719: to=<root@pvea1.au.lan>, orig_to=<root>, relay=local, delay=0.37, delays=0.18/0.13/0/0.06, dsn=2.0.0, status=sent (delivered to command: /usr/bin/proxmox-mail-forward)
Jan 16 23:18:37 pvea1 postfix/local[3433530]: 13C921D71A: to=<root@pvea1.au.lan>, orig_to=<root>, relay=local, delay=0.26, delays=0.14/0.07/0/0.06, dsn=2.0.0, status=sent (delivered to command: /usr/bin/proxmox-mail-forward)
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 029391D719: removed
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 13C921D71A: removed
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 423031D71B: from=<root@pvea1.au.lan>, size=1456, nrcpt=1 (queue active)
Jan 16 23:18:37 pvea1 postfix/pickup[3784564]: 4B4AE1D71C: uid=65534 from=<root>
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 4B4AE1D71C: message-id=<20240116221837.029391D719@pvea1.au.lan>
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 4B4AE1D71C: from=<root@pvea1.au.lan>, size=1390, nrcpt=1 (queue active)
Jan 16 23:18:37 pvea1 postfix/smtp[3433541]: 423031D71B: to=<nope@nope.nope>, relay=none, delay=0.24, delays=0.09/0.01/0.15/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=agcolatecnovite.it type=AAAA: Host not found)
Jan 16 23:18:37 pvea1 postfix/smtp[3433548]: 4B4AE1D71C: to=<nope@nope.nope>, relay=none, delay=0.25, delays=0.12/0.01/0.12/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=agcolatecnovite.it type=AAAA: Host not found)
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 423031D71B: removed
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 7A8501D98D: message-id=<20240116221837.7A8501D98D@pvea1.au.lan>
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 4B4AE1D71C: removed
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 7AA581D98E: message-id=<20240116221837.7AA581D98E@pvea1.au.lan>
Jan 16 23:18:41 pvea1 pve-ha-lrm[4446]: watchdog active
Jan 16 23:18:41 pvea1 pve-ha-lrm[4446]: status change wait_for_agent_lock => active
Jan 16 23:18:41 pvea1 pve-ha-lrm[3433667]: starting service vm:101
Jan 16 23:18:41 pvea1 pve-ha-lrm[3433668]: start VM 101: UPID:pvea1:003464C4:0A5FEC70:65A700C1:qmstart:101:root@pam:
Jan 16 23:18:41 pvea1 pve-ha-lrm[3433667]: <root@pam> starting task UPID:pvea1:003464C4:0A5FEC70:65A700C1:qmstart:101:root@pam:
Jan 16 23:18:43 pvea1 systemd[1]: Started 101.scope.
Jan 16 23:18:43 pvea1 kernel: device tap101i0 entered promiscuous mode
Jan 16 23:18:44 pvea1 kernel: vmbr0v2: port 3(fwpr101p0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: vmbr0v2: port 3(fwpr101p0) entered disabled state
Jan 16 23:18:44 pvea1 kernel: device fwpr101p0 entered promiscuous mode
Jan 16 23:18:44 pvea1 kernel: vmbr0v2: port 3(fwpr101p0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: vmbr0v2: port 3(fwpr101p0) entered forwarding state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jan 16 23:18:44 pvea1 kernel: device fwln101i0 entered promiscuous mode
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Jan 16 23:18:44 pvea1 pve-ha-lrm[3433667]: <root@pam> end task UPID:pvea1:003464C4:0A5FEC70:65A700C1:qmstart:101:root@pam: OK
Jan 16 23:18:44 pvea1 pve-ha-lrm[3433667]: service status vm:101 started
Jan 16 23:19:03 pvea1 pvescheduler[3459945]: VM 101 qmp command failed - VM 101 qmp command 'guest-ping' failed - got timeout
Jan 16 23:19:03 pvea1 pvescheduler[3459945]: QEMU Guest Agent is not running - VM 101 qmp command 'guest-ping' failed - got timeout
Jan 16 23:19:22 pvea1 pvescheduler[3459945]: 101-0: got unexpected replication job error - command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pvea2' root@192.168.29.30 pvecm mtunnel -migration_network 10.99.99.2/24 -get_migration_ip' failed: exit code 255
Jan 16 23:19:22 pvea1 postfix/pickup[3784564]: 69A621D71E: uid=0 from=<root>
Jan 16 23:19:22 pvea1 postfix/cleanup[3433470]: 69A621D71E: message-id=<20240116221922.69A621D71E@pvea1.au.lan>
Jan 16 23:19:22 pvea1 postfix/qmgr[4208]: 69A621D71E: from=<root@pvea1.au.lan>, size=756, nrcpt=1 (queue active)
Jan 16 23:19:22 pvea1 proxmox-mail-fo[3470338]: pvea1 proxmox-mail-forward[3470338]: forward mail to <nope@nope.nope>
Jan 16 23:19:22 pvea1 postfix/pickup[3784564]: 7683D1D5AD: uid=65534 from=<root>
Jan 16 23:19:22 pvea1 postfix/cleanup[3433583]: 7683D1D5AD: message-id=<20240116221922.69A621D71E@pvea1.au.lan>
Jan 16 23:19:22 pvea1 postfix/local[3433474]: 69A621D71E: to=<root@pvea1.au.lan>, orig_to=<root>, relay=local, delay=0.08, delays=0.05/0/0/0.03, dsn=2.0.0, status=sent (delivered to command: /usr/bin/proxmox-mail-forward)
Jan 16 23:19:22 pvea1 postfix/qmgr[4208]: 69A621D71E: removed
Jan 16 23:19:22 pvea1 postfix/qmgr[4208]: 7683D1D5AD: from=<root@pvea1.au.lan>, size=921, nrcpt=1 (queue active)
Jan 16 23:19:22 pvea1 postfix/smtp[3433541]: 7683D1D5AD: to=<nope@nope.nope>, relay=none, delay=0.15, delays=0.03/0/0.12/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=agcolatecnovite.it type=AAAA: Host not found)
Jan 16 23:19:22 pvea1 postfix/qmgr[4208]: 7683D1D5AD: removed
Jan 16 23:19:22 pvea1 postfix/cleanup[3433470]: 994321D98F: message-id=<20240116221922.994321D98F@pvea1.au.lan>
Jan 16 23:20:03 pvea1 pvescheduler[3476774]: 100-0: got unexpected replication job error - command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pvea2' root@192.168.29.30 pvecm mtunnel -migration_network 10.99.99.2/24 -get_migration_ip' failed: exit code 255
Jan 16 23:20:03 pvea1 postfix/pickup[3784564]: A7BFE1CE34: uid=0 from=<root>
Jan 16 23:20:03 pvea1 postfix/cleanup[3433583]: A7BFE1CE34: message-id=<20240116222003.A7BFE1CE34@pvea1.au.lan>
Jan 16 23:20:03 pvea1 postfix/qmgr[4208]: A7BFE1CE34: from=<root@pvea1.au.lan>, size=760, nrcpt=1 (queue active)
Jan 16 23:20:03 pvea1 proxmox-mail-fo[3476895]: pvea1 proxmox-mail-forward[3476895]: forward mail to <nope@nope.nope>
Jan 16 23:20:03 pvea1 postfix/pickup[3784564]: BCE741D71F: uid=65534 from=<root>
Jan 16 23:20:03 pvea1 postfix/cleanup[3433470]: BCE741D71F: message-id=<20240116222003.A7BFE1CE34@pvea1.au.lan>
Jan 16 23:20:03 pvea1 postfix/local[3433530]: A7BFE1CE34: to=<root@pvea1.au.lan>, orig_to=<root>, relay=local, delay=0.11, delays=0.04/0/0/0.06, dsn=2.0.0, status=sent (delivered to command: /usr/bin/proxmox-mail-forward)
Jan 16 23:20:03 pvea1 postfix/qmgr[4208]: A7BFE1CE34: removed
Jan 16 23:20:03 pvea1 postfix/qmgr[4208]: BCE741D71F: from=<root@pvea1.au.lan>, size=925, nrcpt=1 (queue active)
Jan 16 23:20:03 pvea1 postfix/smtp[3433548]: BCE741D71F: to=<nope@nope.nope>, relay=none, delay=0.15, delays=0.02/0.01/0.12/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=agcolatecnovite.it type=AAAA: Host not found)
Jan 16 23:20:03 pvea1 postfix/qmgr[4208]: BCE741D71F: removed
Jan 16 23:20:03 pvea1 postfix/cleanup[3433583]: E22061CE35: message-id=<20240116222003.E22061CE35@pvea1.au.lan>
Jan 16 23:20:40 pvea1 kernel: tg3 0000:01:00.1 eno2: Link is down
Jan 16 23:20:41 pvea1 kernel: vmbr1: port 1(eno2) entered disabled state
Jan 16 23:20:41 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is down
Jan 16 23:20:42 pvea1 kernel: vmbr2: port 2(eno3) entered disabled state
Jan 16 23:20:43 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is up at 1000 Mbps, full duplex
Jan 16 23:20:43 pvea1 kernel: tg3 0000:02:00.0 eno3: Flow control is on for TX and on for RX
Jan 16 23:20:43 pvea1 kernel: tg3 0000:02:00.0 eno3: EEE is enabled
Jan 16 23:20:43 pvea1 kernel: vmbr2: port 2(eno3) entered blocking state
Jan 16 23:20:43 pvea1 kernel: vmbr2: port 2(eno3) entered forwarding state
Jan 16 23:20:44 pvea1 kernel: tg3 0000:01:00.1 eno2: Link is up at 1000 Mbps, full duplex
Jan 16 23:20:44 pvea1 kernel: tg3 0000:01:00.1 eno2: Flow control is on for TX and on for RX
Jan 16 23:20:44 pvea1 kernel: tg3 0000:01:00.1 eno2: EEE is enabled
Jan 16 23:20:44 pvea1 kernel: vmbr1: port 1(eno2) entered blocking state
Jan 16 23:20:44 pvea1 kernel: vmbr1: port 1(eno2) entered forwarding state
Jan 16 23:20:44 pvea1 corosync[4222]: [KNET ] rx: host: 2 link: 0 is up
Jan 16 23:20:44 pvea1 corosync[4222]: [KNET ] link: Resetting MTU for link 0 because host 2 joined
Jan 16 23:20:44 pvea1 corosync[4222]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Jan 16 23:20:44 pvea1 corosync[4222]: [KNET ] pmtud: Global data MTU changed to: 1397
Jan 16 23:20:46 pvea1 corosync[4222]: [QUORUM] Sync members[2]: 1 2
Jan 16 23:20:46 pvea1 corosync[4222]: [QUORUM] Sync joined[1]: 2
Jan 16 23:20:46 pvea1 corosync[4222]: [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms)
Jan 16 23:20:46 pvea1 corosync[4222]: [TOTEM ] A new membership (1.108) was formed. Members joined: 2
Jan 16 23:20:46 pvea1 corosync[4222]: [QUORUM] Members[2]: 1 2
Jan 16 23:20:46 pvea1 corosync[4222]: [MAIN ] Completed service synchronization, ready to provide service.
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: members: 1/4217, 2/12277
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: starting data syncronisation
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: members: 1/4217, 2/12277
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: starting data syncronisation
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: received sync request (epoch 1/4217/0000000D)
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: received sync request (epoch 1/4217/0000000D)
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: received all states
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: leader is 1/4217
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: synced members: 1/4217
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: start sending inode updates
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: sent all (11) updates
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: all data is up to date
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: received all states
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: all data is up to date
Jan 16 23:20:50 pvea1 pmxcfs[4217]: [status] notice: received log
Jan 16 23:20:50 pvea1 pmxcfs[4217]: [status] notice: received log
Jan 16 23:20:56 pvea1 pve-ha-crm[4435]: node 'pvea2': state changed from 'unknown' => 'online'
Jan 16 23:20:56 pvea1 pve-ha-crm[4435]: migrate service 'vm:101' to node 'pvea2' (running)
Jan 16 23:20:56 pvea1 pve-ha-crm[4435]: service 'vm:101': state changed from 'started' to 'migrate' (node = pvea1, target = pvea2)
Jan 16 23:21:01 pvea1 pve-ha-lrm[3481555]: <root@pam> starting task UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:
Jan 16 23:21:06 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
Jan 16 23:21:11 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
Jan 16 23:21:16 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
Jan 16 23:21:21 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
Jan 16 23:21:21 pvea1 pve-ha-lrm[3481556]: zfs error: could not find any snapshots to destroy; check snapshot names.
Jan 16 23:21:24 pvea1 pmxcfs[4217]: [status] notice: received log
Jan 16 23:21:25 pvea1 pmxcfs[4217]: [status] notice: received log
Jan 16 23:21:26 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
[/CODE]
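For triage, the key events in an excerpt like the one above — NIC link flaps and the corosync membership changes they trigger — can be pulled out with a short filter. A minimal sketch (not part of the original log; the regexes assume the exact tg3 "Link is up/down" and TOTEM "A new membership" message formats shown, so other NIC drivers or corosync versions would need different patterns):

```python
import re

# Matches e.g. "kernel: tg3 0000:02:00.0 eno3: Link is down"
LINK = re.compile(r'kernel: tg3 \S+ (\S+): Link is (up|down)')
# Matches e.g. "corosync[4222]: [TOTEM ] A new membership (1.103) was formed."
MEMBER = re.compile(r'corosync\[\d+\]: \[TOTEM \] A new membership \((\S+)\) was formed')

def summarize(lines):
    """Collect (timestamp, interface, state) link flaps and
    (timestamp, ring_id) corosync membership changes."""
    flaps, memberships = [], []
    for line in lines:
        ts = line[:15]  # "Mon DD HH:MM:SS" in classic syslog format
        if m := LINK.search(line):
            flaps.append((ts, m.group(1), m.group(2)))
        elif m := MEMBER.search(line):
            memberships.append((ts, m.group(1)))
    return flaps, memberships

# Sample lines taken verbatim from the log above
sample = [
    "Jan 16 23:17:37 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is down",
    "Jan 16 23:17:39 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is up at 100 Mbps, full duplex",
    "Jan 16 23:17:42 pvea1 corosync[4222]: [TOTEM ] A new membership (1.103) was formed. Members left: 2",
]
flaps, memberships = summarize(sample)
print(flaps)        # [('Jan 16 23:17:37', 'eno3', 'down'), ('Jan 16 23:17:39', 'eno3', 'up')]
print(memberships)  # [('Jan 16 23:17:42', '1.103')]
```

Run against the full journal (`journalctl -k -u corosync`), this makes the correlation easy to see: both 23:17:37 links drop together and corosync loses host 2 within the same second, which points at a shared switch or cabling event rather than a single failing port — note the ports also renegotiate down to 100 Mbps on the first recovery.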