- 2022-05-10 10:23:18 starting migration of VM 268 to node 'node-002' (192.168.100.82)
- 2022-05-10 10:23:18 found local, replicated disk 'vmdisk1-zfs:vm-268-disk-0' (in current VM config)
- 2022-05-10 10:23:18 found local, replicated disk 'vmdisk1-zfs:vm-268-disk-1' (in current VM config)
- 2022-05-10 10:23:18 scsi0: start tracking writes using block-dirty-bitmap 'repl_scsi0'
- 2022-05-10 10:23:18 scsi1: start tracking writes using block-dirty-bitmap 'repl_scsi1'
- 2022-05-10 10:23:18 replicating disk images
- 2022-05-10 10:23:18 start replication job
- Qemu Guest Agent is not running - VM 268 qmp command 'guest-ping' failed - got timeout
- 2022-05-10 10:23:21 guest => VM 268, running => 1309383
- 2022-05-10 10:23:21 volumes => vmdisk1-zfs:vm-268-disk-0,vmdisk1-zfs:vm-268-disk-1
- 2022-05-10 10:23:22 create snapshot '__replicate_268-0_1652149398__' on vmdisk1-zfs:vm-268-disk-0
- 2022-05-10 10:23:22 create snapshot '__replicate_268-0_1652149398__' on vmdisk1-zfs:vm-268-disk-1
- 2022-05-10 10:23:22 using secure transmission, rate limit: 800 MByte/s
- 2022-05-10 10:23:22 full sync 'vmdisk1-zfs:vm-268-disk-0' (__replicate_268-0_1652149398__)
- 2022-05-10 10:23:22 using a bandwidth limit of 800000000 bps for transferring 'vmdisk1-zfs:vm-268-disk-0'
- 2022-05-10 10:23:23 full send of vmdisk1/vm-268-disk-0@__replicate_268-0_1652149398__ estimated size is 26.4G
- 2022-05-10 10:23:23 total estimated size is 26.4G
- 2022-05-10 10:23:23 volume 'vmdisk1/vm-268-disk-0' already exists
- 2022-05-10 10:23:23 command 'zfs send -Rpv -- vmdisk1/vm-268-disk-0@__replicate_268-0_1652149398__' failed: got signal 13
- send/receive failed, cleaning up snapshot(s)..
- 2022-05-10 10:23:23 delete previous replication snapshot '__replicate_268-0_1652149398__' on vmdisk1-zfs:vm-268-disk-0
- 2022-05-10 10:23:23 delete previous replication snapshot '__replicate_268-0_1652149398__' on vmdisk1-zfs:vm-268-disk-1
- 2022-05-10 10:23:23 end replication job with error: command 'set -o pipefail && pvesm export vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ | /usr/bin/cstream -t 800000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=node-002' root@192.168.100.82 -- pvesm import vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ -allow-rename 0' failed: exit code 255
- 2022-05-10 10:23:23 ERROR: command 'set -o pipefail && pvesm export vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ | /usr/bin/cstream -t 800000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=node-002' root@192.168.100.82 -- pvesm import vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ -allow-rename 0' failed: exit code 255
- 2022-05-10 10:23:23 aborting phase 1 - cleanup resources
- 2022-05-10 10:23:23 scsi0: removing block-dirty-bitmap 'repl_scsi0'
- 2022-05-10 10:23:23 scsi1: removing block-dirty-bitmap 'repl_scsi1'
- 2022-05-10 10:23:23 ERROR: migration aborted (duration 00:00:05): command 'set -o pipefail && pvesm export vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ | /usr/bin/cstream -t 800000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=node-002' root@192.168.100.82 -- pvesm import vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ -allow-rename 0' failed: exit code 255
- TASK ERROR: migration aborted
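The root cause in the log above is the line `volume 'vmdisk1/vm-268-disk-0' already exists`: the target node (node-002) already holds a copy of the dataset, so `pvesm import` on the receiving side refuses the full send and aborts, and the `zfs send` side of the pipe dies with signal 13 (SIGPIPE), which surfaces as exit code 255 from the ssh pipeline. Below is a hedged sketch of how one might investigate, assuming shell access to the target node; the dataset and snapshot names are taken directly from the log, and the `zfs` commands are shown only as comments because destroying the leftover dataset is destructive and must be verified by hand first.

```shell
#!/bin/sh
# Hedged sketch: investigating the "volume already exists" replication failure.
# Dataset/snapshot names (vmdisk1/vm-268-disk-0, __replicate_268-0_1652149398__)
# come from the log above; run the zfs steps on the TARGET node (node-002).

# 1. Check whether a stale copy of the dataset was left behind by an
#    earlier replication run or failed migration:
#      zfs list -t all -r vmdisk1/vm-268-disk-0
#
# 2. Only if it holds no data you still need, remove it so the full send
#    can recreate it (DESTRUCTIVE -- double-check the pool and dataset):
#      zfs destroy -r vmdisk1/vm-268-disk-0

# 3. Proxmox replication snapshot names encode the VM id, the job number,
#    and a Unix timestamp, which helps match leftovers to a specific run:
snap="__replicate_268-0_1652149398__"
ts=$(printf '%s\n' "$snap" | sed 's/^__replicate_[0-9]*-[0-9]*_\([0-9]*\)__$/\1/')
echo "snapshot unix timestamp: $ts"
# GNU date prints the UTC wall-clock time; harmless no-op elsewhere.
date -u -d "@$ts" '+%Y-%m-%d %H:%M:%S UTC' 2>/dev/null || true
```

Note that the timestamp in the snapshot name is UTC, so it may differ from the local-time prefixes on the log lines by the node's timezone offset.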