2022-05-10 10:23:18 starting migration of VM 268 to node 'node-002' (192.168.100.82)
2022-05-10 10:23:18 found local, replicated disk 'vmdisk1-zfs:vm-268-disk-0' (in current VM config)
2022-05-10 10:23:18 found local, replicated disk 'vmdisk1-zfs:vm-268-disk-1' (in current VM config)
2022-05-10 10:23:18 scsi0: start tracking writes using block-dirty-bitmap 'repl_scsi0'
2022-05-10 10:23:18 scsi1: start tracking writes using block-dirty-bitmap 'repl_scsi1'
2022-05-10 10:23:18 replicating disk images
2022-05-10 10:23:18 start replication job
Qemu Guest Agent is not running - VM 268 qmp command 'guest-ping' failed - got timeout
2022-05-10 10:23:21 guest => VM 268, running => 1309383
2022-05-10 10:23:21 volumes => vmdisk1-zfs:vm-268-disk-0,vmdisk1-zfs:vm-268-disk-1
2022-05-10 10:23:22 create snapshot '__replicate_268-0_1652149398__' on vmdisk1-zfs:vm-268-disk-0
2022-05-10 10:23:22 create snapshot '__replicate_268-0_1652149398__' on vmdisk1-zfs:vm-268-disk-1
2022-05-10 10:23:22 using secure transmission, rate limit: 800 MByte/s
2022-05-10 10:23:22 full sync 'vmdisk1-zfs:vm-268-disk-0' (__replicate_268-0_1652149398__)
2022-05-10 10:23:22 using a bandwidth limit of 800000000 bps for transferring 'vmdisk1-zfs:vm-268-disk-0'
2022-05-10 10:23:23 full send of vmdisk1/vm-268-disk-0@__replicate_268-0_1652149398__ estimated size is 26.4G
2022-05-10 10:23:23 total estimated size is 26.4G
2022-05-10 10:23:23 volume 'vmdisk1/vm-268-disk-0' already exists
2022-05-10 10:23:23 command 'zfs send -Rpv -- vmdisk1/vm-268-disk-0@__replicate_268-0_1652149398__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2022-05-10 10:23:23 delete previous replication snapshot '__replicate_268-0_1652149398__' on vmdisk1-zfs:vm-268-disk-0
2022-05-10 10:23:23 delete previous replication snapshot '__replicate_268-0_1652149398__' on vmdisk1-zfs:vm-268-disk-1
2022-05-10 10:23:23 end replication job with error: command 'set -o pipefail && pvesm export vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ | /usr/bin/cstream -t 800000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=node-002' root@192.168.100.82 -- pvesm import vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ -allow-rename 0' failed: exit code 255
2022-05-10 10:23:23 ERROR: command 'set -o pipefail && pvesm export vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ | /usr/bin/cstream -t 800000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=node-002' root@192.168.100.82 -- pvesm import vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ -allow-rename 0' failed: exit code 255
2022-05-10 10:23:23 aborting phase 1 - cleanup resources
2022-05-10 10:23:23 scsi0: removing block-dirty-bitmap 'repl_scsi0'
2022-05-10 10:23:23 scsi1: removing block-dirty-bitmap 'repl_scsi1'
2022-05-10 10:23:23 ERROR: migration aborted (duration 00:00:05): command 'set -o pipefail && pvesm export vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ | /usr/bin/cstream -t 800000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=node-002' root@192.168.100.82 -- pvesm import vmdisk1-zfs:vm-268-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_268-0_1652149398__ -allow-rename 0' failed: exit code 255
TASK ERROR: migration aborted
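
The actual failure is the line "volume 'vmdisk1/vm-268-disk-0' already exists": the target node (node-002) refuses the full sync because a dataset with that name is already present there, so the 'pvesm import' side of the pipeline exits, 'zfs send' on the source gets SIGPIPE (signal 13), and the whole 'pvesm export | cstream | ssh pvesm import' command fails with exit code 255, aborting the migration in phase 1. A minimal diagnostic and cleanup sketch follows, assuming the dataset on node-002 is a stale leftover from an earlier interrupted replication and contains no data you still need (dataset and node names are taken from the log above; the cleanup step is destructive, so verify before running it):

  # Run on the TARGET node (node-002) - inspect what already exists there
  zfs list -t all -r vmdisk1 | grep vm-268
  zfs list -t snapshot vmdisk1/vm-268-disk-0

  # If the dataset really is a stale leftover, remove it so the next full
  # sync can recreate it (destructive - make sure this is the target, not
  # the source, and that nothing references this disk on node-002)
  zfs destroy -r vmdisk1/vm-268-disk-0

  # Back on the SOURCE node, re-trigger the replication job (job id assumed
  # to be 268-0 from the snapshot names above), then retry the migration
  pvesr run --id 268-0 --verbose

If the target dataset does contain data you need, renaming or backing it up first instead of destroying it is the safer route; the key point is that replication cannot do a full send onto an existing dataset that shares no replication snapshot with the source.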