Pasted by a guest, Mar 18th, 2023
WARNING:44k:/home/featurize/data/so-vits-svc-4.0 is not a git repository, therefore hash value comparison will be ignored.
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
./logs/44k/G_0.pth
error, emb_g.weight is not in the checkpoint
INFO:44k:emb_g.weight is not in the checkpoint
load
INFO:44k:Loaded checkpoint './logs/44k/G_0.pth' (iteration 1)
./logs/44k/D_0.pth
load
INFO:44k:Loaded checkpoint './logs/44k/D_0.pth' (iteration 1)
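The "emb_g.weight is not in the checkpoint" lines above show that the loader tolerates missing keys: the speaker embedding was absent from G_0.pth, so it stayed at its fresh initialization while everything else was restored. A minimal sketch of that kind of tolerant loading (a hypothetical helper, not so-vits-svc's actual `utils.load_checkpoint`):

```python
import torch
import torch.nn as nn

def load_compatible(model, ckpt_state):
    """Copy only keys that exist in both and match in shape;
    leave everything else at its fresh initialization."""
    own = model.state_dict()
    loaded, skipped = [], []
    for k, v in own.items():
        if k in ckpt_state and ckpt_state[k].shape == v.shape:
            own[k] = ckpt_state[k]
            loaded.append(k)
        else:
            skipped.append(k)
    model.load_state_dict(own)
    return loaded, skipped

# Toy example: a checkpoint that is missing the speaker embedding 'emb'
model = nn.ModuleDict({"emb": nn.Embedding(200, 8), "lin": nn.Linear(8, 8)})
ckpt = {"lin.weight": torch.ones(8, 8), "lin.bias": torch.zeros(8)}
loaded, skipped = load_compatible(model, ckpt)
print(skipped)  # 'emb.weight' is kept at its random init
```

Note that only the model weights are reconciled this way; as the traceback at the end of the log shows, a stale optimizer state restored alongside the checkpoint can still carry the old shapes.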
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
/environment/miniconda3/lib/python3.7/site-packages/torch/autograd/__init__.py:199: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [32, 1, 4], strides() = [4, 1, 1]
bucket_view.sizes() = [32, 1, 4], strides() = [4, 4, 1] (Triggered internally at ../torch/csrc/distributed/c10d/reducer.cpp:325.)
  allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
  File "train.py", line 310, in <module>
    main()
  File "train.py", line 51, in main
    mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/home/featurize/data/so-vits-svc-4.0/train.py", line 120, in run
    [train_loader, eval_loader], logger, [writer, writer_eval])
  File "/home/featurize/data/so-vits-svc-4.0/train.py", line 202, in train_and_evaluate
    scaler.step(optim_g)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py", line 313, in step
    return optimizer.step(*args, **kwargs)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
    return wrapped(*args, **kwargs)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/optimizer.py", line 140, in wrapper
    out = func(*args, **kwargs)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/adamw.py", line 176, in step
    capturable=group['capturable'])
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/adamw.py", line 232, in adamw
    capturable=capturable)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/adamw.py", line 273, in _single_tensor_adamw
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
RuntimeError: output with shape [1, 256] doesn't match the broadcast shape [200, 256]
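The crash happens inside AdamW's moment update, which suggests the optimizer state restored from the checkpoint still carries a [1, 256] buffer while the current model has a [200, 256] parameter. Given the earlier "emb_g.weight is not in the checkpoint" message, a plausible reading (an assumption, not confirmed by the log) is that `emb_g` is the speaker embedding and the number of speakers differs between the checkpoint and the current config. A minimal sketch reproducing the exact error, using the same in-place update line as `adamw.py`:

```python
import torch

# Hypothetical reproduction: a stale first-moment buffer (exp_avg) with the old
# parameter shape [1, 256] meets a gradient for the resized [200, 256] parameter.
# In-place ops cannot broadcast the output to a larger shape, so AdamW raises.
beta1 = 0.9
exp_avg = torch.zeros(1, 256)   # optimizer state restored from the checkpoint
grad = torch.randn(200, 256)    # gradient for the resized embedding

try:
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)  # same line as adamw.py:273
except RuntimeError as err:
    msg = str(err)
print(msg)
```

If that reading is right, likely remedies would be to make `n_speakers` in the config match the checkpoint, or to resume without restoring the stale optimizer state so its buffers are rebuilt at the new shapes.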