python train.py -c configs/config.json -m 44k
INFO:44k:{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'p3': 0}, 'model_dir': './logs/44k'}
WARNING:44k:/home/featurize/data/so-vits-svc-4.0 is not a git repository, therefore hash value comparison will be ignored.
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
./logs/44k/G_0.pth
error, emb_g.weight is not in the checkpoint
INFO:44k:emb_g.weight is not in the checkpoint
load
INFO:44k:Loaded checkpoint './logs/44k/G_0.pth' (iteration 1)
./logs/44k/D_0.pth
load
INFO:44k:Loaded checkpoint './logs/44k/D_0.pth' (iteration 1)
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
/environment/miniconda3/lib/python3.7/site-packages/torch/autograd/__init__.py:199: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [32, 1, 4], strides() = [4, 1, 1]
bucket_view.sizes() = [32, 1, 4], strides() = [4, 4, 1] (Triggered internally at ../torch/csrc/distributed/c10d/reducer.cpp:325.)
  allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
  File "train.py", line 310, in <module>
    main()
  File "train.py", line 51, in main
    mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/home/featurize/data/so-vits-svc-4.0/train.py", line 120, in run
    [train_loader, eval_loader], logger, [writer, writer_eval])
  File "/home/featurize/data/so-vits-svc-4.0/train.py", line 202, in train_and_evaluate
    scaler.step(optim_g)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py", line 313, in step
    return optimizer.step(*args, **kwargs)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
    return wrapped(*args, **kwargs)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/optimizer.py", line 140, in wrapper
    out = func(*args, **kwargs)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/adamw.py", line 176, in step
    capturable=group['capturable'])
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/adamw.py", line 232, in adamw
    capturable=capturable)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/optim/adamw.py", line 273, in _single_tensor_adamw
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
RuntimeError: output with shape [1, 256] doesn't match the broadcast shape [200, 256]
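
The final RuntimeError most likely points at a speaker-count mismatch: the model built from config.json expects a speaker embedding of n_speakers x gin_channels = 200 x 256, while the AdamW moment tensor restored from ./logs/44k/G_0.pth is [1, 256], so the very first optimizer step fails. The earlier "emb_g.weight is not in the checkpoint" line points the same way: G_0.pth appears to have been exported for a different speaker setup than this config. Below is a minimal read-only inspection sketch, assuming the VITS-style checkpoint layout that so-vits-svc uses (a dict with 'model' and 'optimizer' entries) and the paths from the log above; it only loads the files and prints shapes.

import json
import torch

ckpt = torch.load("./logs/44k/G_0.pth", map_location="cpu")
with open("configs/config.json") as f:
    cfg = json.load(f)

# What the current config implies for the speaker embedding.
print("checkpoint keys:", list(ckpt.keys()))
print("expected emb_g.weight shape:",
      (cfg["model"]["n_speakers"], cfg["model"]["gin_channels"]))

# List saved 2-D tensors whose second dimension equals gin_channels, so the
# tensor behind the [1, 256] vs [200, 256] mismatch is easy to spot.
# Assumes ckpt["model"] is a state dict; adjust if your file differs.
for name, tensor in ckpt.get("model", {}).items():
    if torch.is_tensor(tensor) and tensor.dim() == 2 \
            and tensor.shape[1] == cfg["model"]["gin_channels"]:
        print(name, tuple(tensor.shape))

If the printed shapes confirm the mismatch, the usual remedies are to make n_speakers in config.json agree with the pretrained G_0.pth/D_0.pth, or to start from base checkpoints generated for this config; those are hedged suggestions based on the shapes above, not something the log itself confirms.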