piper train error
chinhhut — May 11th, 2023
(piper) quang@quang-MS-7817:~/Downloads/piper/src/python$ ./train.sh
DEBUG:piper_train:Namespace(accelerator='gpu', accumulate_grad_batches=None, amp_backend='native', amp_level=None, auto_lr_find=False, auto_scale_batch_size=False, auto_select_gpus=False, batch_size=2, benchmark=None, check_val_every_n_epoch=1, checkpoint_epochs=None, dataset_dir='/home/quang/Downloads/piper/output/', default_root_dir=None, detect_anomaly=False, deterministic=None, devices='1', enable_checkpointing=True, enable_model_summary=True, enable_progress_bar=True, fast_dev_run=False, filter_channels=768, gpus=None, gradient_clip_algorithm=None, gradient_clip_val=None, hidden_channels=192, inter_channels=192, ipus=None, limit_predict_batches=None, limit_test_batches=None, limit_train_batches=None, limit_val_batches=None, log_every_n_steps=50, logger=True, max_epochs=10000, max_phoneme_ids=None, max_steps=-1, max_time=None, min_epochs=None, min_steps=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', n_heads=2, n_layers=6, num_nodes=1, num_processes=None, num_sanity_val_steps=2, num_test_examples=5, overfit_batches=0.0, plugins=None, precision=16, profiler=None, quality='medium', reload_dataloaders_every_n_epochs=0, replace_sampler_ddp=True, resume_from_checkpoint=None, seed=1234, strategy=None, sync_batchnorm=False, tpu_cores=None, track_grad_norm=-1, val_check_interval=None, validation_split=0.05, weights_save_path=None)
Using 16bit native Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /tmp/tmptm2ffvpg
INFO:torch.distributed.nn.jit.instantiator:Writing /tmp/tmptm2ffvpg/_remote_module_non_sriptable.py
DEBUG:vits.dataset:Loading dataset: /home/quang/Downloads/piper/output/dataset.jsonl
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

| Name | Type | Params
-----------------------------------------------------
0 | model_g | SynthesizerTrn | 23.6 M
1 | model_d | MultiPeriodDiscriminator | 46.7 M
-----------------------------------------------------
70.4 M Trainable params
0 Non-trainable params
70.4 M Total params
140.773 Total estimated model params size (MB)
DEBUG:fsspec.local:open file: /home/quang/Downloads/piper/output/lightning_logs/version_8/hparams.yaml
Sanity Checking: 0it [00:00, ?it/s]/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:236: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 4 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
Sanity Checking DataLoader 0: 0%| | 0/2 [00:00<?, ?it/s]/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py:98: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 2. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
warning_cache.warn(
warning: audio amplitude out of range, auto clipped.
warning: audio amplitude out of range, auto clipped.
warning: audio amplitude out of range, auto clipped.
warning: audio amplitude out of range, auto clipped.
warning: audio amplitude out of range, auto clipped.
Sanity Checking DataLoader 0: 50%|███████████████████████████████████████████████████████████████████████████████████████ | 1/2 [00:04<00:04, 4.41s/it]warning: audio amplitude out of range, auto clipped.
warning: audio amplitude out of range, auto clipped.
warning: audio amplitude out of range, auto clipped.
warning: audio amplitude out of range, auto clipped.
warning: audio amplitude out of range, auto clipped.
/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:236: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 4 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
Epoch 0: 0%| | 0/273 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/quang/miniconda3/envs/piper/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/quang/miniconda3/envs/piper/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/quang/Downloads/piper/src/python/piper_train/__main__.py", line 95, in <module>
main()
File "/home/quang/Downloads/piper/src/python/piper_train/__main__.py", line 88, in main
trainer.fit(model)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
self._call_and_handle_interrupt(
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1166, in _run
results = self._run_stage()
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
return self._run_train()
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1283, in _run_train
self.fit_loop.run()
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 271, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 203, in advance
batch_output = self.batch_loop.run(kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 87, in advance
outputs = self.optimizer_loop.run(optimizers, kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 201, in advance
result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 248, in _run_optimization
self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 358, in _optimizer_step
self.trainer._call_lightning_module_hook(
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1550, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1705, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 216, in optimizer_step
return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 85, in optimizer_step
closure_result = closure()
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 146, in __call__
self._result = self.closure(*args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 141, in closure
self._backward_fn(step_output.closure_loss)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 304, in backward_fn
self.trainer._call_strategy_hook("backward", loss, optimizer, opt_idx)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1704, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 191, in backward
self.precision_plugin.backward(self.lightning_module, closure_loss, optimizer, optimizer_idx, *args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 80, in backward
model.backward(closure_loss, optimizer, optimizer_idx, *args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1450, in backward
loss.backward(*args, **kwargs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: view_as_complex is only supported for float and double tensors, but got a tensor of scalar type: Half
Epoch 0: 0%| | 0/273 [00:01<?, ?it/s]