- (piper) quang@quang-MS-7817:~/Downloads/piper/src/python$ ./train.sh
- DEBUG:piper_train:Namespace(accelerator='gpu', accumulate_grad_batches=None, amp_backend='native', amp_level=None, auto_lr_find=False, auto_scale_batch_size=False, auto_select_gpus=False, batch_size=2, benchmark=None, check_val_every_n_epoch=1, checkpoint_epochs=None, dataset_dir='/home/quang/Downloads/piper/output/', default_root_dir=None, detect_anomaly=False, deterministic=None, devices='1', enable_checkpointing=True, enable_model_summary=True, enable_progress_bar=True, fast_dev_run=False, filter_channels=768, gpus=None, gradient_clip_algorithm=None, gradient_clip_val=None, hidden_channels=192, inter_channels=192, ipus=None, limit_predict_batches=None, limit_test_batches=None, limit_train_batches=None, limit_val_batches=None, log_every_n_steps=50, logger=True, max_epochs=10000, max_phoneme_ids=None, max_steps=-1, max_time=None, min_epochs=None, min_steps=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', n_heads=2, n_layers=6, num_nodes=1, num_processes=None, num_sanity_val_steps=2, num_test_examples=5, overfit_batches=0.0, plugins=None, precision=16, profiler=None, quality='medium', reload_dataloaders_every_n_epochs=0, replace_sampler_ddp=True, resume_from_checkpoint=None, seed=1234, strategy=None, sync_batchnorm=False, tpu_cores=None, track_grad_norm=-1, val_check_interval=None, validation_split=0.05, weights_save_path=None)
- Using 16bit native Automatic Mixed Precision (AMP)
- GPU available: True (cuda), used: True
- TPU available: False, using: 0 TPU cores
- IPU available: False, using: 0 IPUs
- HPU available: False, using: 0 HPUs
- INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /tmp/tmptm2ffvpg
- INFO:torch.distributed.nn.jit.instantiator:Writing /tmp/tmptm2ffvpg/_remote_module_non_sriptable.py
- DEBUG:vits.dataset:Loading dataset: /home/quang/Downloads/piper/output/dataset.jsonl
- LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
-   | Name    | Type                     | Params
- -----------------------------------------------------
- 0 | model_g | SynthesizerTrn           | 23.6 M
- 1 | model_d | MultiPeriodDiscriminator | 46.7 M
- -----------------------------------------------------
- 70.4 M    Trainable params
- 0         Non-trainable params
- 70.4 M    Total params
- 140.773   Total estimated model params size (MB)
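A quick arithmetic check on the model summary above. The small mismatches come from the summary rounding exact parameter counts to one decimal, and the size estimate assumes Lightning's rule of precision/8 bytes per parameter (2 bytes at `precision=16`):

```python
# Sanity-check Lightning's model-summary numbers.
gen_params = 23.6e6   # SynthesizerTrn (model_g)
disc_params = 46.7e6  # MultiPeriodDiscriminator (model_d)

total = gen_params + disc_params
print(total / 1e6)    # ~70.3 M, vs. the reported 70.4 M (rounding)

# At precision=16, Lightning estimates 16/8 = 2 bytes per parameter.
size_mb = total * 2 / 1e6
print(size_mb)        # ~140.6 MB, vs. the reported 140.773 MB
```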
- DEBUG:fsspec.local:open file: /home/quang/Downloads/piper/output/lightning_logs/version_8/hparams.yaml
- Sanity Checking: 0it [00:00, ?it/s]/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:236: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 4 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
- rank_zero_warn(
- Sanity Checking DataLoader 0: 0%| | 0/2 [00:00<?, ?it/s]/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py:98: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 2. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
- warning_cache.warn(
- warning: audio amplitude out of range, auto clipped.  (×5)
- Sanity Checking DataLoader 0:  50%|████████ | 1/2 [00:04<00:04, 4.41s/it]
- warning: audio amplitude out of range, auto clipped.
- warning: audio amplitude out of range, auto clipped.  (×4)
- /home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:236: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 4 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
- rank_zero_warn(
- Epoch 0:   0%| | 0/273 [00:00<?, ?it/s]
- Traceback (most recent call last):
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/runpy.py", line 194, in _run_module_as_main
- return _run_code(code, main_globals, None,
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/runpy.py", line 87, in _run_code
- exec(code, run_globals)
- File "/home/quang/Downloads/piper/src/python/piper_train/__main__.py", line 95, in <module>
- main()
- File "/home/quang/Downloads/piper/src/python/piper_train/__main__.py", line 88, in main
- trainer.fit(model)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
- self._call_and_handle_interrupt(
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _call_and_handle_interrupt
- return trainer_fn(*args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in _fit_impl
- results = self._run(model, ckpt_path=self.ckpt_path)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1166, in _run
- results = self._run_stage()
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
- return self._run_train()
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1283, in _run_train
- self.fit_loop.run()
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
- self.advance(*args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 271, in advance
- self._outputs = self.epoch_loop.run(self._data_fetcher)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
- self.advance(*args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 203, in advance
- batch_output = self.batch_loop.run(kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
- self.advance(*args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 87, in advance
- outputs = self.optimizer_loop.run(optimizers, kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
- self.advance(*args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 201, in advance
- result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 248, in _run_optimization
- self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 358, in _optimizer_step
- self.trainer._call_lightning_module_hook(
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1550, in _call_lightning_module_hook
- output = fn(*args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1705, in optimizer_step
- optimizer.step(closure=optimizer_closure)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
- step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 216, in optimizer_step
- return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 85, in optimizer_step
- closure_result = closure()
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 146, in __call__
- self._result = self.closure(*args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 141, in closure
- self._backward_fn(step_output.closure_loss)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 304, in backward_fn
- self.trainer._call_strategy_hook("backward", loss, optimizer, opt_idx)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1704, in _call_strategy_hook
- output = fn(*args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 191, in backward
- self.precision_plugin.backward(self.lightning_module, closure_loss, optimizer, optimizer_idx, *args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 80, in backward
- model.backward(closure_loss, optimizer, optimizer_idx, *args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1450, in backward
- loss.backward(*args, **kwargs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
- torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
- File "/home/quang/miniconda3/envs/piper/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
- Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
- RuntimeError: view_as_complex is only supported for float and double tensors, but got a tensor of scalar type: Half
- Epoch 0: 0%| | 0/273 [00:01<?, ?it/s]
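The run dies in autograd's backward pass with `RuntimeError: view_as_complex is only supported for float and double tensors, but got a tensor of scalar type: Half`. Under `precision=16` AMP, an intermediate tensor reaching `torch.view_as_complex` is float16 rather than float32. A minimal sketch of the failure and the usual workarounds follows; the exact call site inside piper/vits is an assumption here:

```python
import torch

# Reproduce the dtype mismatch: under 16-bit AMP, tensors in the
# spectrogram path can arrive as float16 (Half).
half = torch.randn(4, 2, dtype=torch.float16)
try:
    torch.view_as_complex(half)  # raises on the PyTorch in this log;
                                 # newer releases may return complex32
except RuntimeError as err:
    print(err)

# Workaround: cast to float32 before the complex view. In a Lightning
# module, the same effect comes from wrapping the offending code in
# torch.cuda.amp.autocast(enabled=False), or from rerunning train.sh
# with --precision 32 instead of 16.
spec = torch.view_as_complex(half.float())
print(spec.dtype)   # torch.complex64
```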