- emotionnet-worker-audio | [2022-10-09 22:57:36,118: WARNING/ForkPoolWorker-2] X_test:(15, 128, 517), Y_test:(15,)
- emotionnet-worker-audio | [2022-10-09 22:57:36,178: WARNING/ForkPoolWorker-2]
- emotionnet-worker-audio | Selected device is cpu
- emotionnet-worker-audio | [2022-10-09 22:57:36,189: WARNING/ForkPoolWorker-2] Number of trainable params:
- emotionnet-worker-audio | [2022-10-09 22:57:36,189: WARNING/ForkPoolWorker-2]
- emotionnet-worker-audio | [2022-10-09 22:57:36,190: WARNING/ForkPoolWorker-2] 394855
- emotionnet-worker-audio | 0%|          | 0/10 [00:00<?, ?it/s]
- emotionnet-worker-audio | [2022-10-09 22:57:36,190: WARNING/ForkPoolWorker-2]
- emotionnet-worker-audio | Start training 10 epoches...
- emotionnet-worker-audio | 0%|          | 0/10 [00:00<?, ?it/s]
- emotionnet-worker-audio | [2022-10-09 22:57:36,370: WARNING/ForkPoolWorker-2]
- emotionnet-worker-audio | [2022-10-09 22:57:36,409: ERROR/ForkPoolWorker-2] Task train_audio[f9d7121c-10db-45db-a953-61026dd193fe] raised unexpected: RuntimeError('The size of tensor a (7) must match the size of tensor b (8) at non-singleton dimension 1')
- emotionnet-worker-audio | Traceback (most recent call last):
- emotionnet-worker-audio | File "/usr/local/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
- emotionnet-worker-audio | R = retval = fun(*args, **kwargs)
- emotionnet-worker-audio | File "/usr/local/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
- emotionnet-worker-audio | return self.run(*args, **kwargs)
- emotionnet-worker-audio | File "/code/app/worker.py", line 208, in create_task_train_audio
- emotionnet-worker-audio | result = train(
- emotionnet-worker-audio | File "/code/ml/train_audio.py", line 565, in train
- emotionnet-worker-audio | loss, acc = train_step(X_tensor, Y_tensor)
- emotionnet-worker-audio | File "/code/ml/train_audio.py", line 134, in train_step
- emotionnet-worker-audio | loss = loss_fnc(output_logits, Y)
- emotionnet-worker-audio | File "/code/ml/train_audio.py", line 528, in loss_fnc
- emotionnet-worker-audio | return nn.L1Loss()(input=predictions, target=targets)
- emotionnet-worker-audio | File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
- emotionnet-worker-audio | return forward_call(*input, **kwargs)
- emotionnet-worker-audio | File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 96, in forward
- emotionnet-worker-audio | return F.l1_loss(input, target, reduction=self.reduction)
- emotionnet-worker-audio | File "/usr/local/lib/python3.8/site-packages/torch/nn/functional.py", line 3248, in l1_loss
- emotionnet-worker-audio | expanded_input, expanded_target = torch.broadcast_tensors(input, target)
- emotionnet-worker-audio | File "/usr/local/lib/python3.8/site-packages/torch/functional.py", line 73, in broadcast_tensors
- emotionnet-worker-audio | return _VF.broadcast_tensors(tensors) # type: ignore[attr-defined]
- emotionnet-worker-audio | RuntimeError: The size of tensor a (7) must match the size of tensor b (8) at non-singleton dimension 1
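The traceback points at the loss computation: nn.L1Loss broadcasts its two arguments, and here predictions and targets disagree at dimension 1 (7 vs 8). The code in /code/ml/train_audio.py is not shown beyond the stack trace, so the following is a minimal sketch that reproduces the failure under one plausible assumption: the model head emits 7 logits per sample while the targets were one-hot encoded with 8 classes. The class counts and shapes are hypothetical, inferred only from the batch size 15 in the log and the 7-vs-8 sizes in the error message.

import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size = 15          # matches the X_test batch in the log
num_model_classes = 7    # assumed width of the model's output head
num_target_classes = 8   # assumed width of the one-hot targets

predictions = torch.randn(batch_size, num_model_classes)             # (15, 7)
labels = torch.randint(0, num_target_classes, (batch_size,))         # (15,), like Y_test
targets = F.one_hot(labels, num_classes=num_target_classes).float()  # (15, 8)

# Reproduces the error: (15, 7) and (15, 8) cannot be broadcast
# together at non-singleton dimension 1.
try:
    nn.L1Loss()(input=predictions, target=targets)
except RuntimeError as e:
    print(e)

# One way out: make the output head and the target encoding agree on
# the class count (here, widening the head to 8 classes for the demo).
predictions = torch.randn(batch_size, num_target_classes)            # (15, 8)
loss = nn.L1Loss()(input=predictions, target=targets)
print(loss.item())

Since Y_test has shape (15,), i.e. integer class labels, another common route for classification is nn.CrossEntropyLoss applied to the raw logits and the integer labels directly, which sidesteps the one-hot encoding and this broadcast mismatch entirely.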