[jalal@goku official_tut]$ python exp_loocv.py
dataset size is: {'train': 10}
Using sample 0 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0537 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0377 Acc: 0.1000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.1477 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.1295 Acc: 0.0000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0534 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0660 Acc: 0.1000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0765 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0560 Acc: 0.1000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0840 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0843 Acc: 0.0000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0555 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0551 Acc: 0.1000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0888 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0875 Acc: 0.0000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0572 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0572 Acc: 0.1000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0795 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0793 Acc: 0.0000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 1 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0550 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0551 Acc: 0.1000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0587 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0584 Acc: 0.1000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0788 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0788 Acc: 0.0000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0809 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0809 Acc: 0.0000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0822 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0822 Acc: 0.0000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0813 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0813 Acc: 0.0000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0832 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0832 Acc: 0.0000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0577 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0577 Acc: 0.1000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0594 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0594 Acc: 0.1000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 2 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0572 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0572 Acc: 0.1000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0808 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0808 Acc: 0.0000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0799 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0799 Acc: 0.0000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0579 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0579 Acc: 0.1000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0844 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0844 Acc: 0.0000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0586 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0586 Acc: 0.1000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0818 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0818 Acc: 0.0000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0833 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0833 Acc: 0.0000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 3 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0581 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0581 Acc: 0.1000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0843 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0843 Acc: 0.0000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0809 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0809 Acc: 0.0000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0777 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0777 Acc: 0.0000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0811 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0811 Acc: 0.0000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0569 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0569 Acc: 0.1000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0570 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0570 Acc: 0.1000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0803 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0803 Acc: 0.0000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 4 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0588 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0588 Acc: 0.1000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0833 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0833 Acc: 0.0000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0609 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0609 Acc: 0.1000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0815 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0815 Acc: 0.0000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0812 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0812 Acc: 0.0000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0826 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0826 Acc: 0.0000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0558 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0558 Acc: 0.1000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0843 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0843 Acc: 0.0000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0563 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0563 Acc: 0.1000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 5 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0822 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0822 Acc: 0.0000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0805 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0805 Acc: 0.0000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0825 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0825 Acc: 0.0000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0587 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0587 Acc: 0.1000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0819 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0819 Acc: 0.0000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0591 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0591 Acc: 0.1000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0578 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0578 Acc: 0.1000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0579 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0579 Acc: 0.1000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 6 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0820 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0820 Acc: 0.0000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0581 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0581 Acc: 0.1000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0836 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0836 Acc: 0.0000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0586 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0586 Acc: 0.1000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0579 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0579 Acc: 0.1000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0801 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0801 Acc: 0.0000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0572 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0572 Acc: 0.1000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0821 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0821 Acc: 0.0000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0565 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0565 Acc: 0.1000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 7 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0801 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0801 Acc: 0.0000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0594 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0594 Acc: 0.1000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0563 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0563 Acc: 0.1000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0587 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0587 Acc: 0.1000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0801 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0801 Acc: 0.0000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0822 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0822 Acc: 0.0000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0588 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0588 Acc: 0.1000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0806 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0806 Acc: 0.0000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 8 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0555 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0555 Acc: 0.1000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0555 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0555 Acc: 0.1000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0819 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0819 Acc: 0.0000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0809 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0809 Acc: 0.0000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0813 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0813 Acc: 0.0000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0591 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0591 Acc: 0.1000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0591 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0591 Acc: 0.1000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0832 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0832 Acc: 0.0000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 9 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0827 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0827 Acc: 0.0000
Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0583 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0583 Acc: 0.1000
Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0571 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0571 Acc: 0.1000
Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0589 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0589 Acc: 0.1000
Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0799 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0799 Acc: 0.0000
Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0817 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0817 Acc: 0.0000
Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0822 Acc: 0.0000
Epoch 1/1
----------
train Loss: 0.0822 Acc: 0.0000
Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0581 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0581 Acc: 0.1000
Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0575 Acc: 0.1000
Epoch 1/1
----------
train Loss: 0.0575 Acc: 0.1000
Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
[jalal@goku official_tut]$
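
The log above is consistent with a leave-one-out cross-validation (LOOCV) driver: each of the 10 samples takes a turn as the held-out test item ("Using sample N as test data"), the remaining 9 samples are visited one at a time ("Batch 0" through "Batch 8"), each trained for two epochs ("Epoch 0/1", "Epoch 1/1"), and the held-out image is then loaded as a `[1, 3, 224, 224]` tensor. A minimal sketch of that split logic (exp_loocv.py itself is not shown, so the function name `loocv_splits` and the loop body are assumptions):

```python
def loocv_splits(n):
    """Yield (train_indices, test_index) pairs: each sample is held out once."""
    for test_idx in range(n):
        yield [i for i in range(n) if i != test_idx], test_idx

# Mirrors the structure of the log: 10 folds, 9 training samples per fold.
for train_indices, test_idx in loocv_splits(10):
    print(f"Using sample {test_idx} as test data")
    for batch, sample_idx in enumerate(train_indices):
        # In the actual script, sample_idx would presumably be loaded as a
        # [1, 3, 224, 224] tensor (a batch of one RGB 224x224 image) and the
        # model trained on it for 2 epochs, producing the per-batch log lines.
        pass
```

Note that with this scheme every fold retrains from scratch on 9 samples, so with `dataset size {'train': 10}` each correct prediction contributes 1/10 = 0.1000 to the logged accuracy, which matches the Acc values in the log alternating between 0.0000 and 0.1000.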