lamiastella

loocv transfer learning

Nov 21st, 2018
[jalal@goku official_tut]$ python exp_loocv.py
dataset size is: {'train': 10}
Using sample 0 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0537 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0377 Acc: 0.1000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.1477 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.1295 Acc: 0.0000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0534 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0660 Acc: 0.1000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0765 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0560 Acc: 0.1000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0840 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0843 Acc: 0.0000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0555 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0551 Acc: 0.1000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0888 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0875 Acc: 0.0000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0572 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0572 Acc: 0.1000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0795 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0793 Acc: 0.0000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 1 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0550 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0551 Acc: 0.1000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0587 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0584 Acc: 0.1000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0788 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0788 Acc: 0.0000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0809 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0809 Acc: 0.0000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0822 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0822 Acc: 0.0000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0813 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0813 Acc: 0.0000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0832 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0832 Acc: 0.0000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0577 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0577 Acc: 0.1000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0594 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0594 Acc: 0.1000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 2 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0572 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0572 Acc: 0.1000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0808 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0808 Acc: 0.0000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0799 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0799 Acc: 0.0000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0579 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0579 Acc: 0.1000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0844 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0844 Acc: 0.0000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0586 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0586 Acc: 0.1000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0818 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0818 Acc: 0.0000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0833 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0833 Acc: 0.0000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 3 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0581 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0581 Acc: 0.1000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0843 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0843 Acc: 0.0000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0809 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0809 Acc: 0.0000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0777 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0777 Acc: 0.0000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0811 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0811 Acc: 0.0000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0569 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0569 Acc: 0.1000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0570 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0570 Acc: 0.1000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0803 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0803 Acc: 0.0000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 4 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0588 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0588 Acc: 0.1000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0833 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0833 Acc: 0.0000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0609 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0609 Acc: 0.1000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0815 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0815 Acc: 0.0000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0812 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0812 Acc: 0.0000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0826 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0826 Acc: 0.0000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0558 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0558 Acc: 0.1000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0843 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0843 Acc: 0.0000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0563 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0563 Acc: 0.1000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 5 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0822 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0822 Acc: 0.0000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0805 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0805 Acc: 0.0000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0825 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0825 Acc: 0.0000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0587 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0587 Acc: 0.1000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0819 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0819 Acc: 0.0000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0591 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0591 Acc: 0.1000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0578 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0578 Acc: 0.1000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0579 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0579 Acc: 0.1000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 6 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0820 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0820 Acc: 0.0000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0581 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0581 Acc: 0.1000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0836 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0836 Acc: 0.0000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0586 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0586 Acc: 0.1000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0579 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0579 Acc: 0.1000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0801 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0801 Acc: 0.0000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0572 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0572 Acc: 0.1000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0821 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0821 Acc: 0.0000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0565 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0565 Acc: 0.1000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 7 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0801 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0801 Acc: 0.0000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0594 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0594 Acc: 0.1000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0563 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0563 Acc: 0.1000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0587 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0587 Acc: 0.1000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0801 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0801 Acc: 0.0000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0822 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0822 Acc: 0.0000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0588 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0588 Acc: 0.1000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0806 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0806 Acc: 0.0000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 8 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0555 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0555 Acc: 0.1000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0555 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0555 Acc: 0.1000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0819 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0819 Acc: 0.0000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0590 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0590 Acc: 0.1000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0809 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0809 Acc: 0.0000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0813 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0813 Acc: 0.0000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0591 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0591 Acc: 0.1000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0591 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0591 Acc: 0.1000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0832 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0832 Acc: 0.0000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
Using sample 9 as test data
Batch 0
Epoch 0/1
----------
train Loss: 0.0827 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0827 Acc: 0.0000

Training complete in 0m 0s
Batch 1
Epoch 0/1
----------
train Loss: 0.0583 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0583 Acc: 0.1000

Training complete in 0m 0s
Batch 2
Epoch 0/1
----------
train Loss: 0.0571 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0571 Acc: 0.1000

Training complete in 0m 0s
Batch 3
Epoch 0/1
----------
train Loss: 0.0589 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0589 Acc: 0.1000

Training complete in 0m 0s
Batch 4
Epoch 0/1
----------
train Loss: 0.0799 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0799 Acc: 0.0000

Training complete in 0m 0s
Batch 5
Epoch 0/1
----------
train Loss: 0.0817 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0817 Acc: 0.0000

Training complete in 0m 0s
Batch 6
Epoch 0/1
----------
train Loss: 0.0822 Acc: 0.0000

Epoch 1/1
----------
train Loss: 0.0822 Acc: 0.0000

Training complete in 0m 0s
Batch 7
Epoch 0/1
----------
train Loss: 0.0581 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0581 Acc: 0.1000

Training complete in 0m 0s
Batch 8
Epoch 0/1
----------
train Loss: 0.0575 Acc: 0.1000

Epoch 1/1
----------
train Loss: 0.0575 Acc: 0.1000

Training complete in 0m 0s
torch.Size([1, 3, 224, 224])
[jalal@goku official_tut]$
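For reference, the loop structure visible in the log (10 samples, each held out in turn as test data while the remaining 9 are iterated as single-sample batches) can be sketched as below. This is a minimal illustration of the leave-one-out splitting only, not the original exp_loocv.py; the names loocv_splits and N_SAMPLES are made up for the sketch, and the actual fine-tuning of the pretrained model is left as a placeholder comment.

```python
def loocv_splits(n):
    """Yield (test_index, train_indices) pairs for leave-one-out CV."""
    for test_idx in range(n):
        # Every sample except the held-out one goes into the training set.
        train_idx = [i for i in range(n) if i != test_idx]
        yield test_idx, train_idx

N_SAMPLES = 10  # the log reports: dataset size is {'train': 10}

for test_idx, train_idx in loocv_splits(N_SAMPLES):
    print(f"Using sample {test_idx} as test data")
    # The 9 remaining samples appear in the log as Batch 0..8, each
    # fine-tuned for 2 epochs (Epoch 0/1 and Epoch 1/1).
    for batch_no, sample_idx in enumerate(train_idx):
        print(f"Batch {batch_no}")
        # fine-tune the pretrained model on sample `sample_idx` here
```

With 10 samples this yields 10 folds of 9 training indices each, matching the "Using sample 0..9 as test data" sections above.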