a guest
Feb 22nd, 2019
/home/choychri/nfs1/projects/MinkowskiNavigation
Version: 3a7c283837de96d4a352c2dde692896a8e5aafba
Git diff

diff --git a/main.py b/main.py
index bdd566d..d2ad024 100644
--- a/main.py
+++ b/main.py
@@ -18,7 +18,6 @@ if __name__ == '__main__':
     device = torch.device('cuda' if config.use_gpu else 'cpu')
     # actions = [list(a) for a in it.product([0, 1], repeat=n)]
     config.device = device
-    config.log_dir += '/' + time.strftime('%Y-%m-%d %H:%M:%S')
 
     logging.info('===> Configurations')
     dconfig = vars(config)
Fri Feb 22 00:31:25 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...  Off  | 00000000:04:00.0 Off |                  N/A |
| 23%   63C    P2    98W / 250W |  11336MiB / 12212MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX TIT...  Off  | 00000000:05:00.0 Off |                  N/A |
| 22%   32C    P8    15W / 250W |     11MiB / 12212MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX TIT...  Off  | 00000000:08:00.0 Off |                  N/A |
| 22%   29C    P8    14W / 250W |     11MiB / 12212MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX TIT...  Off  | 00000000:09:00.0 Off |                  N/A |
| 22%   30C    P8    15W / 250W |     11MiB / 12212MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   4  GeForce GTX TIT...  Off  | 00000000:85:00.0 Off |                  N/A |
| 22%   34C    P8    15W / 250W |     11MiB / 12212MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   5  TITAN X (Pascal)    Off  | 00000000:86:00.0 Off |                  N/A |
| 37%   64C    P2    94W / 250W |  12189MiB / 12196MiB |     23%      Default |
+-------------------------------+----------------------+----------------------+
|   6  GeForce GTX TIT...  Off  | 00000000:89:00.0 Off |                  N/A |
| 22%   30C    P8    15W / 250W |     11MiB / 12212MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   7  GeForce GTX TIT...  Off  | 00000000:8A:00.0 Off |                  N/A |
| 22%   33C    P8    15W / 250W |     11MiB / 12212MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     30469      C   python                                     11325MiB |
|    5     30350      C   python                                     10621MiB |
|    5     34309      C   python                                       793MiB |
|    5     34389      C   python                                       765MiB |
+-----------------------------------------------------------------------------+
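In this snapshot GPUs 0 and 5 are nearly full (11336/12212 MiB and 12189/12196 MiB) while GPUs 1-4, 6, and 7 sit at 11 MiB; the failure further down comes from landing on a busy card. One way to pick the least-loaded GPU is to parse `nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits` (the helper names below are hypothetical, not part of this repo) — a sketch:

```python
import subprocess

def freest_gpu(csv_text):
    """Return the index of the GPU with the most free memory.

    csv_text: output of
      nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits
    i.e. one integer MiB value per line, one line per GPU.
    """
    free = [int(line) for line in csv_text.strip().splitlines()]
    # max() returns the first index on ties, so idle GPUs win deterministically
    return max(range(len(free)), key=free.__getitem__)

def pick_gpu():
    """Query nvidia-smi and return the index of the freest GPU."""
    out = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=memory.free',
         '--format=csv,noheader,nounits'], text=True)
    return freest_gpu(out)
```

The chosen index could then feed device selection, e.g. `torch.device(f'cuda:{pick_gpu()}')`, instead of the bare `'cuda'` used in main.py.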
vcl-gpu2
/home/choychri/nfs1/anaconda3/bin/conda
02/22 00:31:26 ===> Configurations
02/22 00:31:26 frame_stack_size: 4
02/22 00:31:26 D: 2
02/22 00:31:26 use_extra_state: False
02/22 00:31:26 in_nchannel: 3
02/22 00:31:26 model: ACExampleNet2D90x120
02/22 00:31:26 checkpoint: checkpoint.pth
02/22 00:31:26 optimizer: SGD
02/22 00:31:26 learning_rate: 0.001
02/22 00:31:26 batch_size: 64
02/22 00:31:26 max_epochs: 100
02/22 00:31:26 steps_per_epoch: 2000
02/22 00:31:26 step_size: 20000.0
02/22 00:31:26 discount_factor: 0.99
02/22 00:31:26 weight_decay: 0.0001
02/22 00:31:26 bn_momentum: 0.05
02/22 00:31:26 log_freq: 20
02/22 00:31:26 iter_size: 1
02/22 00:31:26 scheduler: StepLR
02/22 00:31:26 step_gamma: 0.1
02/22 00:31:26 poly_power: 0.9
02/22 00:31:26 exp_gamma: 0.99
02/22 00:31:26 exp_step_size: 445
02/22 00:31:26 sgd_momentum: 0.9
02/22 00:31:26 sgd_dampening: 0.1
02/22 00:31:26 adam_beta1: 0.9
02/22 00:31:26 adam_beta2: 0.999
02/22 00:31:26 log_dir: outputs/FixedVizDoomEnv/D2/A2C/1e-3-nenv8-ACExampleNet2D90x120/2019-02-22_00-31-25
02/22 00:31:26 data_dir: data
02/22 00:31:26 point_lim: -1
02/22 00:31:26 use_minos: False
02/22 00:31:26 env_args: None
02/22 00:31:26 threads: 1
02/22 00:31:26 val_threads: 1
02/22 00:31:26 replay_memory_size: 10000
02/22 00:31:26 vizdoom_scenario: health_gathering_supreme
02/22 00:31:26 vizdoom_use_depth: True
02/22 00:31:26 vizdoom_scale_reward: True
02/22 00:31:26 vizdoom_frame_repeat: 6
02/22 00:31:26 trainer: A2C
02/22 00:31:26 num_rollout_steps: 10
02/22 00:31:26 entropy_coef: 0
02/22 00:31:26 value_coef: 1
02/22 00:31:26 max_trajectory_len: 96
02/22 00:31:26 pg_normalize_rewards: True
02/22 00:31:26 ppo_clip_param: 0.2
02/22 00:31:26 is_training: True
02/22 00:31:26 criterion: MSE
02/22 00:31:26 stat_freq: 100
02/22 00:31:26 save_freq: 1000
02/22 00:31:26 val_freq: 1000
02/22 00:31:26 val_episodes: 100
02/22 00:31:26 empty_cache_freq: 10
02/22 00:31:26 overwrite_weights: True
02/22 00:31:26 resume:
02/22 00:31:26 resume_optimizer: True
02/22 00:31:26 env: VizDoomEnv2D90x120
02/22 00:31:26 num_envs: 8
02/22 00:31:26 end_eps: 0.1
02/22 00:31:26 use_feat_aug: True
02/22 00:31:26 data_aug_color_trans_ratio: 0.15
02/22 00:31:26 data_aug_color_jitter_std: 0.01
02/22 00:31:26 test_phase: test
02/22 00:31:26 use_gpu: True
02/22 00:31:26 log_step: 50
02/22 00:31:26 log_level: INFO
02/22 00:31:26 seed: 123
02/22 00:31:26 device: cuda
02/22 00:31:29 Initializing the network
Traceback (most recent call last):
  File "main.py", line 28, in <module>
    train(config)
  File "/export/vcl-nfs1-data1/shared/chrischoy/projects/MinkowskiNavigation/lib/train.py", line 44, in train
    model = model.to(config.device)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 381, in to
    return self._apply(convert)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 193, in _apply
    param.data = fn(param.data)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 379, in convert
    return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA error: out of memory
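The OOM is raised by `model.to(config.device)` because the bare `'cuda'` device maps to GPU 0, which the nvidia-smi snapshot shows already holding 11325 MiB (PID 30469). A common fix is to restrict the process to an idle card via `CUDA_VISIBLE_DEVICES`, which must be set before CUDA is first initialized; a sketch (the helper name is hypothetical):

```python
import os

def select_gpu(index):
    """Expose only one physical GPU to this process.

    Must run before torch (or any CUDA library) touches the driver;
    afterwards 'cuda' / 'cuda:0' refers to the chosen physical card.
    """
    os.environ['CUDA_VISIBLE_DEVICES'] = str(index)
    return os.environ['CUDA_VISIBLE_DEVICES']

# GPU 1 is idle in the snapshot above (11MiB / 12212MiB used)
select_gpu(1)
```

Equivalently, without touching the code: `CUDA_VISIBLE_DEVICES=1 python main.py`.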
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env