/home/choychri/nfs1/projects/MinkowskiNavigation
Version: 3a7c283837de96d4a352c2dde692896a8e5aafba
Git diff
diff --git a/main.py b/main.py
index bdd566d..d2ad024 100644
--- a/main.py
+++ b/main.py
@@ -18,7 +18,6 @@ if __name__ == '__main__':
     device = torch.device('cuda' if config.use_gpu else 'cpu')
     # actions = [list(a) for a in it.product([0, 1], repeat=n)]
     config.device = device
-    config.log_dir += '/' + time.strftime('%Y-%m-%d %H:%M:%S')
     logging.info('===> Configurations')
     dconfig = vars(config)
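The diff above removes the line that appended a `time.strftime('%Y-%m-%d %H:%M:%S')` suffix to `config.log_dir` at startup; that format embeds spaces and colons, which make awkward directory names (the `log_dir` printed later in this log uses the safer `2019-02-22_00-31-25` form instead). A minimal sketch of that safer variant, with a hypothetical helper name not taken from the repository:

```python
import time

# Hypothetical helper (not from the repository): build a per-run log
# directory name. Unlike '%Y-%m-%d %H:%M:%S', this format contains no
# spaces or colons, so the path needs no quoting in the shell.
def timestamped_log_dir(base):
    return base + '/' + time.strftime('%Y-%m-%d_%H-%M-%S')
```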
Fri Feb 22 00:31:25 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79 Driver Version: 410.79 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX TIT... Off | 00000000:04:00.0 Off | N/A |
| 23% 63C P2 98W / 250W | 11336MiB / 12212MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX TIT... Off | 00000000:05:00.0 Off | N/A |
| 22% 32C P8 15W / 250W | 11MiB / 12212MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX TIT... Off | 00000000:08:00.0 Off | N/A |
| 22% 29C P8 14W / 250W | 11MiB / 12212MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce GTX TIT... Off | 00000000:09:00.0 Off | N/A |
| 22% 30C P8 15W / 250W | 11MiB / 12212MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 4 GeForce GTX TIT... Off | 00000000:85:00.0 Off | N/A |
| 22% 34C P8 15W / 250W | 11MiB / 12212MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 5 TITAN X (Pascal) Off | 00000000:86:00.0 Off | N/A |
| 37% 64C P2 94W / 250W | 12189MiB / 12196MiB | 23% Default |
+-------------------------------+----------------------+----------------------+
| 6 GeForce GTX TIT... Off | 00000000:89:00.0 Off | N/A |
| 22% 30C P8 15W / 250W | 11MiB / 12212MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 7 GeForce GTX TIT... Off | 00000000:8A:00.0 Off | N/A |
| 22% 33C P8 15W / 250W | 11MiB / 12212MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 30469 C python 11325MiB |
| 5 30350 C python 10621MiB |
| 5 34309 C python 793MiB |
| 5 34389 C python 765MiB |
+-----------------------------------------------------------------------------+
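The table above shows GPUs 0 and 5 nearly full (11336/12212 MiB and 12189/12196 MiB) while GPUs 1-4, 6, and 7 sit at 11 MiB. A sketch, not part of the original run, of picking the least-loaded device by parsing `nvidia-smi --query-gpu=index,memory.used --format=csv,noheader` output:

```python
# Pick the GPU index with the least used memory from nvidia-smi CSV output
# of the form "index, used MiB" per line.
def least_used_gpu(csv_text):
    best = None  # (index, used_mib) of the least-loaded GPU seen so far
    for line in csv_text.strip().splitlines():
        index, used = line.split(',')
        used_mib = int(used.strip().split()[0])  # "11336 MiB" -> 11336
        if best is None or used_mib < best[1]:
            best = (int(index), used_mib)
    return best[0]

# Sample rows taken from the table above:
sample = "0, 11336 MiB\n1, 11 MiB\n5, 12189 MiB"
print(least_used_gpu(sample))  # -> 1
```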
vcl-gpu2
/home/choychri/nfs1/anaconda3/bin/conda
02/22 00:31:26 ===> Configurations
02/22 00:31:26 frame_stack_size: 4
02/22 00:31:26 D: 2
02/22 00:31:26 use_extra_state: False
02/22 00:31:26 in_nchannel: 3
02/22 00:31:26 model: ACExampleNet2D90x120
02/22 00:31:26 checkpoint: checkpoint.pth
02/22 00:31:26 optimizer: SGD
02/22 00:31:26 learning_rate: 0.001
02/22 00:31:26 batch_size: 64
02/22 00:31:26 max_epochs: 100
02/22 00:31:26 steps_per_epoch: 2000
02/22 00:31:26 step_size: 20000.0
02/22 00:31:26 discount_factor: 0.99
02/22 00:31:26 weight_decay: 0.0001
02/22 00:31:26 bn_momentum: 0.05
02/22 00:31:26 log_freq: 20
02/22 00:31:26 iter_size: 1
02/22 00:31:26 scheduler: StepLR
02/22 00:31:26 step_gamma: 0.1
02/22 00:31:26 poly_power: 0.9
02/22 00:31:26 exp_gamma: 0.99
02/22 00:31:26 exp_step_size: 445
02/22 00:31:26 sgd_momentum: 0.9
02/22 00:31:26 sgd_dampening: 0.1
02/22 00:31:26 adam_beta1: 0.9
02/22 00:31:26 adam_beta2: 0.999
02/22 00:31:26 log_dir: outputs/FixedVizDoomEnv/D2/A2C/1e-3-nenv8-ACExampleNet2D90x120/2019-02-22_00-31-25
02/22 00:31:26 data_dir: data
02/22 00:31:26 point_lim: -1
02/22 00:31:26 use_minos: False
02/22 00:31:26 env_args: None
02/22 00:31:26 threads: 1
02/22 00:31:26 val_threads: 1
02/22 00:31:26 replay_memory_size: 10000
02/22 00:31:26 vizdoom_scenario: health_gathering_supreme
02/22 00:31:26 vizdoom_use_depth: True
02/22 00:31:26 vizdoom_scale_reward: True
02/22 00:31:26 vizdoom_frame_repeat: 6
02/22 00:31:26 trainer: A2C
02/22 00:31:26 num_rollout_steps: 10
02/22 00:31:26 entropy_coef: 0
02/22 00:31:26 value_coef: 1
02/22 00:31:26 max_trajectory_len: 96
02/22 00:31:26 pg_normalize_rewards: True
02/22 00:31:26 ppo_clip_param: 0.2
02/22 00:31:26 is_training: True
02/22 00:31:26 criterion: MSE
02/22 00:31:26 stat_freq: 100
02/22 00:31:26 save_freq: 1000
02/22 00:31:26 val_freq: 1000
02/22 00:31:26 val_episodes: 100
02/22 00:31:26 empty_cache_freq: 10
02/22 00:31:26 overwrite_weights: True
02/22 00:31:26 resume:
02/22 00:31:26 resume_optimizer: True
02/22 00:31:26 env: VizDoomEnv2D90x120
02/22 00:31:26 num_envs: 8
02/22 00:31:26 end_eps: 0.1
02/22 00:31:26 use_feat_aug: True
02/22 00:31:26 data_aug_color_trans_ratio: 0.15
02/22 00:31:26 data_aug_color_jitter_std: 0.01
02/22 00:31:26 test_phase: test
02/22 00:31:26 use_gpu: True
02/22 00:31:26 log_step: 50
02/22 00:31:26 log_level: INFO
02/22 00:31:26 seed: 123
02/22 00:31:26 device: cuda
02/22 00:31:29 Initializing the network
Traceback (most recent call last):
  File "main.py", line 28, in <module>
    train(config)
  File "/export/vcl-nfs1-data1/shared/chrischoy/projects/MinkowskiNavigation/lib/train.py", line 44, in train
    model = model.to(config.device)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 381, in to
    return self._apply(convert)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 193, in _apply
    param.data = fn(param.data)
  File "/home/choychri/nfs1/anaconda3/envs/py3-navigation/lib/python3.7/site-packages/torch/nn/modules/module.py", line 379, in convert
    return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA error: out of memory
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
02/22 00:31:29 Closing a vizdoom env
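The run died in `model.to(config.device)` with "CUDA error: out of memory": the bare `device = torch.device('cuda')` in main.py lands on the default GPU, and the nvidia-smi dump above shows GPUs 0 and 5 already near capacity. A sketch, not part of the original code, of steering the process onto an idle card via `CUDA_VISIBLE_DEVICES`, which must be set before the first CUDA call:

```python
import os

# Restrict this process to an idle device (GPU 1 in the nvidia-smi dump
# above) before any CUDA initialization; PyTorch will then see that
# physical card as cuda:0.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

# import torch                    # import only after the variable is set
# device = torch.device('cuda')   # now maps to physical GPU 1
```

Equivalently, `CUDA_VISIBLE_DEVICES=1 python main.py` on the shell command line avoids touching the code at all.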