- Microsoft Windows [Version 10.0.17134.407]
- (c) 2018 Microsoft Corporation. All rights reserved.
- C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master>python train.py
- Initializing TensorFlow...
- Running train.train_progressive_gan()...
- Streaming data using dataset.TFRecordDataset...
- self.tfrecord_dir: C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\dataset
- Dataset shape = [1, 1024, 1024]
- Dynamic range = [0, 255]
- Label size = 0
- Constructing networks...
- G Params OutputShape WeightShape
- --- --- --- ---
- latents_in - (?, 512) -
- labels_in - (?, 0) -
- lod - () -
- 4x4/PixelNorm - (?, 512) -
- 4x4/Dense 4194816 (?, 512, 4, 4) (512, 8192)
- 4x4/Conv 2359808 (?, 512, 4, 4) (3, 3, 512, 512)
- ToRGB_lod8 513 (?, 1, 4, 4) (1, 1, 512, 1)
- 8x8/Conv0_up 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
- 8x8/Conv1 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
- ToRGB_lod7 513 (?, 1, 8, 8) (1, 1, 512, 1)
- Upscale2D - (?, 1, 8, 8) -
- Grow_lod7 - (?, 1, 8, 8) -
- 16x16/Conv0_up 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
- 16x16/Conv1 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
- ToRGB_lod6 513 (?, 1, 16, 16) (1, 1, 512, 1)
- Upscale2D_1 - (?, 1, 16, 16) -
- Grow_lod6 - (?, 1, 16, 16) -
- 32x32/Conv0_up 2359808 (?, 512, 32, 32) (3, 3, 512, 512)
- 32x32/Conv1 2359808 (?, 512, 32, 32) (3, 3, 512, 512)
- ToRGB_lod5 513 (?, 1, 32, 32) (1, 1, 512, 1)
- Upscale2D_2 - (?, 1, 32, 32) -
- Grow_lod5 - (?, 1, 32, 32) -
- 64x64/Conv0_up 1179904 (?, 256, 64, 64) (3, 3, 256, 512)
- 64x64/Conv1 590080 (?, 256, 64, 64) (3, 3, 256, 256)
- ToRGB_lod4 257 (?, 1, 64, 64) (1, 1, 256, 1)
- Upscale2D_3 - (?, 1, 64, 64) -
- Grow_lod4 - (?, 1, 64, 64) -
- 128x128/Conv0_up 295040 (?, 128, 128, 128) (3, 3, 128, 256)
- 128x128/Conv1 147584 (?, 128, 128, 128) (3, 3, 128, 128)
- ToRGB_lod3 129 (?, 1, 128, 128) (1, 1, 128, 1)
- Upscale2D_4 - (?, 1, 128, 128) -
- Grow_lod3 - (?, 1, 128, 128) -
- 256x256/Conv0_up 73792 (?, 64, 256, 256) (3, 3, 64, 128)
- 256x256/Conv1 36928 (?, 64, 256, 256) (3, 3, 64, 64)
- ToRGB_lod2 65 (?, 1, 256, 256) (1, 1, 64, 1)
- Upscale2D_5 - (?, 1, 256, 256) -
- Grow_lod2 - (?, 1, 256, 256) -
- 512x512/Conv0_up 18464 (?, 32, 512, 512) (3, 3, 32, 64)
- 512x512/Conv1 9248 (?, 32, 512, 512) (3, 3, 32, 32)
- ToRGB_lod1 33 (?, 1, 512, 512) (1, 1, 32, 1)
- Upscale2D_6 - (?, 1, 512, 512) -
- Grow_lod1 - (?, 1, 512, 512) -
- 1024x1024/Conv0_up 4624 (?, 16, 1024, 1024) (3, 3, 16, 32)
- 1024x1024/Conv1 2320 (?, 16, 1024, 1024) (3, 3, 16, 16)
- ToRGB_lod0 17 (?, 1, 1024, 1024) (1, 1, 16, 1)
- Upscale2D_7 - (?, 1, 1024, 1024) -
- Grow_lod0 - (?, 1, 1024, 1024) -
- images_out - (?, 1, 1024, 1024) -
- --- --- --- ---
- Total 23074009
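The generator's per-layer parameter counts above can be verified by hand: each entry is weights plus one bias per output channel. A minimal sanity-check sketch (the layer names in the comments refer to rows of the table above):

```python
# Verify a few rows of the G summary table: params = k*k*cin*cout weights + cout biases.
def conv_params(k, cin, cout):
    """Parameter count of a k x k convolution with cout per-channel biases."""
    return k * k * cin * cout + cout

assert conv_params(3, 512, 512) == 2359808   # 4x4/Conv, 8x8/Conv0_up, etc.
assert conv_params(1, 512, 1) == 513         # ToRGB_lod8 (1 output channel: grayscale data)
assert conv_params(3, 512, 256) == 1179904   # 64x64/Conv0_up
# 4x4/Dense maps the 512-d latent to 512*4*4 = 8192 values, then applies 512
# per-channel biases after reshaping to (512, 4, 4):
assert 512 * 8192 + 512 == 4194816           # 4x4/Dense
print("G parameter counts match")
```

The `(?, 1, ...)` output shapes and the 1-channel ToRGB/FromRGB weights confirm the dataset is single-channel, consistent with `Dataset shape = [1, 1024, 1024]` above.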
- D Params OutputShape WeightShape
- --- --- --- ---
- images_in - (?, 1, 1024, 1024) -
- lod - () -
- FromRGB_lod0 32 (?, 16, 1024, 1024) (1, 1, 1, 16)
- 1024x1024/Conv0 2320 (?, 16, 1024, 1024) (3, 3, 16, 16)
- 1024x1024/Conv1_down 4640 (?, 32, 512, 512) (3, 3, 16, 32)
- Downscale2D - (?, 1, 512, 512) -
- FromRGB_lod1 64 (?, 32, 512, 512) (1, 1, 1, 32)
- Grow_lod0 - (?, 32, 512, 512) -
- 512x512/Conv0 9248 (?, 32, 512, 512) (3, 3, 32, 32)
- 512x512/Conv1_down 18496 (?, 64, 256, 256) (3, 3, 32, 64)
- Downscale2D_1 - (?, 1, 256, 256) -
- FromRGB_lod2 128 (?, 64, 256, 256) (1, 1, 1, 64)
- Grow_lod1 - (?, 64, 256, 256) -
- 256x256/Conv0 36928 (?, 64, 256, 256) (3, 3, 64, 64)
- 256x256/Conv1_down 73856 (?, 128, 128, 128) (3, 3, 64, 128)
- Downscale2D_2 - (?, 1, 128, 128) -
- FromRGB_lod3 256 (?, 128, 128, 128) (1, 1, 1, 128)
- Grow_lod2 - (?, 128, 128, 128) -
- 128x128/Conv0 147584 (?, 128, 128, 128) (3, 3, 128, 128)
- 128x128/Conv1_down 295168 (?, 256, 64, 64) (3, 3, 128, 256)
- Downscale2D_3 - (?, 1, 64, 64) -
- FromRGB_lod4 512 (?, 256, 64, 64) (1, 1, 1, 256)
- Grow_lod3 - (?, 256, 64, 64) -
- 64x64/Conv0 590080 (?, 256, 64, 64) (3, 3, 256, 256)
- 64x64/Conv1_down 1180160 (?, 512, 32, 32) (3, 3, 256, 512)
- Downscale2D_4 - (?, 1, 32, 32) -
- FromRGB_lod5 1024 (?, 512, 32, 32) (1, 1, 1, 512)
- Grow_lod4 - (?, 512, 32, 32) -
- 32x32/Conv0 2359808 (?, 512, 32, 32) (3, 3, 512, 512)
- 32x32/Conv1_down 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
- Downscale2D_5 - (?, 1, 16, 16) -
- FromRGB_lod6 1024 (?, 512, 16, 16) (1, 1, 1, 512)
- Grow_lod5 - (?, 512, 16, 16) -
- 16x16/Conv0 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
- 16x16/Conv1_down 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
- Downscale2D_6 - (?, 1, 8, 8) -
- FromRGB_lod7 1024 (?, 512, 8, 8) (1, 1, 1, 512)
- Grow_lod6 - (?, 512, 8, 8) -
- 8x8/Conv0 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
- 8x8/Conv1_down 2359808 (?, 512, 4, 4) (3, 3, 512, 512)
- Downscale2D_7 - (?, 1, 4, 4) -
- FromRGB_lod8 1024 (?, 512, 4, 4) (1, 1, 1, 512)
- Grow_lod7 - (?, 512, 4, 4) -
- 4x4/MinibatchStddev - (?, 1, 4, 4) -
- 4x4/Conv 2364416 (?, 512, 4, 4) (3, 3, 513, 512)
- 4x4/Dense0 4194816 (?, 512) (8192, 512)
- 4x4/Dense1 513 (?, 1) (512, 1)
- scores_out - (?, 1) -
- labels_out - (?, 0) -
- --- --- --- ---
- Total 23082161
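One row of the D table deserves a note: `4x4/Conv` has weight shape `(3, 3, 513, 512)` because `4x4/MinibatchStddev` appends one extra feature map (the minibatch standard-deviation statistic) to the 512 incoming channels. A quick arithmetic check of the tail of the discriminator:

```python
# D's 4x4/Conv sees 512 + 1 input channels (MinibatchStddev adds one feature map).
in_ch = 512 + 1
assert 3 * 3 * in_ch * 512 + 512 == 2364416  # 4x4/Conv row of the table

# Final dense layers operate on the flattened 512*4*4 = 8192 features:
assert 8192 * 512 + 512 == 4194816           # 4x4/Dense0
assert 512 * 1 + 1 == 513                    # 4x4/Dense1 -> scores_out
print("D parameter counts match")
```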
- Building TensorFlow graph...
- Setting up snapshot image grid...
- 2018-11-17 23:14:25.772140: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.25GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
- Setting up result dir...
- Saving results to results\000-pgan-dragonballz-preset-v2-1gpu-fp32-VERBOSE
- Training...
- 2018-11-17 23:14:39.776643: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.59GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
- 2018-11-17 23:14:39.837302: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.60GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
- 2018-11-17 23:14:39.966343: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.60GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
- tick 1 kimg 1.0 lod 8.00 minibatch 128 time 15s sec/tick 15.4 sec/kimg 15.07 maintenance 54.5
- tick 2 kimg 2.0 lod 8.00 minibatch 128 time 29s sec/tick 3.1 sec/kimg 3.03 maintenance 10.9
- tick 3 kimg 3.1 lod 8.00 minibatch 128 time 33s sec/tick 3.1 sec/kimg 2.98 maintenance 0.5
- tick 4 kimg 4.1 lod 8.00 minibatch 128 time 37s sec/tick 3.0 sec/kimg 2.97 maintenance 0.5
- tick 5 kimg 5.1 lod 8.00 minibatch 128 time 40s sec/tick 3.0 sec/kimg 2.97 maintenance 0.5
- tick 6 kimg 6.1 lod 8.00 minibatch 128 time 44s sec/tick 3.1 sec/kimg 3.00 maintenance 0.4
- tick 7 kimg 7.2 lod 8.00 minibatch 128 time 47s sec/tick 3.0 sec/kimg 2.97 maintenance 0.5
- tick 8 kimg 8.2 lod 8.00 minibatch 128 time 51s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 9 kimg 9.2 lod 8.00 minibatch 128 time 54s sec/tick 3.0 sec/kimg 2.96 maintenance 0.5
- tick 10 kimg 10.2 lod 8.00 minibatch 128 time 58s sec/tick 3.1 sec/kimg 3.04 maintenance 0.4
- tick 11 kimg 11.3 lod 8.00 minibatch 128 time 1m 01s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 12 kimg 12.3 lod 8.00 minibatch 128 time 1m 05s sec/tick 3.1 sec/kimg 2.99 maintenance 0.4
- tick 13 kimg 13.3 lod 8.00 minibatch 128 time 1m 08s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 14 kimg 14.3 lod 8.00 minibatch 128 time 1m 12s sec/tick 3.0 sec/kimg 2.98 maintenance 0.5
- tick 15 kimg 15.4 lod 8.00 minibatch 128 time 1m 15s sec/tick 3.1 sec/kimg 3.00 maintenance 0.4
- tick 16 kimg 16.4 lod 8.00 minibatch 128 time 1m 19s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 17 kimg 17.4 lod 8.00 minibatch 128 time 1m 22s sec/tick 3.1 sec/kimg 2.99 maintenance 0.4
- tick 18 kimg 18.4 lod 8.00 minibatch 128 time 1m 26s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 19 kimg 19.5 lod 8.00 minibatch 128 time 1m 29s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 20 kimg 20.5 lod 8.00 minibatch 128 time 1m 33s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
- tick 21 kimg 21.5 lod 8.00 minibatch 128 time 1m 37s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 22 kimg 22.5 lod 8.00 minibatch 128 time 1m 40s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 23 kimg 23.6 lod 8.00 minibatch 128 time 1m 44s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 24 kimg 24.6 lod 8.00 minibatch 128 time 1m 47s sec/tick 3.1 sec/kimg 3.01 maintenance 0.4
- tick 25 kimg 25.6 lod 8.00 minibatch 128 time 1m 51s sec/tick 3.1 sec/kimg 2.98 maintenance 0.5
- tick 26 kimg 26.6 lod 8.00 minibatch 128 time 1m 54s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 27 kimg 27.6 lod 8.00 minibatch 128 time 1m 58s sec/tick 3.1 sec/kimg 3.05 maintenance 0.5
- tick 28 kimg 28.7 lod 8.00 minibatch 128 time 2m 01s sec/tick 3.0 sec/kimg 2.97 maintenance 0.4
- tick 29 kimg 29.7 lod 8.00 minibatch 128 time 2m 05s sec/tick 3.1 sec/kimg 3.04 maintenance 0.5
- tick 30 kimg 30.7 lod 8.00 minibatch 128 time 2m 08s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 31 kimg 31.7 lod 8.00 minibatch 128 time 2m 12s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 32 kimg 32.8 lod 8.00 minibatch 128 time 2m 16s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 33 kimg 33.8 lod 8.00 minibatch 128 time 2m 19s sec/tick 3.1 sec/kimg 3.01 maintenance 0.4
- tick 34 kimg 34.8 lod 8.00 minibatch 128 time 2m 23s sec/tick 3.1 sec/kimg 2.98 maintenance 0.5
- tick 35 kimg 35.8 lod 8.00 minibatch 128 time 2m 26s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 36 kimg 36.9 lod 8.00 minibatch 128 time 2m 30s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 37 kimg 37.9 lod 8.00 minibatch 128 time 2m 33s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 38 kimg 38.9 lod 8.00 minibatch 128 time 2m 37s sec/tick 3.1 sec/kimg 3.04 maintenance 0.4
- tick 39 kimg 39.9 lod 8.00 minibatch 128 time 2m 40s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 40 kimg 41.0 lod 8.00 minibatch 128 time 2m 44s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 41 kimg 42.0 lod 8.00 minibatch 128 time 2m 47s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 42 kimg 43.0 lod 8.00 minibatch 128 time 2m 51s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 43 kimg 44.0 lod 8.00 minibatch 128 time 2m 55s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 44 kimg 45.1 lod 8.00 minibatch 128 time 2m 58s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 45 kimg 46.1 lod 8.00 minibatch 128 time 3m 02s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 46 kimg 47.1 lod 8.00 minibatch 128 time 3m 05s sec/tick 3.1 sec/kimg 2.99 maintenance 0.4
- tick 47 kimg 48.1 lod 8.00 minibatch 128 time 3m 09s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
- tick 48 kimg 49.2 lod 8.00 minibatch 128 time 3m 12s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 49 kimg 50.2 lod 8.00 minibatch 128 time 3m 16s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 50 kimg 51.2 lod 8.00 minibatch 128 time 3m 19s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 51 kimg 52.2 lod 8.00 minibatch 128 time 3m 23s sec/tick 3.1 sec/kimg 3.01 maintenance 0.4
- tick 52 kimg 53.2 lod 8.00 minibatch 128 time 3m 27s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 53 kimg 54.3 lod 8.00 minibatch 128 time 3m 30s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 54 kimg 55.3 lod 8.00 minibatch 128 time 3m 34s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
- tick 55 kimg 56.3 lod 8.00 minibatch 128 time 3m 37s sec/tick 3.1 sec/kimg 3.02 maintenance 0.5
- tick 56 kimg 57.3 lod 8.00 minibatch 128 time 3m 41s sec/tick 3.1 sec/kimg 3.04 maintenance 0.5
- tick 57 kimg 58.4 lod 8.00 minibatch 128 time 3m 44s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 58 kimg 59.4 lod 8.00 minibatch 128 time 3m 48s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 59 kimg 60.4 lod 8.00 minibatch 128 time 3m 52s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 60 kimg 61.4 lod 8.00 minibatch 128 time 3m 55s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 61 kimg 62.5 lod 8.00 minibatch 128 time 3m 59s sec/tick 3.1 sec/kimg 2.98 maintenance 0.5
- tick 62 kimg 63.5 lod 8.00 minibatch 128 time 4m 02s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 63 kimg 64.5 lod 8.00 minibatch 128 time 4m 06s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 64 kimg 65.5 lod 8.00 minibatch 128 time 4m 09s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 65 kimg 66.6 lod 8.00 minibatch 128 time 4m 13s sec/tick 3.1 sec/kimg 3.04 maintenance 0.5
- tick 66 kimg 67.6 lod 8.00 minibatch 128 time 4m 16s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 67 kimg 68.6 lod 8.00 minibatch 128 time 4m 20s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 68 kimg 69.6 lod 8.00 minibatch 128 time 4m 24s sec/tick 3.1 sec/kimg 3.05 maintenance 0.5
- tick 69 kimg 70.7 lod 8.00 minibatch 128 time 4m 27s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 70 kimg 71.7 lod 8.00 minibatch 128 time 4m 31s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 71 kimg 72.7 lod 8.00 minibatch 128 time 4m 34s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 72 kimg 73.7 lod 8.00 minibatch 128 time 4m 38s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 73 kimg 74.8 lod 8.00 minibatch 128 time 4m 41s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 74 kimg 75.8 lod 8.00 minibatch 128 time 4m 45s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
- tick 75 kimg 76.8 lod 8.00 minibatch 128 time 4m 49s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 76 kimg 77.8 lod 8.00 minibatch 128 time 4m 52s sec/tick 3.1 sec/kimg 3.02 maintenance 0.5
- tick 77 kimg 78.8 lod 8.00 minibatch 128 time 4m 56s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 78 kimg 79.9 lod 8.00 minibatch 128 time 4m 59s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
- tick 79 kimg 80.9 lod 8.00 minibatch 128 time 5m 03s sec/tick 3.1 sec/kimg 3.00 maintenance 0.4
- tick 80 kimg 81.9 lod 8.00 minibatch 128 time 5m 06s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 81 kimg 82.9 lod 8.00 minibatch 128 time 5m 10s sec/tick 3.1 sec/kimg 3.02 maintenance 0.5
- tick 82 kimg 84.0 lod 8.00 minibatch 128 time 5m 13s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 83 kimg 85.0 lod 8.00 minibatch 128 time 5m 17s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
- tick 84 kimg 86.0 lod 8.00 minibatch 128 time 5m 21s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 85 kimg 87.0 lod 8.00 minibatch 128 time 5m 24s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 86 kimg 88.1 lod 8.00 minibatch 128 time 5m 28s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 87 kimg 89.1 lod 8.00 minibatch 128 time 5m 31s sec/tick 3.2 sec/kimg 3.10 maintenance 0.5
- tick 88 kimg 90.1 lod 8.00 minibatch 128 time 5m 35s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 89 kimg 91.1 lod 8.00 minibatch 128 time 5m 39s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
- tick 90 kimg 92.2 lod 8.00 minibatch 128 time 5m 42s sec/tick 3.0 sec/kimg 2.97 maintenance 0.5
- tick 91 kimg 93.2 lod 8.00 minibatch 128 time 5m 46s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
- tick 92 kimg 94.2 lod 8.00 minibatch 128 time 5m 49s sec/tick 3.1 sec/kimg 3.06 maintenance 0.5
- tick 93 kimg 95.2 lod 8.00 minibatch 128 time 5m 53s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- tick 94 kimg 96.3 lod 8.00 minibatch 128 time 5m 56s sec/tick 3.1 sec/kimg 3.05 maintenance 0.5
- tick 95 kimg 97.3 lod 8.00 minibatch 128 time 6m 00s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
- 2018-11-17 23:20:41.902375: W tensorflow/core/common_runtime/bfc_allocator.cc:267] Allocator (GPU_0_bfc) ran out of memory trying to allocate 512.00MiB. Current allocation summary follows.
- 2018-11-17 23:20:41.910137: W tensorflow/core/common_runtime/bfc_allocator.cc:271] ******************************______________*******************_______________********************xx
- 2018-11-17 23:20:41.914590: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at pooling_ops_common.cc:270 : Resource exhausted: OOM when allocating tensor with shape[128,1,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
- Traceback (most recent call last):
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
- return fn(*args)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
- options, feed_dict, fetch_list, target_list, run_metadata)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
- run_metadata)
- tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[128,1,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
- [[{{node GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/AvgPoolGrad}} = AvgPoolGrad[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 256, 256], padding="VALID", strides=[1, 1, 256, 256], _device="/job:localhost/replica:0/task:0/device:GPU:0"](GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/Shape, GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/FromRGB_lod8/Conv2D_grad/Conv2DBackpropInput)]]
- Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
- [[{{node TrainD/ApplyGrads0/UpdateWeights/cond/pred_id/_1849}} = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_37371_TrainD/ApplyGrads0/UpdateWeights/cond/pred_id", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
- Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
- During handling of the above exception, another exception occurred:
- Traceback (most recent call last):
- File "train.py", line 285, in <module>
- tfutil.call_func_by_name(**config.train)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 236, in call_func_by_name
- return import_obj(func)(*args, **kwargs)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\train.py", line 229, in train_progressive_gan
- tfutil.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch})
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 21, in run
- return tf.get_default_session().run(*args, **kwargs)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
- run_metadata_ptr)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
- feed_dict_tensor, options, run_metadata)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
- run_metadata)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
- raise type(e)(node_def, op, message)
- tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[128,1,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
- [[node GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/AvgPoolGrad (defined at C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\loss.py:63) = AvgPoolGrad[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 256, 256], padding="VALID", strides=[1, 1, 256, 256], _device="/job:localhost/replica:0/task:0/device:GPU:0"](GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/Shape, GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/FromRGB_lod8/Conv2D_grad/Conv2DBackpropInput)]]
- Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
- [[{{node TrainD/ApplyGrads0/UpdateWeights/cond/pred_id/_1849}} = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_37371_TrainD/ApplyGrads0/UpdateWeights/cond/pred_id", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
- Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
- Caused by op 'GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/AvgPoolGrad', defined at:
- File "train.py", line 285, in <module>
- tfutil.call_func_by_name(**config.train)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 236, in call_func_by_name
- return import_obj(func)(*args, **kwargs)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\train.py", line 188, in train_progressive_gan
- D_loss = tfutil.call_func_by_name(G=G_gpu, D=D_gpu, opt=D_opt, training_set=training_set, minibatch_size=minibatch_split, reals=reals_gpu, labels=labels_gpu, **config.D_loss)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 236, in call_func_by_name
- return import_obj(func)(*args, **kwargs)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\loss.py", line 63, in D_wgangp_acgan
- mixed_grads = opt.undo_loss_scaling(fp32(tf.gradients(mixed_loss, [mixed_images_out])[0]))
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 630, in gradients
- gate_gradients, aggregation_method, stop_gradients)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 814, in _GradientsHelper
- lambda: grad_fn(op, *out_grads))
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 408, in _MaybeCompile
- return grad_fn() # Exit early
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 814, in <lambda>
- lambda: grad_fn(op, *out_grads))
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\nn_grad.py", line 584, in _AvgPoolGrad
- data_format=op.get_attr("data_format"))
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 417, in avg_pool_grad
- data_format=data_format, name=name)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
- op_def=op_def)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
- return func(*args, **kwargs)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
- op_def=op_def)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
- self._traceback = tf_stack.extract_stack()
- ...which was originally created as op 'GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool', defined at:
- File "train.py", line 285, in <module>
- tfutil.call_func_by_name(**config.train)
- [elided 2 identical lines from previous traceback]
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 236, in call_func_by_name
- return import_obj(func)(*args, **kwargs)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\loss.py", line 60, in D_wgangp_acgan
- mixed_scores_out, mixed_labels_out = fp32(D.get_output_for(mixed_images_out, is_training=True))
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 509, in get_output_for
- out_expr = self._build_func(*named_inputs, **all_kwargs)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 308, in D_paper
- combo_out = grow(2, resolution_log2 - 2)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 305, in grow
- x = block(x(), res); y = lambda: x
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 17, in <lambda>
- def cset(cur_lambda, new_cond, new_lambda): return lambda: tf.cond(new_cond, new_lambda, cur_lambda)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
- return func(*args, **kwargs)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2097, in cond
- orig_res_f, res_f = context_f.BuildCondBranch(false_fn)
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 1930, in BuildCondBranch
- original_result = fn()
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 303, in <lambda>
- x = lambda: fromrgb(downscale2d(images_in, 2**lod), res)
- File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 103, in downscale2d
- return tf.nn.avg_pool(x, ksize=ksize, strides=ksize, padding='VALID', data_format='NCHW') # NOTE: requires tf_config['graph_options.place_pruned_graph'] = True
- File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 2110, in avg_pool
- name=name)
- ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[128,1,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
- [[node GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/AvgPoolGrad (defined at C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\loss.py:63) = AvgPoolGrad[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 256, 256], padding="VALID", strides=[1, 1, 256, 256], _device="/job:localhost/replica:0/task:0/device:GPU:0"](GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/Shape, GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/FromRGB_lod8/Conv2D_grad/Conv2DBackpropInput)]]
- Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
- [[{{node TrainD/ApplyGrads0/UpdateWeights/cond/pred_id/_1849}} = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_37371_TrainD/ApplyGrads0/UpdateWeights/cond/pred_id", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
- Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
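The fatal allocation is a float32 tensor of shape `[128, 1, 1024, 1024]`, created by the gradient-penalty term (`loss.py:63`), which backpropagates through a full-resolution `Downscale2D` even though training is still at lod 8 (4x4). Quick arithmetic shows this single buffer is exactly the 512 MiB the allocator reports failing on:

```python
# Size of the float32 tensor named in the OOM message: [128, 1, 1024, 1024].
minibatch, channels, h, w = 128, 1, 1024, 1024
tensor_bytes = minibatch * channels * h * w * 4  # 4 bytes per float32

assert tensor_bytes == 512 * 2**20  # exactly the "512.00MiB" the allocator failed to get
print(tensor_bytes / 2**20, "MiB")  # 512.0 MiB; halving the minibatch halves this buffer
```

Since the buffer scales linearly with the minibatch, reducing the minibatch for the early (low-resolution) schedule entries is the usual workaround. In the official repo this is controlled from `config.py` via `sched.minibatch_base` and `sched.minibatch_dict` in the chosen preset (here `-preset-v2-1gpu`); halving the large entries (e.g. 128 → 64) is a plausible fix for this GPU's memory budget, not a guaranteed one, and the exact values to use depend on the card.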