Microsoft Windows [Version 10.0.17134.407]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master>python train.py
Initializing TensorFlow...
Running train.train_progressive_gan()...
Streaming data using dataset.TFRecordDataset...
self.tfrecord_dir: C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\dataset
Dataset shape = [1, 1024, 1024]
Dynamic range = [0, 255]
Label size = 0
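
Note: the dataset block above says the TFRecords hold single-channel 1024x1024 images ([1, 1024, 1024]) with pixel values in [0, 255] and no labels. A quick way to confirm what a record actually contains is to open one file from the dataset directory; this is a hedged sketch (the file name is hypothetical, and the 'shape' feature key is an assumption about how dataset_tool.py wrote the records, so the script first prints whatever keys are present):

import tensorflow as tf

path = r"dataset\dragonballz-r10.tfrecords"  # hypothetical file name
for record in tf.python_io.tf_record_iterator(path):
    ex = tf.train.Example()
    ex.ParseFromString(record)
    print(sorted(ex.features.feature.keys()))  # which features each record stores
    if 'shape' in ex.features.feature:
        print(list(ex.features.feature['shape'].int64_list.value))  # expect [1, 1024, 1024]
    break
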
Constructing networks...

G Params OutputShape WeightShape
--- --- --- ---
latents_in - (?, 512) -
labels_in - (?, 0) -
lod - () -
4x4/PixelNorm - (?, 512) -
4x4/Dense 4194816 (?, 512, 4, 4) (512, 8192)
4x4/Conv 2359808 (?, 512, 4, 4) (3, 3, 512, 512)
ToRGB_lod8 513 (?, 1, 4, 4) (1, 1, 512, 1)
8x8/Conv0_up 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
8x8/Conv1 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
ToRGB_lod7 513 (?, 1, 8, 8) (1, 1, 512, 1)
Upscale2D - (?, 1, 8, 8) -
Grow_lod7 - (?, 1, 8, 8) -
16x16/Conv0_up 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
16x16/Conv1 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
ToRGB_lod6 513 (?, 1, 16, 16) (1, 1, 512, 1)
Upscale2D_1 - (?, 1, 16, 16) -
Grow_lod6 - (?, 1, 16, 16) -
32x32/Conv0_up 2359808 (?, 512, 32, 32) (3, 3, 512, 512)
32x32/Conv1 2359808 (?, 512, 32, 32) (3, 3, 512, 512)
ToRGB_lod5 513 (?, 1, 32, 32) (1, 1, 512, 1)
Upscale2D_2 - (?, 1, 32, 32) -
Grow_lod5 - (?, 1, 32, 32) -
64x64/Conv0_up 1179904 (?, 256, 64, 64) (3, 3, 256, 512)
64x64/Conv1 590080 (?, 256, 64, 64) (3, 3, 256, 256)
ToRGB_lod4 257 (?, 1, 64, 64) (1, 1, 256, 1)
Upscale2D_3 - (?, 1, 64, 64) -
Grow_lod4 - (?, 1, 64, 64) -
128x128/Conv0_up 295040 (?, 128, 128, 128) (3, 3, 128, 256)
128x128/Conv1 147584 (?, 128, 128, 128) (3, 3, 128, 128)
ToRGB_lod3 129 (?, 1, 128, 128) (1, 1, 128, 1)
Upscale2D_4 - (?, 1, 128, 128) -
Grow_lod3 - (?, 1, 128, 128) -
256x256/Conv0_up 73792 (?, 64, 256, 256) (3, 3, 64, 128)
256x256/Conv1 36928 (?, 64, 256, 256) (3, 3, 64, 64)
ToRGB_lod2 65 (?, 1, 256, 256) (1, 1, 64, 1)
Upscale2D_5 - (?, 1, 256, 256) -
Grow_lod2 - (?, 1, 256, 256) -
512x512/Conv0_up 18464 (?, 32, 512, 512) (3, 3, 32, 64)
512x512/Conv1 9248 (?, 32, 512, 512) (3, 3, 32, 32)
ToRGB_lod1 33 (?, 1, 512, 512) (1, 1, 32, 1)
Upscale2D_6 - (?, 1, 512, 512) -
Grow_lod1 - (?, 1, 512, 512) -
1024x1024/Conv0_up 4624 (?, 16, 1024, 1024) (3, 3, 16, 32)
1024x1024/Conv1 2320 (?, 16, 1024, 1024) (3, 3, 16, 16)
ToRGB_lod0 17 (?, 1, 1024, 1024) (1, 1, 16, 1)
Upscale2D_7 - (?, 1, 1024, 1024) -
Grow_lod0 - (?, 1, 1024, 1024) -
images_out - (?, 1, 1024, 1024) -
--- --- --- ---
Total 23074009

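Note: every parameter count in the generator table can be reproduced from the printed weight shape plus one bias per output feature map. A plain-Python check against a few rows above (the helper name is just for illustration, not from the repo):

def layer_params(weight_shape, out_channels):
    n = 1
    for d in weight_shape:
        n *= d
    return n + out_channels  # one bias per output feature map

print(layer_params((512, 8192), 512))       # 4x4/Dense      -> 4194816
print(layer_params((3, 3, 512, 512), 512))  # 4x4/Conv       -> 2359808
print(layer_params((1, 1, 512, 1), 1))      # ToRGB_lod8     -> 513
print(layer_params((3, 3, 256, 512), 256))  # 64x64/Conv0_up -> 1179904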

D Params OutputShape WeightShape
--- --- --- ---
images_in - (?, 1, 1024, 1024) -
lod - () -
FromRGB_lod0 32 (?, 16, 1024, 1024) (1, 1, 1, 16)
1024x1024/Conv0 2320 (?, 16, 1024, 1024) (3, 3, 16, 16)
1024x1024/Conv1_down 4640 (?, 32, 512, 512) (3, 3, 16, 32)
Downscale2D - (?, 1, 512, 512) -
FromRGB_lod1 64 (?, 32, 512, 512) (1, 1, 1, 32)
Grow_lod0 - (?, 32, 512, 512) -
512x512/Conv0 9248 (?, 32, 512, 512) (3, 3, 32, 32)
512x512/Conv1_down 18496 (?, 64, 256, 256) (3, 3, 32, 64)
Downscale2D_1 - (?, 1, 256, 256) -
FromRGB_lod2 128 (?, 64, 256, 256) (1, 1, 1, 64)
Grow_lod1 - (?, 64, 256, 256) -
256x256/Conv0 36928 (?, 64, 256, 256) (3, 3, 64, 64)
256x256/Conv1_down 73856 (?, 128, 128, 128) (3, 3, 64, 128)
Downscale2D_2 - (?, 1, 128, 128) -
FromRGB_lod3 256 (?, 128, 128, 128) (1, 1, 1, 128)
Grow_lod2 - (?, 128, 128, 128) -
128x128/Conv0 147584 (?, 128, 128, 128) (3, 3, 128, 128)
128x128/Conv1_down 295168 (?, 256, 64, 64) (3, 3, 128, 256)
Downscale2D_3 - (?, 1, 64, 64) -
FromRGB_lod4 512 (?, 256, 64, 64) (1, 1, 1, 256)
Grow_lod3 - (?, 256, 64, 64) -
64x64/Conv0 590080 (?, 256, 64, 64) (3, 3, 256, 256)
64x64/Conv1_down 1180160 (?, 512, 32, 32) (3, 3, 256, 512)
Downscale2D_4 - (?, 1, 32, 32) -
FromRGB_lod5 1024 (?, 512, 32, 32) (1, 1, 1, 512)
Grow_lod4 - (?, 512, 32, 32) -
32x32/Conv0 2359808 (?, 512, 32, 32) (3, 3, 512, 512)
32x32/Conv1_down 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
Downscale2D_5 - (?, 1, 16, 16) -
FromRGB_lod6 1024 (?, 512, 16, 16) (1, 1, 1, 512)
Grow_lod5 - (?, 512, 16, 16) -
16x16/Conv0 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
16x16/Conv1_down 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
Downscale2D_6 - (?, 1, 8, 8) -
FromRGB_lod7 1024 (?, 512, 8, 8) (1, 1, 1, 512)
Grow_lod6 - (?, 512, 8, 8) -
8x8/Conv0 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
8x8/Conv1_down 2359808 (?, 512, 4, 4) (3, 3, 512, 512)
Downscale2D_7 - (?, 1, 4, 4) -
FromRGB_lod8 1024 (?, 512, 4, 4) (1, 1, 1, 512)
Grow_lod7 - (?, 512, 4, 4) -
4x4/MinibatchStddev - (?, 1, 4, 4) -
4x4/Conv 2364416 (?, 512, 4, 4) (3, 3, 513, 512)
4x4/Dense0 4194816 (?, 512) (8192, 512)
4x4/Dense1 513 (?, 1) (512, 1)
scores_out - (?, 1) -
labels_out - (?, 0) -
--- --- --- ---
Total 23082161

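Note: the one row in the discriminator table that looks odd at first is 4x4/Conv with weight shape (3, 3, 513, 512): the 4x4/MinibatchStddev layer appends a single statistics feature map to its 512-channel input, so the following 3x3 convolution sees 513 input channels. The same bias-per-output-channel rule from above reproduces the count:

print(3 * 3 * 513 * 512 + 512)  # 4x4/Conv in D -> 2364416, matching the table
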
Building TensorFlow graph...
Setting up snapshot image grid...
2018-11-17 23:14:25.772140: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.25GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
Setting up result dir...
Saving results to results\000-pgan-dragonballz-preset-v2-1gpu-fp32-VERBOSE
Training...
2018-11-17 23:14:39.776643: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.59GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-11-17 23:14:39.837302: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.60GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-11-17 23:14:39.966343: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.60GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
tick 1 kimg 1.0 lod 8.00 minibatch 128 time 15s sec/tick 15.4 sec/kimg 15.07 maintenance 54.5
tick 2 kimg 2.0 lod 8.00 minibatch 128 time 29s sec/tick 3.1 sec/kimg 3.03 maintenance 10.9
tick 3 kimg 3.1 lod 8.00 minibatch 128 time 33s sec/tick 3.1 sec/kimg 2.98 maintenance 0.5
tick 4 kimg 4.1 lod 8.00 minibatch 128 time 37s sec/tick 3.0 sec/kimg 2.97 maintenance 0.5
tick 5 kimg 5.1 lod 8.00 minibatch 128 time 40s sec/tick 3.0 sec/kimg 2.97 maintenance 0.5
tick 6 kimg 6.1 lod 8.00 minibatch 128 time 44s sec/tick 3.1 sec/kimg 3.00 maintenance 0.4
tick 7 kimg 7.2 lod 8.00 minibatch 128 time 47s sec/tick 3.0 sec/kimg 2.97 maintenance 0.5
tick 8 kimg 8.2 lod 8.00 minibatch 128 time 51s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 9 kimg 9.2 lod 8.00 minibatch 128 time 54s sec/tick 3.0 sec/kimg 2.96 maintenance 0.5
tick 10 kimg 10.2 lod 8.00 minibatch 128 time 58s sec/tick 3.1 sec/kimg 3.04 maintenance 0.4
tick 11 kimg 11.3 lod 8.00 minibatch 128 time 1m 01s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 12 kimg 12.3 lod 8.00 minibatch 128 time 1m 05s sec/tick 3.1 sec/kimg 2.99 maintenance 0.4
tick 13 kimg 13.3 lod 8.00 minibatch 128 time 1m 08s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 14 kimg 14.3 lod 8.00 minibatch 128 time 1m 12s sec/tick 3.0 sec/kimg 2.98 maintenance 0.5
tick 15 kimg 15.4 lod 8.00 minibatch 128 time 1m 15s sec/tick 3.1 sec/kimg 3.00 maintenance 0.4
tick 16 kimg 16.4 lod 8.00 minibatch 128 time 1m 19s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 17 kimg 17.4 lod 8.00 minibatch 128 time 1m 22s sec/tick 3.1 sec/kimg 2.99 maintenance 0.4
tick 18 kimg 18.4 lod 8.00 minibatch 128 time 1m 26s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 19 kimg 19.5 lod 8.00 minibatch 128 time 1m 29s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 20 kimg 20.5 lod 8.00 minibatch 128 time 1m 33s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
tick 21 kimg 21.5 lod 8.00 minibatch 128 time 1m 37s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 22 kimg 22.5 lod 8.00 minibatch 128 time 1m 40s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 23 kimg 23.6 lod 8.00 minibatch 128 time 1m 44s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 24 kimg 24.6 lod 8.00 minibatch 128 time 1m 47s sec/tick 3.1 sec/kimg 3.01 maintenance 0.4
tick 25 kimg 25.6 lod 8.00 minibatch 128 time 1m 51s sec/tick 3.1 sec/kimg 2.98 maintenance 0.5
tick 26 kimg 26.6 lod 8.00 minibatch 128 time 1m 54s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 27 kimg 27.6 lod 8.00 minibatch 128 time 1m 58s sec/tick 3.1 sec/kimg 3.05 maintenance 0.5
tick 28 kimg 28.7 lod 8.00 minibatch 128 time 2m 01s sec/tick 3.0 sec/kimg 2.97 maintenance 0.4
tick 29 kimg 29.7 lod 8.00 minibatch 128 time 2m 05s sec/tick 3.1 sec/kimg 3.04 maintenance 0.5
tick 30 kimg 30.7 lod 8.00 minibatch 128 time 2m 08s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 31 kimg 31.7 lod 8.00 minibatch 128 time 2m 12s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 32 kimg 32.8 lod 8.00 minibatch 128 time 2m 16s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 33 kimg 33.8 lod 8.00 minibatch 128 time 2m 19s sec/tick 3.1 sec/kimg 3.01 maintenance 0.4
tick 34 kimg 34.8 lod 8.00 minibatch 128 time 2m 23s sec/tick 3.1 sec/kimg 2.98 maintenance 0.5
tick 35 kimg 35.8 lod 8.00 minibatch 128 time 2m 26s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 36 kimg 36.9 lod 8.00 minibatch 128 time 2m 30s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 37 kimg 37.9 lod 8.00 minibatch 128 time 2m 33s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 38 kimg 38.9 lod 8.00 minibatch 128 time 2m 37s sec/tick 3.1 sec/kimg 3.04 maintenance 0.4
tick 39 kimg 39.9 lod 8.00 minibatch 128 time 2m 40s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 40 kimg 41.0 lod 8.00 minibatch 128 time 2m 44s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 41 kimg 42.0 lod 8.00 minibatch 128 time 2m 47s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 42 kimg 43.0 lod 8.00 minibatch 128 time 2m 51s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 43 kimg 44.0 lod 8.00 minibatch 128 time 2m 55s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 44 kimg 45.1 lod 8.00 minibatch 128 time 2m 58s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 45 kimg 46.1 lod 8.00 minibatch 128 time 3m 02s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 46 kimg 47.1 lod 8.00 minibatch 128 time 3m 05s sec/tick 3.1 sec/kimg 2.99 maintenance 0.4
tick 47 kimg 48.1 lod 8.00 minibatch 128 time 3m 09s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
tick 48 kimg 49.2 lod 8.00 minibatch 128 time 3m 12s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 49 kimg 50.2 lod 8.00 minibatch 128 time 3m 16s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 50 kimg 51.2 lod 8.00 minibatch 128 time 3m 19s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 51 kimg 52.2 lod 8.00 minibatch 128 time 3m 23s sec/tick 3.1 sec/kimg 3.01 maintenance 0.4
tick 52 kimg 53.2 lod 8.00 minibatch 128 time 3m 27s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 53 kimg 54.3 lod 8.00 minibatch 128 time 3m 30s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 54 kimg 55.3 lod 8.00 minibatch 128 time 3m 34s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
tick 55 kimg 56.3 lod 8.00 minibatch 128 time 3m 37s sec/tick 3.1 sec/kimg 3.02 maintenance 0.5
tick 56 kimg 57.3 lod 8.00 minibatch 128 time 3m 41s sec/tick 3.1 sec/kimg 3.04 maintenance 0.5
tick 57 kimg 58.4 lod 8.00 minibatch 128 time 3m 44s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 58 kimg 59.4 lod 8.00 minibatch 128 time 3m 48s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 59 kimg 60.4 lod 8.00 minibatch 128 time 3m 52s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 60 kimg 61.4 lod 8.00 minibatch 128 time 3m 55s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 61 kimg 62.5 lod 8.00 minibatch 128 time 3m 59s sec/tick 3.1 sec/kimg 2.98 maintenance 0.5
tick 62 kimg 63.5 lod 8.00 minibatch 128 time 4m 02s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 63 kimg 64.5 lod 8.00 minibatch 128 time 4m 06s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 64 kimg 65.5 lod 8.00 minibatch 128 time 4m 09s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 65 kimg 66.6 lod 8.00 minibatch 128 time 4m 13s sec/tick 3.1 sec/kimg 3.04 maintenance 0.5
tick 66 kimg 67.6 lod 8.00 minibatch 128 time 4m 16s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 67 kimg 68.6 lod 8.00 minibatch 128 time 4m 20s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 68 kimg 69.6 lod 8.00 minibatch 128 time 4m 24s sec/tick 3.1 sec/kimg 3.05 maintenance 0.5
tick 69 kimg 70.7 lod 8.00 minibatch 128 time 4m 27s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 70 kimg 71.7 lod 8.00 minibatch 128 time 4m 31s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 71 kimg 72.7 lod 8.00 minibatch 128 time 4m 34s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 72 kimg 73.7 lod 8.00 minibatch 128 time 4m 38s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 73 kimg 74.8 lod 8.00 minibatch 128 time 4m 41s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 74 kimg 75.8 lod 8.00 minibatch 128 time 4m 45s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
tick 75 kimg 76.8 lod 8.00 minibatch 128 time 4m 49s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 76 kimg 77.8 lod 8.00 minibatch 128 time 4m 52s sec/tick 3.1 sec/kimg 3.02 maintenance 0.5
tick 77 kimg 78.8 lod 8.00 minibatch 128 time 4m 56s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 78 kimg 79.9 lod 8.00 minibatch 128 time 4m 59s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
tick 79 kimg 80.9 lod 8.00 minibatch 128 time 5m 03s sec/tick 3.1 sec/kimg 3.00 maintenance 0.4
tick 80 kimg 81.9 lod 8.00 minibatch 128 time 5m 06s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 81 kimg 82.9 lod 8.00 minibatch 128 time 5m 10s sec/tick 3.1 sec/kimg 3.02 maintenance 0.5
tick 82 kimg 84.0 lod 8.00 minibatch 128 time 5m 13s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 83 kimg 85.0 lod 8.00 minibatch 128 time 5m 17s sec/tick 3.1 sec/kimg 3.03 maintenance 0.5
tick 84 kimg 86.0 lod 8.00 minibatch 128 time 5m 21s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 85 kimg 87.0 lod 8.00 minibatch 128 time 5m 24s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 86 kimg 88.1 lod 8.00 minibatch 128 time 5m 28s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 87 kimg 89.1 lod 8.00 minibatch 128 time 5m 31s sec/tick 3.2 sec/kimg 3.10 maintenance 0.5
tick 88 kimg 90.1 lod 8.00 minibatch 128 time 5m 35s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 89 kimg 91.1 lod 8.00 minibatch 128 time 5m 39s sec/tick 3.1 sec/kimg 3.00 maintenance 0.5
tick 90 kimg 92.2 lod 8.00 minibatch 128 time 5m 42s sec/tick 3.0 sec/kimg 2.97 maintenance 0.5
tick 91 kimg 93.2 lod 8.00 minibatch 128 time 5m 46s sec/tick 3.1 sec/kimg 2.99 maintenance 0.5
tick 92 kimg 94.2 lod 8.00 minibatch 128 time 5m 49s sec/tick 3.1 sec/kimg 3.06 maintenance 0.5
tick 93 kimg 95.2 lod 8.00 minibatch 128 time 5m 53s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
tick 94 kimg 96.3 lod 8.00 minibatch 128 time 5m 56s sec/tick 3.1 sec/kimg 3.05 maintenance 0.5
tick 95 kimg 97.3 lod 8.00 minibatch 128 time 6m 00s sec/tick 3.1 sec/kimg 3.01 maintenance 0.5
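
Note: each tick line above reports cumulative images shown (kimg), the current level of detail (lod 8.00, i.e. the run is still at the lowest 4x4 resolution stage), the minibatch size, wall-clock time, per-tick and per-kimg speed, and maintenance overhead. A throwaway sketch (not part of the repo; the log file name is hypothetical) for pulling those fields out of a saved copy of this console output, e.g. to plot sec/kimg:

import re

TICK_RE = re.compile(
    r"tick\s+(\d+)\s+kimg\s+([\d.]+)\s+lod\s+([\d.]+)\s+minibatch\s+(\d+)"
    r"\s+time\s+(.+?)\s+sec/tick\s+([\d.]+)\s+sec/kimg\s+([\d.]+)\s+maintenance\s+([\d.]+)"
)

def parse_ticks(path):
    ticks = []
    with open(path) as f:
        for line in f:
            m = TICK_RE.search(line)
            if m:
                ticks.append({
                    "tick": int(m.group(1)),
                    "kimg": float(m.group(2)),
                    "lod": float(m.group(3)),
                    "minibatch": int(m.group(4)),
                    "sec_per_kimg": float(m.group(7)),
                })
    return ticks

ticks = parse_ticks("train_log.txt")  # hypothetical file holding this console output
print(len(ticks), "ticks,", ticks[-1]["kimg"], "kimg so far")
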
2018-11-17 23:20:41.902375: W tensorflow/core/common_runtime/bfc_allocator.cc:267] Allocator (GPU_0_bfc) ran out of memory trying to allocate 512.00MiB. Current allocation summary follows.
2018-11-17 23:20:41.910137: W tensorflow/core/common_runtime/bfc_allocator.cc:271] ******************************______________*******************_______________********************xx
2018-11-17 23:20:41.914590: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at pooling_ops_common.cc:270 : Resource exhausted: OOM when allocating tensor with shape[128,1,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[128,1,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/AvgPoolGrad}} = AvgPoolGrad[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 256, 256], padding="VALID", strides=[1, 1, 256, 256], _device="/job:localhost/replica:0/task:0/device:GPU:0"](GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/Shape, GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/FromRGB_lod8/Conv2D_grad/Conv2DBackpropInput)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[{{node TrainD/ApplyGrads0/UpdateWeights/cond/pred_id/_1849}} = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_37371_TrainD/ApplyGrads0/UpdateWeights/cond/pred_id", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 285, in <module>
    tfutil.call_func_by_name(**config.train)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 236, in call_func_by_name
    return import_obj(func)(*args, **kwargs)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\train.py", line 229, in train_progressive_gan
    tfutil.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch})
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 21, in run
    return tf.get_default_session().run(*args, **kwargs)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[128,1,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/AvgPoolGrad (defined at C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\loss.py:63) = AvgPoolGrad[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 256, 256], padding="VALID", strides=[1, 1, 256, 256], _device="/job:localhost/replica:0/task:0/device:GPU:0"](GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/Shape, GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/FromRGB_lod8/Conv2D_grad/Conv2DBackpropInput)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[{{node TrainD/ApplyGrads0/UpdateWeights/cond/pred_id/_1849}} = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_37371_TrainD/ApplyGrads0/UpdateWeights/cond/pred_id", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op 'GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/AvgPoolGrad', defined at:
  File "train.py", line 285, in <module>
    tfutil.call_func_by_name(**config.train)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 236, in call_func_by_name
    return import_obj(func)(*args, **kwargs)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\train.py", line 188, in train_progressive_gan
    D_loss = tfutil.call_func_by_name(G=G_gpu, D=D_gpu, opt=D_opt, training_set=training_set, minibatch_size=minibatch_split, reals=reals_gpu, labels=labels_gpu, **config.D_loss)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 236, in call_func_by_name
    return import_obj(func)(*args, **kwargs)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\loss.py", line 63, in D_wgangp_acgan
    mixed_grads = opt.undo_loss_scaling(fp32(tf.gradients(mixed_loss, [mixed_images_out])[0]))
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 630, in gradients
    gate_gradients, aggregation_method, stop_gradients)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 814, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 408, in _MaybeCompile
    return grad_fn() # Exit early
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 814, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\nn_grad.py", line 584, in _AvgPoolGrad
    data_format=op.get_attr("data_format"))
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 417, in avg_pool_grad
    data_format=data_format, name=name)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
    op_def=op_def)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

...which was originally created as op 'GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool', defined at:
  File "train.py", line 285, in <module>
    tfutil.call_func_by_name(**config.train)
[elided 2 identical lines from previous traceback]
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 236, in call_func_by_name
    return import_obj(func)(*args, **kwargs)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\loss.py", line 60, in D_wgangp_acgan
    mixed_scores_out, mixed_labels_out = fp32(D.get_output_for(mixed_images_out, is_training=True))
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\tfutil.py", line 509, in get_output_for
    out_expr = self._build_func(*named_inputs, **all_kwargs)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 308, in D_paper
    combo_out = grow(2, resolution_log2 - 2)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 305, in grow
    x = block(x(), res); y = lambda: x
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 17, in <lambda>
    def cset(cur_lambda, new_cond, new_lambda): return lambda: tf.cond(new_cond, new_lambda, cur_lambda)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2097, in cond
    orig_res_f, res_f = context_f.BuildCondBranch(false_fn)
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 1930, in BuildCondBranch
    original_result = fn()
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 303, in <lambda>
    x = lambda: fromrgb(downscale2d(images_in, 2**lod), res)
  File "C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\networks.py", line 103, in downscale2d
    return tf.nn.avg_pool(x, ksize=ksize, strides=ksize, padding='VALID', data_format='NCHW') # NOTE: requires tf_config['graph_options.place_pruned_graph'] = True
  File "C:\Users\Mytino\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 2110, in avg_pool
    name=name)

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[128,1,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/AvgPoolGrad (defined at C:\Users\Mytino\Documents\Programming\Python\Dragon_Ball_GAN\progressive_growing_of_gans-master\loss.py:63) = AvgPoolGrad[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 256, 256], padding="VALID", strides=[1, 1, 256, 256], _device="/job:localhost/replica:0/task:0/device:GPU:0"](GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/Downscale2D/AvgPool_grad/Shape, GPU0/D_loss/GradientPenalty/gradients/GPU0/D_loss/GradientPenalty/D/cond/FromRGB_lod8/Conv2D_grad/Conv2DBackpropInput)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[{{node TrainD/ApplyGrads0/UpdateWeights/cond/pred_id/_1849}} = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_37371_TrainD/ApplyGrads0/UpdateWeights/cond/pred_id", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
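
Note: unlike the earlier "not a failure" allocator warnings, this is a hard out-of-memory error that aborts the run. The WGAN-GP gradient penalty in loss.py backpropagates through the discriminator with respect to the full-resolution mixed images, and the failing tensor of shape [128, 1, 1024, 1024] in float32 is exactly 128 x 1024 x 1024 x 4 bytes = 512 MiB, matching the allocator message. A minimal TF 1.x sketch of the option the hint names (plain TensorFlow API, not wired into the repo's tfutil.run wrapper; session, fetches, and feed_dict are placeholders):

import tensorflow as tf

# Ask TensorFlow to dump the list of live tensor allocations when an OOM occurs,
# as the "report_tensor_allocations_upon_oom" hint above suggests.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)
# session.run(fetches, feed_dict=feed_dict, options=run_options)

The more common remedy is simply a smaller minibatch: the schedule above runs 128 images per step while the discriminator graph still carries 1024x1024 tensors for the gradient penalty. In the public progressive_growing_of_gans repo the per-resolution minibatch sizes are set in config.py via sched.minibatch_base and sched.minibatch_dict (whether this local copy exposes the same names is an assumption); lowering the largest entries, or training toward a smaller target resolution, shrinks the [128, 1, 1024, 1024] tensor that failed here.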