Untitled
a guest
Jun 28th, 2018
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[64,64,128,128] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: training_1/Adam/gradients/model_3/conv2d_37/convolution_grad/Conv2DBackpropInput = Conv2DBackpropInput[T=DT_FLOAT, _class=["loc:@model_3/conv2d_37/convolution"], data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training_1/Adam/gradients/model_3/conv2d_37/convolution_grad/Conv2DBackpropInput-0-VecPermuteNHWCToNCHW-LayoutOptimizer/_1135, conv2d_37/kernel/read, training_1/Adam/gradients/model_3/conv2d_37/Sigmoid_grad/SigmoidGrad)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
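For context, a quick back-of-the-envelope sketch of how much memory the failing tensor alone needs (assuming 4 bytes per element, since the error reports `type float`, i.e. float32):

```python
from math import prod

# Shape taken from the error message: [64, 64, 128, 128]
shape = (64, 64, 128, 128)
bytes_needed = prod(shape) * 4  # float32 is 4 bytes per element

print(bytes_needed)           # total bytes for this one tensor
print(bytes_needed / 2**20)   # same figure in MiB
```

That single gradient tensor is 256 MiB, and the backward pass allocates many such buffers at once, which is why the BFC allocator runs out of GPU memory; lowering the batch size (the leading 64) shrinks it proportionally. The hint at the end refers to the TensorFlow 1.x `RunOptions` proto, roughly `tf.RunOptions(report_tensor_allocations_upon_oom=True)` passed as `session.run(..., options=run_options)`; with Keras, this would typically need to be threaded through the session or compile options.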