2018-01-27 14:53:01.837268: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 981504 totalling 2.81MiB
2018-01-27 14:53:01.837285: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:679] 6 Chunks of size 1048576 totalling 6.00MiB
2018-01-27 14:53:01.837302: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 1179648 totalling 1.13MiB
2018-01-27 14:53:01.837318: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 2097152 totalling 2.00MiB
2018-01-27 14:53:01.837335: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 4194304 totalling 4.00MiB
2018-01-27 14:53:01.837351: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 6539264 totalling 6.24MiB
2018-01-27 14:53:01.837368: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 6755527168 totalling 6.29GiB
2018-01-27 14:53:01.837384: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:683] Sum Total of in-use chunks: 6.32GiB
2018-01-27 14:53:01.837403: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:685] Stats:
Limit: 6787871540
InUse: 6787868928
MaxInUse: 6787871488
NumAllocs: 10818
MaxAllocSize: 6755527168
2018-01-27 14:53:01.837496: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:277] *******************************************************************xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
2018-01-27 14:53:01.837520: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\framework\op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[3548,3275,3]
Traceback (most recent call last):
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1323, in _do_call
    return fn(*args)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1302, in _run_fn
    status, run_metadata)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InternalError: Dst tensor is not initialized.
[[Node: FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_2_3x3_s2_512/BatchNorm/gamma/read/_6427 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_15470_FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_2_3x3_s2_512/BatchNorm/gamma/read", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
[[Node: Loss/strided_slice_23/_5711 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_9593_Loss/strided_slice_23", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\training\supervisor.py", line 954, in managed_session
    yield sess
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\contrib\slim\python\slim\learning.py", line 763, in train
    sess, train_op, global_step, train_step_kwargs)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\contrib\slim\python\slim\learning.py", line 487, in train_step
    run_metadata=run_metadata)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\client\session.py", line 889, in run
    run_metadata_ptr)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _do_run
    options, run_metadata)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Dst tensor is not initialized.
[[Node: FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_2_3x3_s2_512/BatchNorm/gamma/read/_6427 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_15470_FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_2_3x3_s2_512/BatchNorm/gamma/read", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
[[Node: Loss/strided_slice_23/_5711 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_9593_Loss/strided_slice_23", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 198, in <module>
    tf.app.run()
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "train.py", line 194, in main
    worker_job_name, is_chief, FLAGS.train_dir)
  File "C:\Users\ITML.LAB\models\object_detection\trainer.py", line 296, in train
    saver=saver)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\contrib\slim\python\slim\learning.py", line 775, in train
    sv.stop(threads, close_summary_writer=True)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\contextlib.py", line 77, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\training\supervisor.py", line 964, in managed_session
    self.stop(close_summary_writer=close_summary_writer)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\training\supervisor.py", line 792, in stop
    stop_grace_period_secs=self._stop_grace_secs)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\training\coordinator.py", line 389, in join
    six.reraise(*self._exc_info_to_raise)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\six.py", line 693, in reraise
    raise value
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\training\queue_runner_impl.py", line 238, in _run
    enqueue_callable()
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1231, in _single_operation_run
    target_list_as_strings, status, None)
  File "C:\Users\ITML.LAB\AppData\Local\conda\conda\envs\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[32,1,4802,4802,3]
[[Node: batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_INT32, DT_FLOAT, DT_INT32, DT_FLOAT, DT_INT32, DT_INT64, DT_INT32, DT_INT64, DT_INT32, DT_INT64, DT_INT32, DT_BOOL, DT_INT32, DT_BOOL, DT_INT32, DT_FLOAT, DT_INT32, DT_STRING, DT_INT32, DT_STRING, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](batch/padding_fifo_queue, batch/n)]]
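
For scale: the batch that the input queue fails to dequeue at the very end, shape [32,1,4802,4802,3], is by itself larger than the entire ~6.3 GiB pool the BFC allocator reports above. A minimal back-of-the-envelope sketch, assuming float32 image tensors (the dtype is not shown in this log):

# Rough check of why the final dequeue cannot fit: one batch of 32
# unresized 4802x4802x3 images, assumed to be float32 (4 bytes each).
batch_shape = (32, 1, 4802, 4802, 3)   # shape from the ResourceExhaustedError
batch_bytes = 4                        # bytes per float32 element (assumption)
for dim in batch_shape:
    batch_bytes *= dim
allocator_limit = 6787871540           # "Limit" reported by the BFC allocator
print("batch needs      %.2f GiB" % (batch_bytes / 2.0 ** 30))      # ~8.25 GiB
print("allocator limit  %.2f GiB" % (allocator_limit / 2.0 ** 30))  # ~6.32 GiB

The usual remedies for this setup (the TensorFlow Object Detection API's train.py) are to lower batch_size in the train_config block of the pipeline .config file and to make sure the inputs are actually shrunk, either by the image_resizer in that config or by resizing the source images before packing them into TFRecords, so that neither the input queue nor the GPU has to hold full 4802x4802 frames.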