Found GPU: GeForce GTX 970
Model Initialized
Found GPU: GeForce GTX 970
Model Initialized
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\Users\Andrew\Miniconda3\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\Andrew\Miniconda3\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\Andrew\Miniconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Andrew\PycharmProjects\Seq2Seq\main.py", line 35, in <module>
    word2vec_model.model(mode=1)
  File "C:\Users\Andrew\PycharmProjects\Seq2Seq\neural_net\word2vec\model_pytorch.py", line 48, in model
    for i_batch, sample_batched in enumerate(data_loader):
  File "C:\Users\Andrew\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
    return _DataLoaderIter(self)
  File "C:\Users\Andrew\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
    w.start()
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Traceback (most recent call last):
  File "C:/Users/Andrew/PycharmProjects/Seq2Seq/main.py", line 35, in <module>
    word2vec_model.model(mode=1)
  File "C:\Users\Andrew\PycharmProjects\Seq2Seq\neural_net\word2vec\model_pytorch.py", line 48, in model
    for i_batch, sample_batched in enumerate(data_loader):
  File "C:\Users\Andrew\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
    return _DataLoaderIter(self)
  File "C:\Users\Andrew\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
    w.start()
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Andrew\Miniconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Process finished with exit code 1
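What happened: on Windows, multiprocessing uses the "spawn" start method, so when the DataLoader (with num_workers > 0) starts a worker, the child process re-imports main.py from the top. Because word2vec_model.model(mode=1) runs at module level, the child tries to spawn workers of its own before bootstrapping finishes, raising the RuntimeError; the BrokenPipeError in the parent is a side effect of the worker dying. The fix is the guard the error message itself prescribes. A minimal, self-contained sketch of the idiom (the square/main names are illustrative, not the original Seq2Seq code):

```python
import multiprocessing as mp

def square(x):
    # Work executed in a child process.
    return x * x

def main():
    # Child processes under the "spawn" start method re-import this
    # module, so process creation must happen only inside main(),
    # which is reached solely via the __main__ guard below.
    with mp.get_context("spawn").Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))

if __name__ == "__main__":
    # Without this guard, each spawned child would re-execute the
    # top-level code and fail with the RuntimeError shown above.
    main()
```

Applied to the traceback's main.py, this means moving the word2vec_model.model(mode=1) call (and any other top-level work) under an `if __name__ == '__main__':` block. Alternatively, passing num_workers=0 to the DataLoader avoids spawning worker processes entirely, at the cost of loading data in the main process.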