- c:\Repositories\DeepChatModels>python main.py
- Using logfile: C:\Users\felansu\AppData\Local\Temp\tmp7m300t4o
- Hi, I'm a DataHelper. For now, I support helping with the reddit dataset.
- At any prompt, press ENTER if you want the default value.
- Username (default='brandon'): felansu
- Hello, felansu, I've set your data root to C:\Repositories\DeepChatModels\reddit\
- Years to process (as CSVs) (default='2007,2008,2009'): 2007
- These are the files I found:
- ['C:\\Repositories\\DeepChatModels\\reddit\\raw_data\\2007\\RC_2007-10.csv',
- 'C:\\Repositories\\DeepChatModels\\reddit\\raw_data\\2007\\RC_2007-11.csv',
- 'C:\\Repositories\\DeepChatModels\\reddit\\raw_data\\2007\\RC_2007-12.csv']
- Maximum memory to use (in GiB) (default='2.00'):
- Loaded parameters from dicts.json.
- Robot: Please enter a directory for saving checkpoints:
- Human: C:\Repositories\DeepChatModels\
- Robot: Please enter full path to directory containing data:
- Human: C:\Repositories\DeepChatModels\
- ---------- Your non-default parameters: ----------
- model_params:
- ckpt_dir: C:\Repositories\DeepChatModels\
- dataset_params:
- data_dir: C:\Repositories\DeepChatModels\
- --------------------------------------------------
- Setting up data.Cornell dataset.
- Data directory C:\Repositories\DeepChatModels\ does not match dataset name cornell.
- Would you like me to change data_dir to C:\Repositories\DeepChatModels\cornell? [y/n]
- y
- Creating vocabulary C:\Repositories\DeepChatModels\cornell\vocab40000.to from data C:\Repositories\DeepChatModels\cornell/train_to.txt
- processing line 100000
- Creating vocabulary C:\Repositories\DeepChatModels\cornell\vocab40000.from from data C:\Repositories\DeepChatModels\cornell/train_from.txt
- processing line 100000
- Traceback (most recent call last):
- File "main.py", line 97, in <module>
- tf.app.run()
- File "C:\Users\felansu\Anaconda3\envs\tensorflow_p35_tf0121\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
- _sys.exit(main(_sys.argv[:1] + flags_passthrough))
- File "main.py", line 85, in main
- dataset = dataset_class(config['dataset_params'])
- File "c:\Repositories\DeepChatModels\data\dataset_wrappers.py", line 31, in __init__
- super(Cornell, self).__init__(dataset_params)
- File "c:\Repositories\DeepChatModels\data\_dataset.py", line 76, in __init__
- self.vocab_size, self.vocab_size)
- File "c:\Repositories\DeepChatModels\utils\io_utils.py", line 387, in prepare_data
- Popen(['mv', old_to_vocab, to_vocab_path], stdout=PIPE).communicate()
- File "C:\Users\felansu\Anaconda3\envs\tensorflow_p35_tf0121\lib\subprocess.py", line 676, in __init__
- restore_signals, start_new_session)
- File "C:\Users\felansu\Anaconda3\envs\tensorflow_p35_tf0121\lib\subprocess.py", line 955, in _execute_child
- startupinfo)
- FileNotFoundError: [WinError 2] The system cannot find the file specified
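The root cause is visible in the traceback frame for `io_utils.py` line 387: `Popen(['mv', old_to_vocab, to_vocab_path], ...)` shells out to the Unix `mv` binary, which does not exist on Windows, so `subprocess` raises `FileNotFoundError` (WinError 2) before the vocabulary file can be renamed. A minimal, cross-platform sketch of the fix, assuming the call only needs to move/rename the vocab file (the variable names mirror those in the traceback; they are not taken from the repo's actual code):

```python
import shutil

def move_vocab_file(old_to_vocab, to_vocab_path):
    """Cross-platform replacement for Popen(['mv', src, dst]).

    shutil.move uses os.rename when possible and falls back to
    copy-and-delete, so it works on Windows as well as Unix,
    avoiding the dependency on an external 'mv' binary.
    """
    shutil.move(old_to_vocab, to_vocab_path)
```

Swapping the `Popen` call in `utils/io_utils.py` for `shutil.move(old_to_vocab, to_vocab_path)` (and dropping the now-unused `Popen`/`PIPE` imports, if nothing else uses them) should let `prepare_data` complete on Windows.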