Anston06

Save GPT Error

Jun 10th, 2023
  1. "C:\Users\Anston Sorensen\.env\Scripts\python.exe" "C:\Users\Anston Sorensen\PycharmProjects\pythonProject\main.py"
  2. Traceback (most recent call last):
  3. File "C:\Users\Anston Sorensen\.env\Lib\site-packages\transformers\modeling_utils.py", line 442, in load_state_dict
  4. return torch.load(checkpoint_file, map_location="cpu")
  5. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  6. File "C:\Users\Anston Sorensen\.env\Lib\site-packages\torch\serialization.py", line 809, in load
  7. return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  8. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  9. File "C:\Users\Anston Sorensen\.env\Lib\site-packages\torch\serialization.py", line 1172, in _load
  10. result = unpickler.load()
  11. ^^^^^^^^^^^^^^^^
  12. File "C:\Users\Anston Sorensen\.env\Lib\site-packages\torch\serialization.py", line 1142, in persistent_load
  13. typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  14. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  15. File "C:\Users\Anston Sorensen\.env\Lib\site-packages\torch\serialization.py", line 1112, in load_tensor
  16. storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage)._typed_storage()._untyped_storage
  17. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  18. RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 268435456 bytes.
  19.  
  20. During handling of the above exception, another exception occurred:
  21.  
  22. Traceback (most recent call last):
  23. File "C:\Users\Anston Sorensen\.env\Lib\site-packages\transformers\modeling_utils.py", line 446, in load_state_dict
  24. if f.read(7) == "version":
  25. ^^^^^^^^^
  26. File "C:\Users\Anston Sorensen\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 23, in decode
  27. return codecs.charmap_decode(input,self.errors,decoding_table)[0]
  28. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  29. UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1577: character maps to <undefined>
  30.  
  31. During handling of the above exception, another exception occurred:
  32.  
  33. Traceback (most recent call last):
  34. File "C:\Users\Anston Sorensen\PycharmProjects\pythonProject\main.py", line 121, in <module>
  35. save_gpt()
  36. File "C:\Users\Anston Sorensen\PycharmProjects\pythonProject\main.py", line 66, in save_gpt
  37. pt_model = GPTJForQuestionAnswering.from_pretrained('EleutherAI/gpt-j-6b')
  38. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  39. File "C:\Users\Anston Sorensen\.env\Lib\site-packages\transformers\modeling_utils.py", line 2560, in from_pretrained
  40. state_dict = load_state_dict(resolved_archive_file)
  41. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  42. File "C:\Users\Anston Sorensen\.env\Lib\site-packages\transformers\modeling_utils.py", line 458, in load_state_dict
  43. raise OSError(
  44. OSError: Unable to load weights from pytorch checkpoint file for 'C:\Users\Anston Sorensen/.cache\huggingface\hub\models--EleutherAI--gpt-j-6b\snapshots\f98c709453c9402b1309b032f40df1c10ad481a2\pytorch_model.bin' at 'C:\Users\Anston Sorensen/.cache\huggingface\hub\models--EleutherAI--gpt-j-6b\snapshots\f98c709453c9402b1309b032f40df1c10ad481a2\pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
  45.  
  46. Process finished with exit code 1
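
Note: the root cause is the first RuntimeError, not the later UnicodeDecodeError or OSError (those come from transformers' fallback attempt to reread the file as text). torch.load runs out of host RAM while materializing the full-precision GPT-J-6B checkpoint, which needs roughly 24 GB for fp32 weights alone. A minimal sketch of a lower-memory load is below; it is not the original save_gpt() from main.py, the "float16" revision is the half-precision branch published on the Hugging Face Hub (assumed available here), low_cpu_mem_usage=True assumes the accelerate package is installed, and the output directory name is illustrative.

import torch
from transformers import GPTJForQuestionAnswering

# Load the half-precision checkpoint and avoid building a second full copy of
# the state dict in RAM while the model is instantiated.
pt_model = GPTJForQuestionAnswering.from_pretrained(
    "EleutherAI/gpt-j-6b",
    revision="float16",         # fp16 checkpoint branch on the Hub (assumed present)
    torch_dtype=torch.float16,  # keep weights in fp16 (~12 GB) instead of fp32 (~24 GB)
    low_cpu_mem_usage=True,     # requires `pip install accelerate`
)

# Hypothetical output path, standing in for whatever save_gpt() writes to.
pt_model.save_pretrained("gpt-j-6b-qa")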