09:45:10-649698 INFO Loading "CodeBooga-34B-v0.1"
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 8/8 [03:38<00:00, 27.30s/it]
09:48:50-186812 INFO LOADER: "Transformers"
09:48:50-188812 INFO TRUNCATION LENGTH: 16384
09:48:50-190813 INFO INSTRUCTION TEMPLATE: "Alpaca"
09:48:50-191813 INFO Loaded the model in 219.54 seconds.
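
For context, a minimal sketch of what the "Transformers" loader is doing under the hood when it reassembles the 8 checkpoint shards shown above; the local path and dtype here are assumptions for illustration, not taken from the log:

import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "models/CodeBooga-34B-v0.1"  # assumed local path

start = time.time()
tokenizer = AutoTokenizer.from_pretrained(model_dir)
# from_pretrained reassembles sharded checkpoints (the 8/8 shards above)
# automatically; device_map="auto" spreads the weights across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,  # assumed; the log does not state the dtype
    device_map="auto",
)
print(f"Loaded the model in {time.time() - start:.2f} seconds.")
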
How can I help you today?
L:\OobMarch5Dev\test2\text-generation-webui\installer_files\env\Lib\site-packages\transformers\models\llama\modeling_llama.py:671: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
Output generated in 38.05 seconds (0.53 tokens/s, 20 tokens, context 74, seed 1006641046)
I am doing well, thank you for asking. What can I assist you with?
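
The UserWarning above is benign: this PyTorch build lacks the flash-attention kernel, so scaled_dot_product_attention silently falls back to the memory-efficient or math backend. A minimal check, with arbitrary tensor shapes chosen purely for illustration:

import torch
import torch.nn.functional as F

# Which SDPA backends this build allows:
print("flash SDP enabled:        ", torch.backends.cuda.flash_sdp_enabled())
print("mem-efficient SDP enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math SDP enabled:         ", torch.backends.cuda.math_sdp_enabled())

# SDPA still runs without flash attention, just on a slower backend.
q = k = v = torch.randn(1, 8, 16, 64)  # (batch, heads, seq_len, head_dim)
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 16, 64])
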