2023-11-02 15:48:16 INFO:Loading TheBloke_openchat_3.5-AWQ...
2023-11-02 15:48:16 WARNING:Auto-assigning --gpu-memory 23 for your GPU to try to prevent out-of-memory errors. You can manually set other values.
Replacing layers...: 100%|█████████████████████████████████████████████████████████████| 32/32 [00:06<00:00, 5.05it/s]
2023-11-02 15:48:35 INFO:Loaded the model in 18.47 seconds.
Output generated in 26.13 seconds (7.62 tokens/s, 199 tokens, context 205, seed 733479273)
Traceback (most recent call last):
  File "E:\text-generation-webui\modules\callbacks.py", line 57, in gentask
    ret = self.mfunc(callback=_callback, *args, **self.kwargs)
  File "E:\text-generation-webui\modules\text_generation.py", line 352, in generate_with_callback
    shared.model.generate(**kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\awq\models\base.py", line 36, in generate
    return self.model.generate(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 1652, in generate
    return self.sample(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 2734, in sample
    outputs = self(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 1045, in forward
    outputs = self.model(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 932, in forward
    layer_outputs = decoder_layer(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 621, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\awq\modules\fused\attn.py", line 180, in forward
    scores = scores + attention_mask # (bs, n_local_heads, slen, cache_len + slen)
RuntimeError: The size of tensor a (809) must match the size of tensor b (405) at non-singleton dimension 3
Output generated in 2.17 seconds (0.46 tokens/s, 1 tokens, context 404, seed 1560756031)
2023-11-02 15:50:54 INFO:Loading TheBloke_openchat_3.5-AWQ...
Replacing layers...: 100%|█████████████████████████████████████████████████████████████| 32/32 [00:06<00:00, 5.02it/s]
2023-11-02 15:51:11 INFO:Loaded the model in 17.56 seconds.
Output generated in 25.10 seconds (7.93 tokens/s, 199 tokens, context 405, seed 1229328071)
Traceback (most recent call last):
  File "E:\text-generation-webui\modules\callbacks.py", line 57, in gentask
    ret = self.mfunc(callback=_callback, *args, **self.kwargs)
  File "E:\text-generation-webui\modules\text_generation.py", line 352, in generate_with_callback
    shared.model.generate(**kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\awq\models\base.py", line 36, in generate
    return self.model.generate(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 1652, in generate
    return self.sample(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 2734, in sample
    outputs = self(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 1045, in forward
    outputs = self.model(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 932, in forward
    layer_outputs = decoder_layer(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 621, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\awq\modules\fused\attn.py", line 180, in forward
    scores = scores + attention_mask # (bs, n_local_heads, slen, cache_len + slen)
RuntimeError: The size of tensor a (1215) must match the size of tensor b (611) at non-singleton dimension 3
Output generated in 1.12 seconds (0.89 tokens/s, 1 tokens, context 610, seed 1077658322)
Traceback (most recent call last):
  File "E:\text-generation-webui\modules\callbacks.py", line 57, in gentask
    ret = self.mfunc(callback=_callback, *args, **self.kwargs)
  File "E:\text-generation-webui\modules\text_generation.py", line 352, in generate_with_callback
    shared.model.generate(**kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\awq\models\base.py", line 36, in generate
    return self.model.generate(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 1652, in generate
    return self.sample(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 2734, in sample
    outputs = self(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 1045, in forward
    outputs = self.model(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 932, in forward
    layer_outputs = decoder_layer(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 621, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\text-generation-webui\installer_files\env\lib\site-packages\awq\modules\fused\attn.py", line 180, in forward
    scores = scores + attention_mask # (bs, n_local_heads, slen, cache_len + slen)
RuntimeError: The size of tensor a (1826) must match the size of tensor b (612) at non-singleton dimension 3
Output generated in 1.14 seconds (0.88 tokens/s, 1 tokens, context 611, seed 595247161)
2023-11-02 15:52:56 INFO:Loading TheBloke_openchat_3.5-AWQ...
Replacing layers...: 100%|█████████████████████████████████████████████████████████████| 32/32 [00:06<00:00, 4.97it/s]
2023-11-02 15:53:13 INFO:Loaded the model in 17.58 seconds.
Output generated in 26.07 seconds (7.63 tokens/s, 199 tokens, context 612, seed 1849376669)
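Note: all three tracebacks fail at the same place, awq\modules\fused\attn.py line 180, where the fused attention adds the attention mask to the score tensor. In each case the score tensor's last dimension (809, 1215, 1826) is larger than the mask's (405, 611, 612), which looks like the fused layer's rolling KV cache still holds positions from the previous generation while the mask only covers the current context; reloading the model (as above) clears it until the next regeneration. A minimal sketch reproducing just the broadcast failure, with shapes assumed from the first traceback (not the webui's actual code):

import torch

# Assumed shapes: scores span cache_len + slen positions, the mask only the
# current context of 405 positions, so elementwise addition cannot broadcast
# at the last dimension.
scores = torch.zeros(1, 32, 1, 809)         # (bs, n_local_heads, slen, cache_len + slen)
attention_mask = torch.zeros(1, 1, 1, 405)  # mask built for the shorter context

try:
    scores = scores + attention_mask        # same operation as attn.py line 180
except RuntimeError as e:
    print(e)  # "The size of tensor a (809) must match the size of tensor b (405) at non-singleton dimension 3"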