Untitled
a guest, Jul 27th, 2023
/Users/mosherecanati/dev/local_star_coder/venv/lib/python3.9/site-packages/transformers/generation/utils.py:1369: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
/Users/mosherecanati/dev/local_star_coder/venv/lib/python3.9/site-packages/transformers/generation/utils.py:1468: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on mps, whereas the model is on cpu. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cpu') before running `.generate()`.
  warnings.warn(
Traceback (most recent call last):
  File "/Users/mosherecanati/dev/local_star_coder/main.py", line 62, in <module>
    connect_to_star_coder()
  File "/Users/mosherecanati/dev/local_star_coder/main.py", line 56, in connect_to_star_coder
    outputs = model.generate(inputs)
  File "/Users/mosherecanati/dev/local_star_coder/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/mosherecanati/dev/local_star_coder/venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 1538, in generate
    return self.greedy_search(
  File "/Users/mosherecanati/dev/local_star_coder/venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 2359, in greedy_search
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "/Users/mosherecanati/dev/local_star_coder/venv/lib/python3.9/site-packages/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py", line 752, in prepare_inputs_for_generation
    position_ids = attention_mask.long().cumsum(-1) - 1
RuntimeError: MPS does not support cumsum op with int64 input
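The root cause is the final line: the PyTorch MPS backend in the torch version used here cannot run cumsum on int64 tensors, and transformers builds position_ids from the int64 attention mask with exactly that op. The warning above the traceback also shows the inputs were on mps while the model was on cpu. A minimal sketch of the fix, using illustrative stand-in tensors rather than the paster's actual main.py:

```python
import torch

# Illustrative repro/fix, not the paster's actual main.py.
# transformers computes position_ids from the attention mask via an int64
# cumsum -- the exact op the MPS backend rejects in this traceback.

device = "cpu"  # the model is on CPU here; MPS cannot cumsum int64 tensors

input_ids = torch.tensor([[1, 2, 3, 4]])   # stand-in for tokenizer output
input_ids = input_ids.to(device)           # fix for the device-mismatch warning

attention_mask = torch.ones_like(input_ids)            # int64 by default
position_ids = attention_mask.long().cumsum(-1) - 1    # succeeds on CPU
print(position_ids.tolist())               # [[0, 1, 2, 3]]
```

Keeping both the model and input_ids on the same device (CPU here) makes the failing line succeed. If you want the model on MPS instead, setting the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 before launching Python routes unsupported ops to the CPU; newer torch releases also reportedly support int64 cumsum on MPS with recent macOS. Passing max_new_tokens explicitly to .generate() silences the first deprecation warning as well.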