delagarde

stable-virtual-camera errors

Mar 18th, 2025
(viser) Connection opened (0, 1 total), 4 persistent messages

chunks:
!000, >000, >001, >002, >003, >004, >005, >006, >007, >008, >009, >010, >011, >012, >013, >014, >015, >016, >017, >018, >019
Two passes (first) - chunking with `gt-nearest` strategy: total 1 forward(s) ...
0%| | 0/1 [00:00<?, ?it/s]C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: Memory efficient kernel not used because: (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:776.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: Memory Efficient attention has been runtime disabled. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen/native/transformers/sdp_utils_cpp.h:551.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: Flash attention kernel not used because: (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:778.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:598.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: CuDNN attention kernel not used because: (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:780.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: CuDNN attention has been runtime disabled. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:528.)
  out = F.scaled_dot_product_attention(q, k, v)
Exception in thread Thread-8 (worker):
Traceback (most recent call last):
  File "C:\Python311\Lib\threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "C:\Python311\Lib\threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "C:\ai\Stable-Virtual-Camera\demo_gr.py", line 663, in worker
    for i, video_path in enumerate(video_path_generator):
  File "C:\ai\Stable-Virtual-Camera\seva\eval.py", line 1783, in run_one_scene
    samples = do_sample(
              ^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\eval.py", line 1314, in do_sample
    samples_z = sampler(
                ^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\eval.py", line 1085, in __call__
    x = self.sampler_step(
        ^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\sampling.py", line 364, in sampler_step
    denoised = denoiser(*self.guider.prepare_inputs(x, sigma_hat, cond, uc))
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\eval.py", line 1315, in <lambda>
    lambda input, sigma, c: denoiser(
                            ^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\sampling.py", line 150, in __call__
    network(input * c_in, c_noise, cond, **additional_model_inputs) * c_out
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\model.py", line 228, in forward
    return self.module(
           ^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\model.py", line 191, in forward
    h = module(
        ^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\modules\layers.py", line 78, in forward
    x = layer(x, context, num_frames)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py", line 238, in forward
    x = block(x, context=context)
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py", line 107, in forward
    x = self.attn1(self.norm1(x)) + x
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py", line 71, in forward
    out = F.scaled_dot_product_attention(q, k, v)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: No available kernel. Aborting execution.
Tags: error