(viser) Connection opened (0, 1 total), 4 persistent messages
chunks:
!000, >000, >001, >002, >003, >004, >005, >006, >007, >008, >009, >010, >011, >012, >013, >014, >015, >016, >017, >018, >019
Two passes (first) - chunking with `gt-nearest` strategy: total 1 forward(s) ...
0%|          | 0/1 [00:00<?, ?it/s]C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: Memory efficient kernel not used because: (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:776.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: Memory Efficient attention has been runtime disabled. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen/native/transformers/sdp_utils_cpp.h:551.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: Flash attention kernel not used because: (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:778.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:598.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: CuDNN attention kernel not used because: (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:780.)
  out = F.scaled_dot_product_attention(q, k, v)
C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py:71: UserWarning: CuDNN attention has been runtime disabled. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:528.)
  out = F.scaled_dot_product_attention(q, k, v)
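
Taken together, these warnings mean every fused scaled_dot_product_attention backend is being rejected before the sampler can run: memory-efficient and cuDNN attention have been runtime disabled, and this Torch build was compiled without flash attention. A quick way to confirm which backends are still enabled is to query the runtime toggles directly (a diagnostic sketch, assuming PyTorch 2.4+, which the venv paths below suggest; it is not part of the original log):

import torch

# Each toggle gates one dispatch path of F.scaled_dot_product_attention.
print("flash:        ", torch.backends.cuda.flash_sdp_enabled())          # False here: not compiled in
print("mem_efficient:", torch.backends.cuda.mem_efficient_sdp_enabled())  # False here: runtime disabled
print("cudnn:        ", torch.backends.cuda.cudnn_sdp_enabled())          # False here: runtime disabled
print("math:         ", torch.backends.cuda.math_sdp_enabled())           # if this is also False, no kernel remains
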
Exception in thread Thread-8 (worker):
Traceback (most recent call last):
  File "C:\Python311\Lib\threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "C:\Python311\Lib\threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "C:\ai\Stable-Virtual-Camera\demo_gr.py", line 663, in worker
    for i, video_path in enumerate(video_path_generator):
  File "C:\ai\Stable-Virtual-Camera\seva\eval.py", line 1783, in run_one_scene
    samples = do_sample(
              ^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\eval.py", line 1314, in do_sample
    samples_z = sampler(
                ^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\eval.py", line 1085, in __call__
    x = self.sampler_step(
        ^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\sampling.py", line 364, in sampler_step
    denoised = denoiser(*self.guider.prepare_inputs(x, sigma_hat, cond, uc))
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\eval.py", line 1315, in <lambda>
    lambda input, sigma, c: denoiser(
                            ^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\sampling.py", line 150, in __call__
    network(input * c_in, c_noise, cond, **additional_model_inputs) * c_out
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\model.py", line 228, in forward
    return self.module(
           ^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\model.py", line 191, in forward
    h = module(
        ^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\modules\layers.py", line 78, in forward
    x = layer(x, context, num_frames)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py", line 238, in forward
    x = block(x, context=context)
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py", line 107, in forward
    x = self.attn1(self.norm1(x)) + x
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ai\Stable-Virtual-Camera\seva\modules\transformer.py", line 71, in forward
    out = F.scaled_dot_product_attention(q, k, v)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: No available kernel. Aborting execution.
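
The RuntimeError follows directly from the warnings above: with flash, memory-efficient, and cuDNN attention all unavailable, and the math fallback evidently disabled as well, scaled_dot_product_attention has no kernel left to dispatch to. One possible workaround (a hedged sketch, not code from the Stable-Virtual-Camera repo: it wraps the failing call from seva/modules/transformer.py:71 and forces the reference math backend, which is slow but implemented for every dtype and device):

import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

def attention_math_fallback(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # attention_math_fallback is a hypothetical helper name; SDPBackend.MATH is
    # the composite reference kernel, so dispatch inside this context cannot fail.
    with sdpa_kernel(SDPBackend.MATH):
        return F.scaled_dot_product_attention(q, k, v)

The other direction is to re-enable the runtime-disabled backends globally before launching demo_gr.py, e.g. torch.backends.cuda.enable_math_sdp(True) and torch.backends.cuda.enable_mem_efficient_sdp(True), and then let SDPA pick whichever kernel it can.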