Untitled, pasted by a guest on May 28th, 2024
Error on frame 0 request:

Traceback (most recent call last):
  File "src\cython\vapoursynth.pyx", line 2941, in vapoursynth.publicFunction
  File "src\cython\vapoursynth.pyx", line 2943, in vapoursynth.publicFunction
  File "src\cython\vapoursynth.pyx", line 683, in vapoursynth.FuncData.__call__
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vspropainter\__init__.py", line 96, in inference
    output = ppaint.get_unmasked_frames(frames, batch_size, use_half, True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vspropainter\propainter_render.py", line 199, in get_unmasked_frames
    self.inference(batch_size, use_half)
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vspropainter\propainter_render.py", line 252, in inference
    self.pred_flows_bi, _ = self.fix_flow_complete.forward_bidirect_flow(gt_flows_bi, self.flow_masks)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vspropainter\model\recurrent_flow_completion.py", line 327, in forward_bidirect_flow
    pred_flows_forward, pred_edges_forward = self.forward(masked_flows_forward, masks_forward)
                                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vspropainter\model\recurrent_flow_completion.py", line 288, in forward
    feat_prop = self.feat_prop_module(feat_mid)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vspropainter\model\recurrent_flow_completion.py", line 101, in forward
    feat_prop = self.deform_align[module_name](feat_prop, cond)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vspropainter\model\recurrent_flow_completion.py", line 42, in forward
    return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torchvision\ops\deform_conv.py", line 92, in deform_conv2d
    return torch.ops.torchvision.deform_conv2d(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\_ops.py", line 692, in __call__
    return self._op(*args, **kwargs or {})
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'torchvision::deform_conv2d' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::deform_conv2d' is only available for these backends: [CPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\deform_conv2d_kernel.cpp:1162 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:153 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradCUDA: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradHIP: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradXLA: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradMPS: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradIPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradXPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradHPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradVE: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradLazy: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradMTIA: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradPrivateUse1: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradPrivateUse2: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradPrivateUse3: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradMeta: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradNestedTensor: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:296 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:382 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:249 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:710 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:161 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:165 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:157 [backend fallback]
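
Note on diagnosis: the kernel registration list above shows a native CPU kernel for deform_conv2d but no native CUDA kernel, which is the typical signature of a CPU-only torchvision wheel installed alongside a CUDA-enabled torch build. A minimal diagnostic sketch (an assumption about the cause, not confirmed by this log; the `diagnose` helper is hypothetical and degrades gracefully if the packages are not importable):

```python
def diagnose():
    """Report the torch/torchvision pairing relevant to this error.

    If torch.version.cuda is None, the installed torch itself is CPU-only.
    If torch reports a CUDA build but torchvision's compiled ops register
    no CUDA kernel (as in the error above), torchvision is likely the
    CPU-only wheel.
    """
    try:
        import torch
        import torchvision
    except ImportError as exc:  # fall back gracefully when not installed
        return f"not installed: {exc.name}"
    return "\n".join([
        f"torch {torch.__version__} (CUDA build: {torch.version.cuda})",
        f"torchvision {torchvision.__version__}",
        f"CUDA available at runtime: {torch.cuda.is_available()}",
    ])

print(diagnose())
```

If this confirms a mismatch, reinstalling torch and torchvision together from the same PyTorch wheel index (e.g. `pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu121`, with the cuXXX tag matching the CUDA build that `diagnose()` reports) usually restores the missing CUDA kernel; cu121 here is only an illustrative tag, not a known detail of this setup.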