============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.1.1, pluggy-1.4.0
rootdir: /home/sayak/diffusers
configfile: pyproject.toml
plugins: requests-mock-1.10.0, anyio-4.3.0, xdist-3.6.1, timeout-2.3.1
collected 1 item

tests/models/transformers/test_models_transformer_hunyuan_video.py F     [100%]

=================================== FAILURES ===================================
_ HunyuanVideoTransformer3DTests.test_torch_compile_recompilation_and_graph_break _

self = <tests.models.transformers.test_models_transformer_hunyuan_video.HunyuanVideoTransformer3DTests testMethod=test_torch_compile_recompilation_and_graph_break>

    @require_torch_gpu
    @require_torch_2
    @is_torch_compile
    @slow
    def test_torch_compile_recompilation_and_graph_break(self):
        torch._dynamo.reset()
        init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()

        model = self.model_class(**init_dict).to(torch_device)
        model = torch.compile(model, fullgraph=True)

        with torch._dynamo.config.patch(error_on_recompile=True), torch.no_grad():
>           _ = model(**inputs_dict)

tests/models/transformers/test_models_transformer_hunyuan_video.py:109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py:1751: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
../.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py:1762: in _call_impl
    return forward_call(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = ()
kwargs = {'encoder_attention_mask': tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], device='cuda:0'), 'encoder_hidde...1, -0.0838],
          [-0.8856, 0.9521, -0.2237, ..., 1.4458, 1.4366, 1.3692]]]]],
       device='cuda:0'), ...}
prior = None, cleanups = [<function nothing at 0x75dbb318f5b0>]
prior_skip_guard_eval_unsafe = False, saved_dynamic_layer_stack_depth = 0
cleanup = <function nothing at 0x75dbb318f5b0>
    @functools.wraps(fn)
    def _fn(*args, **kwargs):
        prior = set_eval_frame(None)
        try:
            if is_fx_tracing():
                if config.error_on_nested_fx_trace:
                    raise RuntimeError(
                        "Detected that you are using FX to symbolically trace "
                        "a dynamo-optimized function. This is not supported at the moment."
                    )
                else:
                    return fn(*args, **kwargs)

            if is_jit_tracing():
                raise RuntimeError(
                    "Detected that you are using FX to torch.jit.trace "
                    "a dynamo-optimized function. This is not supported at the moment."
                )

            cleanups = [enter() for enter in self.enter_exit_hooks]
            prior_skip_guard_eval_unsafe = set_skip_guard_eval_unsafe(
                _is_skip_guard_eval_unsafe_stance()
            )

            # Ensure that if an assertion occurs after graph pushes
            # something onto the DynamicLayerStack then we pop it off (the
            # constructed graph code isn't guarded with try/finally).
            #
            # This used to be a context but putting a `with` here is a noticible
            # perf regression (#126293)
            saved_dynamic_layer_stack_depth = (
                torch._C._functorch.get_dynamic_layer_stack_depth()
            )
            _maybe_set_eval_frame(_callback_from_stance(callback))

            try:
                return fn(*args, **kwargs)
            except Unsupported as e:
                if config.verbose:
                    raise
>               raise e.with_traceback(None) from None
E               torch._dynamo.exc.Unsupported: Dynamic slicing with Tensor arguments
E               Explanation: Creating slices with Tensor arguments is not supported. e.g. `l[:x]`, where `x` is a 1-element tensor.
E               Hint: It may be possible to write Dynamo tracing rules for this code. Please report an issue to PyTorch if you encounter this graph break often and it is causing performance issues.
E
E               Developer debug context: SliceVariable start: ConstantVariable(NoneType: None), stop: TensorVariable(), step: ConstantVariable(NoneType: None)
E
E
E               from user code:
E                  File "/home/sayak/diffusers/src/diffusers/models/transformers/transformer_hunyuan_video.py", line 1079, in forward
E                    attention_mask[i, : effective_sequence_length[i]] = True
E
E               Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"

../.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:659: Unsupported
=============================== warnings summary ===============================
../.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/pydantic/fields.py:826
  /home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/pydantic/fields.py:826: PydanticDeprecatedSince20: Using extra keyword arguments on `Field` is deprecated and will be removed. Use `json_schema_extra` instead. (Extra keys: 'new_param'). Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.9/migration/
    warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/models/transformers/test_models_transformer_hunyuan_video.py::HunyuanVideoTransformer3DTests::test_torch_compile_recompilation_and_graph_break
========================= 1 failed, 1 warning in 2.24s =========================
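
Note (appended, not part of the log): the Unsupported error above is a generic Dynamo limitation, not something specific to HunyuanVideo. A minimal sketch that should reproduce the same graph break under fullgraph=True on a recent PyTorch 2.x build; the function and argument names here are illustrative only:

    import torch

    @torch.compile(fullgraph=True)
    def f(x: torch.Tensor, length: torch.Tensor) -> torch.Tensor:
        # Using a tensor as a slice bound forces Dynamo to build a slice from
        # a TensorVariable -- the "Dynamic slicing with Tensor arguments" case.
        return x[: length[0]]

    f(torch.randn(8), torch.tensor([4]))  # expected: torch._dynamo.exc.Unsupported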
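
The user-code line at fault builds the mask row by row with a data-dependent slice (`attention_mask[i, : effective_sequence_length[i]] = True`). A compile-friendly equivalent, sketched under the assumption that `effective_sequence_length` is a 1-D tensor of per-sample lengths; this is an illustration of the usual lengths-to-mask broadcasting trick, not the actual diffusers fix:

    import torch

    def lengths_to_mask(effective_sequence_length: torch.Tensor, max_len: int) -> torch.Tensor:
        # Broadcast a [1, max_len] position ramp against [batch, 1] lengths;
        # the comparison yields the same [batch, max_len] boolean mask with no
        # Python loop or tensor-valued slice, so Dynamo can trace it whole.
        positions = torch.arange(max_len, device=effective_sequence_length.device)
        return positions.unsqueeze(0) < effective_sequence_length.unsqueeze(1)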