============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.1.1, pluggy-1.4.0
rootdir: /home/sayak/diffusers
configfile: pyproject.toml
plugins: requests-mock-1.10.0, anyio-4.3.0, xdist-3.6.1, timeout-2.3.1
collected 1 item

tests/models/transformers/test_models_transformer_hunyuan_video.py F     [100%]

=================================== FAILURES ===================================
_ HunyuanVideoTransformer3DTests.test_torch_compile_recompilation_and_graph_break _

self = <tests.models.transformers.test_models_transformer_hunyuan_video.HunyuanVideoTransformer3DTests testMethod=test_torch_compile_recompilation_and_graph_break>

    @require_torch_gpu
    @require_torch_2
    @is_torch_compile
    @slow
    def test_torch_compile_recompilation_and_graph_break(self):
        torch._dynamo.reset()
        init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
        model = self.model_class(**init_dict).to(torch_device)
        model = torch.compile(model, fullgraph=True)
        with torch._dynamo.config.patch(error_on_recompile=True), torch.no_grad():
>           _ = model(**inputs_dict)

tests/models/transformers/test_models_transformer_hunyuan_video.py:109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py:1751: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
../.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py:1762: in _call_impl
    return forward_call(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = ()
kwargs = {'encoder_attention_mask': tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], device='cuda:0'), 'encoder_hidde...1, -0.0838],
          [-0.8856, 0.9521, -0.2237, ..., 1.4458, 1.4366, 1.3692]]]]],
       device='cuda:0'), ...}
prior = None, cleanups = [<function nothing at 0x75dbb318f5b0>]
prior_skip_guard_eval_unsafe = False, saved_dynamic_layer_stack_depth = 0
cleanup = <function nothing at 0x75dbb318f5b0>

    @functools.wraps(fn)
    def _fn(*args, **kwargs):
        prior = set_eval_frame(None)
        try:
            if is_fx_tracing():
                if config.error_on_nested_fx_trace:
                    raise RuntimeError(
                        "Detected that you are using FX to symbolically trace "
                        "a dynamo-optimized function. This is not supported at the moment."
                    )
                else:
                    return fn(*args, **kwargs)

            if is_jit_tracing():
                raise RuntimeError(
                    "Detected that you are using FX to torch.jit.trace "
                    "a dynamo-optimized function. This is not supported at the moment."
                )

            cleanups = [enter() for enter in self.enter_exit_hooks]
            prior_skip_guard_eval_unsafe = set_skip_guard_eval_unsafe(
                _is_skip_guard_eval_unsafe_stance()
            )
            # Ensure that if an assertion occurs after graph pushes
            # something onto the DynamicLayerStack then we pop it off (the
            # constructed graph code isn't guarded with try/finally).
            #
            # This used to be a context but putting a `with` here is a noticible
            # perf regression (#126293)
            saved_dynamic_layer_stack_depth = (
                torch._C._functorch.get_dynamic_layer_stack_depth()
            )
            _maybe_set_eval_frame(_callback_from_stance(callback))

            try:
                return fn(*args, **kwargs)
            except Unsupported as e:
                if config.verbose:
                    raise
>               raise e.with_traceback(None) from None
E               torch._dynamo.exc.Unsupported: Dynamic slicing with Tensor arguments
E                 Explanation: Creating slices with Tensor arguments is not supported. e.g. `l[:x]`, where `x` is a 1-element tensor.
E                 Hint: It may be possible to write Dynamo tracing rules for this code. Please report an issue to PyTorch if you encounter this graph break often and it is causing performance issues.
E
E                 Developer debug context: SliceVariable start: ConstantVariable(NoneType: None), stop: TensorVariable(), step: ConstantVariable(NoneType: None)
E
E
E               from user code:
E                  File "/home/sayak/diffusers/src/diffusers/models/transformers/transformer_hunyuan_video.py", line 1079, in forward
E                    attention_mask[i, : effective_sequence_length[i]] = True
E
E               Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"

../.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:659: Unsupported

=============================== warnings summary ===============================
../.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/pydantic/fields.py:826
  /home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/pydantic/fields.py:826: PydanticDeprecatedSince20: Using extra keyword arguments on `Field` is deprecated and will be removed. Use `json_schema_extra` instead. (Extra keys: 'new_param'). Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.9/migration/
    warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/models/transformers/test_models_transformer_hunyuan_video.py::HunyuanVideoTransformer3DTests::test_torch_compile_recompilation_and_graph_break
========================= 1 failed, 1 warning in 2.24s =========================
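
The graph break above comes from slicing a tensor with a tensor-valued bound (`attention_mask[i, : effective_sequence_length[i]] = True`), which Dynamo cannot trace under `fullgraph=True`. A minimal standalone sketch of a compile-friendly equivalent (my assumption, not the actual diffusers patch; `batch_size`, `seq_len`, and the sample lengths are made-up values) replaces the per-row dynamic slice with a broadcast `torch.arange` comparison:

```python
import torch

batch_size, seq_len = 2, 12
# Hypothetical per-sample valid lengths, standing in for effective_sequence_length.
effective_sequence_length = torch.tensor([5, 9])

# Compile-friendly mask: compare token positions against each sample's length.
# positions: (seq_len,), lengths: (batch_size, 1) -> broadcast to (batch_size, seq_len).
positions = torch.arange(seq_len)
attention_mask = positions < effective_sequence_length.unsqueeze(1)

# Reference result via the unsupported dynamic-slice pattern, for comparison only.
expected = torch.zeros(batch_size, seq_len, dtype=torch.bool)
for i in range(batch_size):
    expected[i, : effective_sequence_length[i]] = True

assert torch.equal(attention_mask, expected)
```

The comparison form has no data-dependent slice bounds, so Dynamo can capture it in a single graph; whether the real `forward` at line 1079 can adopt it verbatim depends on the surrounding code.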