a guest
Sep 29th, 2023
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: 1.6.0
Commit hash: <none>
[Auto-Photoshop-SD] Attempting auto-update...
[Auto-Photoshop-SD] switch branch to extension branch.
checkout_result: Your branch is up to date with 'origin/master'.

[Auto-Photoshop-SD] Current Branch.
branch_result: * master

[Auto-Photoshop-SD] Fetch upstream.
fetch_result:
[Auto-Photoshop-SD] Pull upstream.
pull_result: Already up to date.
Checking ReActor requirements... Ok
Installing requirements for Shift Attention
Launching Web UI with arguments: --xformers --api --autolaunch --skip-python-version-check --no-half
python_server_full_path: E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\Auto-Photoshop-StableDiffusion-Plugin\server/python_server
[-] ADetailer initialized. version: 23.9.3, num models: 9
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
2023-09-30 09:01:55,401 - ControlNet - INFO - ControlNet v1.1.410
ControlNet preprocessor location: E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-controlnet\annotator\downloads
2023-09-30 09:01:55,486 - ControlNet - INFO - ControlNet v1.1.410
sd-webui-prompt-all-in-one background API service started successfully.
09:01:55 - ReActor - STATUS - Running v0.4.2-b3
Loading weights [463d6a9fe8] from E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\models\Stable-diffusion\absolutereality_v181.safetensors
Creating model from config: E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\configs\v1-inference.yaml
*Deforum ControlNet support: enabled*
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 19.1s (prepare environment: 7.6s, import torch: 2.6s, import gradio: 0.7s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 0.6s, load scripts: 4.9s, create ui: 1.4s, gradio launch: 0.3s).
Applying attention optimization: xformers... done.
Model loaded in 6.3s (load weights from disk: 0.2s, create model: 0.7s, apply weights to model: 3.2s, apply float(): 1.2s, calculate empty prompt: 0.9s).
2023-09-30 09:02:44,727 - AnimateDiff - STATUS - AnimateDiff process start.
2023-09-30 09:02:44,728 - AnimateDiff - STATUS - You are using mm_sd_14.ckpt, which has been tested and supported.
2023-09-30 09:02:44,728 - AnimateDiff - STATUS - Loading motion module mm_sd_v14.ckpt from E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\model\mm_sd_v14.ckpt
2023-09-30 09:02:50,933 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-09-30 09:02:51,840 - AnimateDiff - STATUS - Hacking GroupNorm32 forward function.
2023-09-30 09:02:51,840 - AnimateDiff - STATUS - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet input blocks.
2023-09-30 09:02:51,840 - AnimateDiff - STATUS - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet output blocks.
2023-09-30 09:02:51,840 - AnimateDiff - STATUS - Setting DDIM alpha.
2023-09-30 09:02:51,844 - AnimateDiff - STATUS - Injection finished.
2023-09-30 09:02:51,844 - AnimateDiff - STATUS - Hacking ControlNet.
STATUS:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 16 images in a total of 1 batches.
0%| | 0/20 [00:15<?, ?it/s]
*** Error completing request
*** Arguments: ('task(9z2c137dlqp6zlj)', 'A handsome guy,', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000025828CC4EE0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 
'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'x264', 'blend', 10, 0, 0, False, True, True, True, 'intermediate', 'animation', False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000025828CC7A30>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002583EC90310>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002583EC93D60>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002583EC91BA0>, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, 
False, 0, False, None, None, False, None, None, False, None, None, False, 50, 10.0, 30.0, True, 0.0, 'Lanczos', 1, 0, 0, 75, 0.0001, 0.0, False, True, False, False) {}
Traceback (most recent call last):
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 63, in hacked_processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\processing.py", line 1140, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 235, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_unet.py", line 91, in UNetModel_forward
    return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 86, in mm_tes_forward
    x = layer(x, context)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 86, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 150, in forward
    hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 212, in forward
    hidden_states = attention_block(
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 567, in forward
    hidden_states = self._memory_efficient_attention(query, key, value, attention_mask, optimizer_name)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 467, in _memory_efficient_attention
    hidden_states = xformers.ops.memory_efficient_attention(
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 321, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 341, in _memory_efficient_attention_forward
    out, *_ = op.apply(inp, needs_gradient=False)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 194, in apply
    return cls.apply_bmhk(inp, needs_gradient=needs_gradient)
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 243, in apply_bmhk
    out, lse, rng_seed, rng_offset = cls.OPERATOR(
  File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\_ops.py", line 502, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: invalid configuration argument
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


---
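The traceback above ends inside `xformers.ops.memory_efficient_attention`, reached from AnimateDiff's temporal attention, which folds every spatial latent position into the batch axis before attending across frames. One plausible reading of `CUDA error: invalid configuration argument` (an assumption based on SD1.5 defaults, not confirmed by this log) is a kernel-launch limit: CUDA caps `gridDim.y`/`gridDim.z` at 65535, and at 512x512 with cond/uncond batching and 8 attention heads the effective batch-times-heads count lands exactly one past that cap. The arithmetic can be sketched in plain Python; the helper name and the head/downscale values are illustrative, not taken from the extension's code:

```python
# CUDA caps gridDim.y and gridDim.z at 65535; a kernel asked to launch a
# larger grid in those dimensions fails with "invalid configuration argument".
CUDA_MAX_GRID_DIM = 65535

def temporal_attention_batch(width, height, cond_uncond=2, num_heads=8, downscale=8):
    """Hypothetical estimate of the batch*heads count AnimateDiff-style
    temporal attention presents to the attention kernel: every spatial
    position at the first UNet level (latent = pixels/downscale) becomes
    a batch entry, doubled for cond/uncond, multiplied by head count."""
    positions = (width // downscale) * (height // downscale)
    return cond_uncond * positions * num_heads

b = temporal_attention_batch(512, 512)
print(b, b > CUDA_MAX_GRID_DIM)  # 65536 True: one past the limit at 512x512
print(temporal_attention_batch(448, 448) > CUDA_MAX_GRID_DIM)  # False
```

If this reading is right, it would explain why the crash is resolution-dependent: slightly smaller canvases stay under the cap. Working around it without code changes would mean generating below 512x512, or switching the WebUI's attention optimizer away from xformers (e.g. launching with `--opt-sdp-attention` instead of `--xformers`) so a different kernel handles the call.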