- Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- Version: 1.6.0
- Commit hash: <none>
- [Auto-Photoshop-SD] Attempting auto-update...
- [Auto-Photoshop-SD] switch branch to extension branch.
- checkout_result: Your branch is up to date with 'origin/master'.
- [Auto-Photoshop-SD] Current Branch.
- branch_result: * master
- [Auto-Photoshop-SD] Fetch upstream.
- fetch_result:
- [Auto-Photoshop-SD] Pull upstream.
- pull_result: Already up to date.
- Checking ReActor requirements... Ok
- Installing requirements for Shift Attention
- Launching Web UI with arguments: --xformers --api --autolaunch --skip-python-version-check --no-half
- python_server_full_path: E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\Auto-Photoshop-StableDiffusion-Plugin\server/python_server
- [-] ADetailer initialized. version: 23.9.3, num models: 9
- [AddNet] Updating model hashes...
- 0it [00:00, ?it/s]
- [AddNet] Updating model hashes...
- 0it [00:00, ?it/s]
- 2023-09-30 09:01:55,401 - ControlNet - INFO - ControlNet v1.1.410
- ControlNet preprocessor location: E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-controlnet\annotator\downloads
- 2023-09-30 09:01:55,486 - ControlNet - INFO - ControlNet v1.1.410
- sd-webui-prompt-all-in-one background API service started successfully.
- 09:01:55 - ReActor - STATUS - Running v0.4.2-b3
- Loading weights [463d6a9fe8] from E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\models\Stable-diffusion\absolutereality_v181.safetensors
- Creating model from config: E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\configs\v1-inference.yaml
- *Deforum ControlNet support: enabled*
- Running on local URL: http://127.0.0.1:7860
- To create a public link, set `share=True` in `launch()`.
- Startup time: 19.1s (prepare environment: 7.6s, import torch: 2.6s, import gradio: 0.7s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 0.6s, load scripts: 4.9s, create ui: 1.4s, gradio launch: 0.3s).
- Applying attention optimization: xformers... done.
- Model loaded in 6.3s (load weights from disk: 0.2s, create model: 0.7s, apply weights to model: 3.2s, apply float(): 1.2s, calculate empty prompt: 0.9s).
- 2023-09-30 09:02:44,727 - AnimateDiff - STATUS - AnimateDiff process start.
- 2023-09-30 09:02:44,728 - AnimateDiff - STATUS - You are using mm_sd_14.ckpt, which has been tested and supported.
- 2023-09-30 09:02:44,728 - AnimateDiff - STATUS - Loading motion module mm_sd_v14.ckpt from E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\model\mm_sd_v14.ckpt
- 2023-09-30 09:02:50,933 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
- 2023-09-30 09:02:51,840 - AnimateDiff - STATUS - Hacking GroupNorm32 forward function.
- 2023-09-30 09:02:51,840 - AnimateDiff - STATUS - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet input blocks.
- 2023-09-30 09:02:51,840 - AnimateDiff - STATUS - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet output blocks.
- 2023-09-30 09:02:51,840 - AnimateDiff - STATUS - Setting DDIM alpha.
- 2023-09-30 09:02:51,844 - AnimateDiff - STATUS - Injection finished.
- 2023-09-30 09:02:51,844 - AnimateDiff - STATUS - Hacking ControlNet.
- STATUS:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 16 images in a total of 1 batches.
- 0%| | 0/20 [00:15<?, ?it/s]
- *** Error completing request
- *** Arguments: ('task(9z2c137dlqp6zlj)', 'A handsome guy,', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000025828CC4EE0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 
'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'x264', 'blend', 10, 0, 0, False, True, True, True, 'intermediate', 'animation', False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000025828CC7A30>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002583EC90310>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002583EC93D60>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002583EC91BA0>, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, 
False, 0, False, None, None, False, None, None, False, None, None, False, 50, 10.0, 30.0, True, 0.0, 'Lanczos', 1, 0, 0, 75, 0.0001, 0.0, False, True, False, False) {}
- Traceback (most recent call last):
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\call_queue.py", line 57, in f
- res = list(func(*args, **kwargs))
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\call_queue.py", line 36, in f
- res = func(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\txt2img.py", line 55, in txt2img
- processed = processing.process_images(p)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\processing.py", line 732, in process_images
- res = process_images_inner(p)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 63, in hacked_processing_process_images_hijack
- return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\processing.py", line 867, in process_images_inner
- samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\processing.py", line 1140, in sample
- samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 235, in sample
- samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_common.py", line 261, in launch_sampling
- return func()
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
- samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
- return func(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
- denoised = model(x, sigmas[i] * s_in, **extra_args)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
- x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
- eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
- return self.inner_model.apply_model(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_hijack_utils.py", line 17, in <lambda>
- setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_hijack_utils.py", line 28, in __call__
- return self.__orig_func(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
- x_recon = self.model(x_noisy, t, **cond)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
- out = self.diffusion_model(x, t, context=cc)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\modules\sd_unet.py", line 91, in UNetModel_forward
- return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
- h = module(h, emb, context)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 86, in mm_tes_forward
- x = layer(x, context)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 86, in forward
- return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 150, in forward
- hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 212, in forward
- hidden_states = attention_block(
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 567, in forward
- hidden_states = self._memory_efficient_attention(query, key, value, attention_mask, optimizer_name)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-animatediff\motion_module.py", line 467, in _memory_efficient_attention
- hidden_states = xformers.ops.memory_efficient_attention(
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 223, in memory_efficient_attention
- return _memory_efficient_attention(
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 321, in _memory_efficient_attention
- return _memory_efficient_attention_forward(
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 341, in _memory_efficient_attention_forward
- out, *_ = op.apply(inp, needs_gradient=False)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 194, in apply
- return cls.apply_bmhk(inp, needs_gradient=needs_gradient)
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 243, in apply_bmhk
- out, lse, rng_seed, rng_offset = cls.OPERATOR(
- File "E:\StabilityMatrix\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\_ops.py", line 502, in __call__
- return self._op(*args, **kwargs or {})
- RuntimeError: CUDA error: invalid configuration argument
- Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
- ---
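Editor's note on the failure above: the traceback ends in `xformers.ops.memory_efficient_attention` raising `RuntimeError: CUDA error: invalid configuration argument` from inside AnimateDiff's motion module (`motion_module.py`, `_memory_efficient_attention`). A commonly reported workaround is to route the motion module's attention through PyTorch's built-in fused scaled-dot-product attention instead of xformers (in recent sd-webui-animatediff versions this is the "Optimize attention layers with sdp" option; alternatively, launch without `--xformers`). The sketch below is not code from the log; it is a minimal illustration, assuming torch >= 2.0, of how an xformers-style call maps onto `torch.nn.functional.scaled_dot_product_attention`. The function name `attention_fallback` and the tensor shapes are hypothetical.

```python
# Minimal sketch: PyTorch's fused SDP attention as a stand-in for
# xformers.ops.memory_efficient_attention, which failed above with
# "CUDA error: invalid configuration argument". Runs on CPU so the
# sketch works anywhere; assumes torch >= 2.0.
import torch
import torch.nn.functional as F

def attention_fallback(query, key, value, attn_mask=None):
    """Accepts xformers-style (batch, seq, heads, head_dim) tensors and
    computes attention with torch's built-in fused kernel."""
    # F.scaled_dot_product_attention expects (batch, heads, seq, head_dim),
    # so swap the seq and heads axes on the way in and out.
    q, k, v = (t.transpose(1, 2) for t in (query, key, value))
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
    return out.transpose(1, 2)

# Smoke test with small hypothetical dimensions.
q = torch.randn(2, 16, 8, 40)   # (batch, seq_len, num_heads, head_dim)
k = torch.randn(2, 16, 8, 40)
v = torch.randn(2, 16, 8, 40)
out = attention_fallback(q, k, v)
print(out.shape)
```

Switching the attention backend only sidesteps the failing xformers CUTLASS kernel; if the error persists without xformers, the usual next steps are updating the extension and checking that the xformers build matches the installed torch/CUDA versions.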