- venv "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
- Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
- Version: v1.10.1-amd-21-g3cf53018
- Commit hash: 3cf530186f76d0005e4c791cca9a0d8f4aa013c4
- WARNING: you should not skip torch test unless you want CPU to work.
- Installing requirements
- C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
- warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
- no module 'xformers'. Processing without...
- no module 'xformers'. Processing without...
- No module 'xformers'. Proceeding without it.
- C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
- rank_zero_deprecation(
- Launching Web UI with arguments: --skip-torch-cuda-test --use-directml --device-id 1
- Warning: caught exception 'Something went wrong.', memory monitor disabled
- ONNX failed to initialize: module optimum.onnxruntime has no attribute ORTStableDiffusionXLPipeline
- Loading weights [338b85bc4f] from C:\Users\aarya\stable-diffusion-webui-amdgpu\models\Stable-diffusion\juggernaut_reborn.safetensors
- Creating model from config: C:\Users\aarya\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
- C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py:795: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
- warnings.warn(
- Running on local URL: http://127.0.0.1:7860
- To create a public link, set `share=True` in `launch()`.
- Startup time: 64.3s (prepare environment: 72.5s, initialize shared: 1.7s, other imports: 0.1s, load scripts: 1.2s, create ui: 0.6s, gradio launch: 0.4s).
- Applying attention optimization: Doggettx... done.
- Model loaded in 6.8s (load weights from disk: 0.5s, create model: 0.8s, apply weights to model: 4.2s, apply half(): 0.1s, move model to device: 0.2s, calculate empty prompt: 1.0s).
- C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\safe.py:156: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
- return unsafe_torch_load(filename, *args, **kwargs)
- 0%| | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(3yebpe6ninfw7x1)', <gradio.routes.Request object at 0x000001EF89A239D0>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 53, in f
    res = func(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\processing.py", line 849, in process_images
    res = process_images_inner(p)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\processing.py", line 1083, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\processing.py", line 1441, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 34, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_unet.py", line 50, in apply_model
    result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 36, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 34, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_unet.py", line 96, in spatial_transformer_forward
    x = block(x, context=context[i])
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 123, in checkpoint
    return func(*inputs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_optimizations.py", line 247, in split_cross_attention_forward
    mem_free_total = get_available_vram()
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\sd_hijack_optimizations.py", line 176, in get_available_vram
    return torch.dml.mem_get_info(shared.device)[0]
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\dml\backend.py", line 18, in pdh_mem_get_info
    mem_info = DirectML.memory_provider.get_memory(get_device(device).index)
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\dml\memory.py", line 17, in get_memory
    paths_dedicated = expand_wildcard_path(f"\\GPU Process Memory(pid_{pid}_*_phys_{device_id})\\Dedicated Usage")
  File "C:\Users\aarya\stable-diffusion-webui-amdgpu\modules\dml\pdh\__init__.py", line 25, in expand_wildcard_path
    raise PDHError("Something went wrong.")
modules.dml.pdh.errors.PDHError: Something went wrong.
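
Reading the traceback from the bottom up: the Doggettx split-attention path calls get_available_vram(), which on this DirectML build goes through torch.dml.mem_get_info and reads the Windows performance counter \GPU Process Memory(pid_<pid>_*_phys_<device_id>)\Dedicated Usage via PDH (Performance Data Helper). expand_wildcard_path raises PDHError("Something went wrong.") when that wildcard expansion does not succeed, and the startup line "Warning: caught exception 'Something went wrong.', memory monitor disabled" appears to be the same query failing earlier. Below is a minimal standalone sketch, not the webui's code, that performs the same kind of PDH wildcard expansion through the Win32 PdhExpandWildCardPathW API via ctypes so the counter can be checked outside the webui; the helper name, error handling, and the assumption that device index 1 (from --device-id 1) maps to phys_1 are illustrative.

# Minimal sketch (assumptions noted above), Windows-only: expand a PDH wildcard
# counter path the way the webui's memory provider needs to succeed.
import ctypes
import os
import sys

pdh = ctypes.windll.pdh
pdh.PdhExpandWildCardPathW.restype = ctypes.c_ulong
PDH_MORE_DATA = 0x800007D2  # "buffer too small": call again with the returned size


def expand_wildcard_path(wildcard: str) -> list[str]:
    # First call with a NULL buffer to learn the required length in characters.
    size = ctypes.c_ulong(0)
    status = pdh.PdhExpandWildCardPathW(None, wildcard, None, ctypes.byref(size), 0)
    if status != PDH_MORE_DATA:
        raise OSError(f"PdhExpandWildCardPathW failed: 0x{status:08X}")
    # Second call fills a double-NUL-terminated list of expanded counter paths.
    buf = ctypes.create_unicode_buffer(size.value)
    status = pdh.PdhExpandWildCardPathW(None, wildcard, buf, ctypes.byref(size), 0)
    if status != 0:
        raise OSError(f"PdhExpandWildCardPathW failed: 0x{status:08X}")
    return [p for p in buf[:size.value].split("\x00") if p]


if __name__ == "__main__":
    # Pass the webui's pid as an argument for a faithful check; the counter set
    # only has instances for processes that currently hold GPU allocations.
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
    device_id = 1  # matches --device-id 1 from the launch arguments above
    paths = expand_wildcard_path(
        rf"\GPU Process Memory(pid_{pid}_*_phys_{device_id})\Dedicated Usage"
    )
    print(paths or f"no counter instance matches phys_{device_id} for pid {pid}")

If this finds no instance (or PdhExpandWildCardPathW itself errors out) for the pid of the running webui process and the phys_ index corresponding to --device-id, the wildcard expansion inside the webui would presumably come up empty in the same way, which is consistent with both the disabled memory monitor at startup and the PDHError raised during sampling.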