- venv "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
- fatal: No names found, cannot describe anything.
- Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
- Version: 1.6.1
- Commit hash: 03eec1791be011e087985ae93c1f66315d5a250e
- #######################################################################################################
- Initializing Civitai Link
- If submitting an issue on github, please provide the below text for debugging purposes:
- Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
- Civitai Link revision: 136983e89859fd0477b4a437ed333142a6aa29a4
- SD-WebUI revision: 03eec1791be011e087985ae93c1f66315d5a250e
- Checking Civitai Link requirements...
- [+] python-socketio[client] version 5.7.2 installed.
- #######################################################################################################
- Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch --no-half-vae --share
- no module 'xformers'. Processing without...
- no module 'xformers'. Processing without...
- No module 'xformers'. Proceeding without it.
- ========================= a1111-sd-webui-lycoris =========================
- Starting from stable-diffusion-webui version 1.5.0
- a1111-sd-webui-lycoris extension is no longer needed
- All its features have been integrated into the native LoRA extension
- LyCORIS models can now be used as if there are regular LoRA models
- This extension has been automatically deactivated
- Please remove this extension
- ==========================================================================
- [-] ADetailer initialized. version: 23.11.1, num models: 10
- 2023-12-09 15:48:40,172 - ControlNet - INFO - ControlNet v1.1.419
- ControlNet preprocessor location: C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
- 2023-12-09 15:48:40,258 - ControlNet - INFO - ControlNet v1.1.419
- Civitai: API loaded
- Loading weights [ef76aa2332] from C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors
- Running on local URL: http://127.0.0.1:7860
- Creating model from config: C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\configs\v1-inference.yaml
- Applying attention optimization: sub-quadratic... done.
- Running on public URL: https://b67f6e752af408ac6d.gradio.live
- This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
- Civitai: Check resources for missing info files
- Civitai: Check resources for missing preview images
- Startup time: 27.4s (prepare environment: 4.6s, import torch: 5.8s, import gradio: 1.8s, setup paths: 2.0s, initialize shared: 1.8s, other imports: 0.7s, setup codeformer: 0.1s, list SD models: 0.2s, load scripts: 2.4s, create ui: 0.6s, gradio launch: 7.5s).
- Civitai: Found 1 resources missing info files
- Civitai: Found 1 resources missing preview images
- Model loaded in 8.3s (load weights from disk: 0.8s, create model: 0.3s, apply weights to model: 6.3s, apply half(): 0.5s, calculate empty prompt: 0.3s).
- Civitai: No info found on Civitai
- Civitai: No preview images found on Civitai
- 1006
- load checkpoint from C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\models\BLIP\model_base_caption_capfilt_large.pth
- *** Error interrogating
- Traceback (most recent call last):
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\interrogate.py", line 208, in interrogate
- matches = self.rank(image_features, cat.items, top_count=cat.topn)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\interrogate.py", line 162, in rank
- text_features = self.clip_model.encode_text(text_tokens).type(self.dtype)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 348, in encode_text
- x = self.transformer(x)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 203, in forward
- return self.resblocks(x)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
- input = module(input)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 190, in forward
- x = x + self.attention(self.ln_1(x))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 187, in attention
- return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 486, in network_MultiheadAttention_forward
- return originals.MultiheadAttention_forward(self, *args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\activation.py", line 1189, in forward
- attn_output, attn_output_weights = F.multi_head_attention_forward(
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 5335, in multi_head_attention_forward
- attn_output = attn_output.permute(2, 0, 1, 3).contiguous().view(bsz * tgt_len, embed_dim)
- RuntimeError: Could not allocate tensor with 177408000 bytes. There is not enough GPU video memory available!
- ---
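The 177,408,000-byte allocation above comes from the CLIP interrogator ranking its whole label dictionary in a single batch. A back-of-envelope check, with the batch size, token context, embedding width, and fp16 dtype taken as assumptions consistent with the reported byte count rather than values read from the log:

    # Rough check of the failed allocation in the traceback above.
    # Assumed figures (not read from the log): 1,500 candidate labels per batch
    # (a common default for the interrogator's dictionary limit), CLIP ViT-L/14's
    # 77-token context and 768-dim text width, fp16 activations.
    batch, context, width, bytes_per_elem = 1500, 77, 768, 2
    print(batch * context * width * bytes_per_elem)  # 177408000, matching the error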
- *** Error interrogating
- Traceback (most recent call last):
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\interrogate.py", line 208, in interrogate
- matches = self.rank(image_features, cat.items, top_count=cat.topn)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\interrogate.py", line 162, in rank
- text_features = self.clip_model.encode_text(text_tokens).type(self.dtype)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 348, in encode_text
- x = self.transformer(x)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 203, in forward
- return self.resblocks(x)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
- input = module(input)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 191, in forward
- x = x + self.mlp(self.ln_2(x))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
- input = module(input)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 429, in network_Linear_forward
- return originals.Linear_forward(self, input)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
- return F.linear(input, self.weight, self.bias)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 39, in <lambda>
- setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: forward(op, args, kwargs))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 17, in forward
- return op(*args, **kwargs)
- RuntimeError: Could not allocate tensor with 709632000 bytes. There is not enough GPU video memory available!
- ---
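The second interrogation failure is the same batch one layer further on: the transformer block's MLP widens the 768-dim activation by 4x, so the hidden tensor is four times larger (1,500 x 77 x 3,072 x 2 bytes = 709,632,000, again matching the error). A hypothetical mitigation, sketched below with the openai-clip API and an arbitrarily chosen chunk size, is to encode the label dictionary in smaller chunks so the peak activation stays bounded; lowering the interrogation dictionary limit in the WebUI settings (where that option is available) has the same effect.

    # Hypothetical sketch, not code from this repo: rank labels in small chunks
    # to keep the text-encoder activations within low-VRAM limits.
    import torch
    import clip  # openai-clip package

    def encode_labels_chunked(model, labels, device, chunk=256):
        feats = []
        with torch.no_grad():
            for i in range(0, len(labels), chunk):
                tokens = clip.tokenize(labels[i:i + chunk], truncate=True).to(device)
                feats.append(model.encode_text(tokens).float().cpu())
        return torch.cat(feats)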
- 100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:40<00:00, 4.02s/it]
- Total progress: 100%|██████████████████████████████████████████████████████████████████| 10/10 [00:40<00:00, 4.01s/it]
- 100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:29<00:00, 4.18s/it]
- Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [00:28<00:00, 4.10s/it]
- 100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:34<00:00, 4.98s/it]
- Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [00:33<00:00, 4.77s/it]
- 100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:38<00:00, 5.43s/it]
- Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [00:36<00:00, 5.20s/it]
- 100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:39<00:00, 5.70s/it]
- Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [00:38<00:00, 5.43s/it]
- *** Error completing request
- *** Arguments: ('task(tfxxkkz27ipgxzb)', 2, 'a flying plane with a white sky trail, architecture, blue sky, bridge, building, city, cityscape, cloud, cloudy sky, day, east asian architecture, horizon, house, mountain, no humans, ocean, outdoors, railing, road, scenery, sky, skyscraper, tower, tree', 'aid210 bad-hands-5 bad_prompt_version2-neg boring_e621_v4 easynegative', [], <PIL.Image.Image image mode=RGBA size=575x863 at 0x2768F42DA50>, None, {'image': <PIL.Image.Image image mode=RGBA size=3456x5184 at 0x2768F42DEA0>, 'mask': <PIL.Image.Image image mode=RGB size=3456x5184 at 0x2768F42CA00>}, None, None, None, None, 30, 'DPM++ 2M SDE Karras', 4, 0, 0, 1, 1, 9, 1.5, 0.2, 1, 768, 576, 1, 0, 0, 80, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000002768E56FF70>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same 
VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 'Positive', 0, ', ', 'Generate and always save', 32) {}
- Traceback (most recent call last):
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
- res = list(func(*args, **kwargs))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
- res = func(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\img2img.py", line 217, in img2img
- processed = process_images(p)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 733, in process_images
- res = process_images_inner(p)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
- return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 807, in process_images_inner
- p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 1500, in init
- self.init_latent = images_tensor_to_samples(image, approximation_indexes.get(opts.sd_vae_encode_method), self.sd_model)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 110, in images_tensor_to_samples
- x_latent = model.get_first_stage_encoding(model.encode_first_stage(image))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
- setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
- return self.__orig_func(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
- return func(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
- return self.first_stage_model.encode(x)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\lowvram.py", line 67, in first_stage_model_encode_wrap
- return first_stage_model_encode(x)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
- h = self.encoder(x)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 523, in forward
- hs = [self.conv_in(x)]
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
- return originals.Conv2d_forward(self, input)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
- return self._conv_forward(input, self.weight, self.bias)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
- return F.conv2d(input, weight, bias, self.stride,
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 39, in <lambda>
- setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: forward(op, args, kwargs))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 17, in forward
- return op(*args, **kwargs)
- RuntimeError
- ---
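Both "Error completing request" failures are img2img/inpaint requests on a card already running with --lowvram. This first one dies inside encode_first_stage while building the init latent (the final RuntimeError line carries no detail here), and the request's arguments show a 3456x5184 source image behind the mask. A common workaround on low-VRAM DirectML setups is to downscale the source before uploading it; a hypothetical sketch using Pillow follows, where the file names and the 1024-pixel cap are examples, not values from this log.

    # Hypothetical pre-processing sketch, not code from this repo: shrink an
    # oversized init image before handing it to img2img/inpaint.
    from PIL import Image

    def shrink_for_img2img(path, max_side=1024):
        img = Image.open(path).convert("RGB")
        scale = max_side / max(img.size)
        if scale < 1:
            new_size = (round(img.width * scale), round(img.height * scale))
            img = img.resize(new_size, Image.LANCZOS)
        return img

    shrink_for_img2img("source.png").save("source_small.png")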
- 0%| | 0/7 [00:00<?, ?it/s]
- *** Error completing request
- *** Arguments: ('task(wnlacknvup9xdbt)', 2, 'a flying plane with a white sky trail, architecture, blue sky, bridge, building, city, cityscape, cloud, cloudy sky, day, east asian architecture, horizon, house, mountain, no humans, ocean, outdoors, railing, road, scenery, sky, skyscraper, tower, tree', 'aid210 bad-hands-5 bad_prompt_version2-neg boring_e621_v4 easynegative', [], <PIL.Image.Image image mode=RGBA size=575x863 at 0x276907C0EB0>, None, {'image': <PIL.Image.Image image mode=RGBA size=691x1036 at 0x276907C3730>, 'mask': <PIL.Image.Image image mode=RGB size=691x1036 at 0x276907C1FC0>}, None, None, None, None, 30, 'DPM++ 2M SDE Karras', 4, 0, 0, 1, 1, 9, 1.5, 0.2, 1, 768, 576, 1, 0, 0, 80, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000002768E56FF40>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same 
VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 'Positive', 0, ', ', 'Generate and always save', 32) {}
- Traceback (most recent call last):
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
- res = list(func(*args, **kwargs))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
- res = func(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\img2img.py", line 217, in img2img
- processed = process_images(p)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 733, in process_images
- res = process_images_inner(p)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
- return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 879, in process_images_inner
- x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 594, in decode_latent_batch
- sample = decode_first_stage(model, batch[i:i + 1])[0]
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 76, in decode_first_stage
- return samples_to_images_tensor(x, approx_index, model)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 58, in samples_to_images_tensor
- x_sample = model.decode_first_stage(sample.to(model.first_stage_model.dtype))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
- setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
- return self.__orig_func(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
- return func(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
- return self.first_stage_model.decode(z)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\lowvram.py", line 71, in first_stage_model_decode_wrap
- return first_stage_model_decode(z)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 90, in decode
- dec = self.decoder(z)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 637, in forward
- h = self.up[i_level].block[i_block](h, temb)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 131, in forward
- h = self.norm1(h)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
- return forward_call(*args, **kwargs)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
- return originals.GroupNorm_forward(self, input)
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
- return F.group_norm(
- File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
- return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
- RuntimeError: Could not allocate tensor with 727056384 bytes. There is not enough GPU video memory available!
- ---
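The last failure happens after the sampler has already produced a latent: the traceback shows decode_first_stage running out of memory in the VAE decoder's group_norm (727,056,384 bytes requested). Since the latent is intact at that point, one generic recovery, sketched hypothetically below (this is not how this repo handles it), is to catch the allocation error and redo the decode on the CPU in fp32; within the WebUI itself, generating at a smaller resolution or using a tiled-VAE extension are the usual knobs.

    # Hypothetical fallback sketch, not code from this repo: retry a failed VAE
    # decode on the CPU. `decoder` is any torch.nn.Module (it is moved to the CPU
    # on fallback); `latent` is the sampled latent tensor. Assumes the backend
    # raises RuntimeError containing the "GPU video memory" message seen above.
    import torch

    def decode_with_cpu_fallback(decoder: torch.nn.Module, latent: torch.Tensor):
        try:
            return decoder(latent)
        except RuntimeError as err:
            if "GPU video memory" not in str(err):
                raise
            decoder_cpu = decoder.to("cpu").float()
            return decoder_cpu(latent.detach().to("cpu").float())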
- 100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [01:11<00:00, 10.16s/it]
- Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [01:06<00:00, 9.46s/it]
- Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [01:06<00:00, 9.44s/it]