Weird Inpaint bug

Dec 9th, 2023
venv "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.6.1
Commit hash: 03eec1791be011e087985ae93c1f66315d5a250e
#######################################################################################################
Initializing Civitai Link
If submitting an issue on github, please provide the below text for debugging purposes:

Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Civitai Link revision: 136983e89859fd0477b4a437ed333142a6aa29a4
SD-WebUI revision: 03eec1791be011e087985ae93c1f66315d5a250e

Checking Civitai Link requirements...
[+] python-socketio[client] version 5.7.2 installed.

#######################################################################################################
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch --no-half-vae --share
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.

========================= a1111-sd-webui-lycoris =========================
Starting from stable-diffusion-webui version 1.5.0
the a1111-sd-webui-lycoris extension is no longer needed

All its features have been integrated into the native LoRA extension
LyCORIS models can now be used as if they were regular LoRA models

This extension has been automatically deactivated
Please remove this extension
==========================================================================

[-] ADetailer initialized. version: 23.11.1, num models: 10
2023-12-09 15:48:40,172 - ControlNet - INFO - ControlNet v1.1.419
ControlNet preprocessor location: C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2023-12-09 15:48:40,258 - ControlNet - INFO - ControlNet v1.1.419
Civitai: API loaded
Loading weights [ef76aa2332] from C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors
Running on local URL: http://127.0.0.1:7860
Creating model from config: C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\configs\v1-inference.yaml
Applying attention optimization: sub-quadratic... done.
Running on public URL: https://b67f6e752af408ac6d.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Civitai: Check resources for missing info files
Civitai: Check resources for missing preview images
Startup time: 27.4s (prepare environment: 4.6s, import torch: 5.8s, import gradio: 1.8s, setup paths: 2.0s, initialize shared: 1.8s, other imports: 0.7s, setup codeformer: 0.1s, list SD models: 0.2s, load scripts: 2.4s, create ui: 0.6s, gradio launch: 7.5s).
Civitai: Found 1 resources missing info files
Civitai: Found 1 resources missing preview images
Model loaded in 8.3s (load weights from disk: 0.8s, create model: 0.3s, apply weights to model: 6.3s, apply half(): 0.5s, calculate empty prompt: 0.3s).
Civitai: No info found on Civitai
Civitai: No preview images found on Civitai
1006
load checkpoint from C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\models\BLIP\model_base_caption_capfilt_large.pth
*** Error interrogating
Traceback (most recent call last):
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\interrogate.py", line 208, in interrogate
    matches = self.rank(image_features, cat.items, top_count=cat.topn)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\interrogate.py", line 162, in rank
    text_features = self.clip_model.encode_text(text_tokens).type(self.dtype)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 348, in encode_text
    x = self.transformer(x)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 203, in forward
    return self.resblocks(x)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 190, in forward
    x = x + self.attention(self.ln_1(x))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 187, in attention
    return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 486, in network_MultiheadAttention_forward
    return originals.MultiheadAttention_forward(self, *args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\activation.py", line 1189, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 5335, in multi_head_attention_forward
    attn_output = attn_output.permute(2, 0, 1, 3).contiguous().view(bsz * tgt_len, embed_dim)
RuntimeError: Could not allocate tensor with 177408000 bytes. There is not enough GPU video memory available!

---
*** Error interrogating
Traceback (most recent call last):
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\interrogate.py", line 208, in interrogate
    matches = self.rank(image_features, cat.items, top_count=cat.topn)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\interrogate.py", line 162, in rank
    text_features = self.clip_model.encode_text(text_tokens).type(self.dtype)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 348, in encode_text
    x = self.transformer(x)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 203, in forward
    return self.resblocks(x)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\clip\model.py", line 191, in forward
    x = x + self.mlp(self.ln_2(x))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 429, in network_Linear_forward
    return originals.Linear_forward(self, input)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 39, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: forward(op, args, kwargs))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 17, in forward
    return op(*args, **kwargs)
RuntimeError: Could not allocate tensor with 709632000 bytes. There is not enough GPU video memory available!

---
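The two allocation sizes above line up exactly with CLIP text-encoder activation shapes. A minimal sanity-check sketch, assuming the interrogator uses ViT-B/32 (transformer width 512, MLP width 4*512, 77-token context) in float32; the batch of 1125 candidate texts is inferred from the byte counts, not stated anywhere in the log:

# Sketch: do the failed allocations match CLIP text-encoder activations?
# Assumed: ViT-B/32 shapes, float32, and an inferred batch of 1125 texts.
batch, ctx, width = 1125, 77, 512
attn_out = batch * ctx * width * 4        # attn_output.view(...) in the first traceback
mlp_act = batch * ctx * (4 * width) * 4   # MLP hidden activation in the second traceback
print(attn_out)  # 177408000 -- matches the first RuntimeError
print(mlp_act)   # 709632000 -- matches the second RuntimeError

Both numbers match exactly, which suggests these two OOMs come from ranking a large batch of candidate tags through the CLIP text encoder at once, not from the image being interrogated.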
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:40<00:00, 4.02s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 10/10 [00:40<00:00, 4.01s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:29<00:00, 4.18s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [00:28<00:00, 4.10s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:34<00:00, 4.98s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [00:33<00:00, 4.77s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:38<00:00, 5.43s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [00:36<00:00, 5.20s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:39<00:00, 5.70s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [00:38<00:00, 5.43s/it]
*** Error completing request
*** Arguments: ('task(tfxxkkz27ipgxzb)', 2, 'a flying plane with a white sky trail, architecture, blue sky, bridge, building, city, cityscape, cloud, cloudy sky, day, east asian architecture, horizon, house, mountain, no humans, ocean, outdoors, railing, road, scenery, sky, skyscraper, tower, tree', 'aid210 bad-hands-5 bad_prompt_version2-neg boring_e621_v4 easynegative', [], <PIL.Image.Image image mode=RGBA size=575x863 at 0x2768F42DA50>, None, {'image': <PIL.Image.Image image mode=RGBA size=3456x5184 at 0x2768F42DEA0>, 'mask': <PIL.Image.Image image mode=RGB size=3456x5184 at 0x2768F42CA00>}, None, None, None, None, 30, 'DPM++ 2M SDE Karras', 4, 0, 0, 1, 1, 9, 1.5, 0.2, 1, 768, 576, 1, 0, 0, 80, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000002768E56FF70>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 'Positive', 0, ', ', 'Generate and always save', 32) {}
Traceback (most recent call last):
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\img2img.py", line 217, in img2img
    processed = process_images(p)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 733, in process_images
    res = process_images_inner(p)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 807, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 1500, in init
    self.init_latent = images_tensor_to_samples(image, approximation_indexes.get(opts.sd_vae_encode_method), self.sd_model)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 110, in images_tensor_to_samples
    x_latent = model.get_first_stage_encoding(model.encode_first_stage(image))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\lowvram.py", line 67, in first_stage_model_encode_wrap
    return first_stage_model_encode(x)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 523, in forward
    hs = [self.conv_in(x)]
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
    return originals.Conv2d_forward(self, input)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 39, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: forward(op, args, kwargs))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 17, in forward
    return op(*args, **kwargs)
RuntimeError

---
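The Arguments dump for this request shows the inpaint source as a 3456x5184 RGBA image (with a 3456x5184 mask) while the target size is only 768x576, and the bare RuntimeError above is raised during the VAE encode (encode_first_stage -> conv_in). One plausible reading is that the full-resolution image reaches the encoder. A rough scale sketch, assuming the SD v1 VAE encoder (whose conv_in produces 128 channels at input resolution) kept in float32 by --no-half-vae; whether the image is actually resized before encoding depends on the inpaint resize settings, so treat this as an order-of-magnitude estimate:

# Sketch: first-layer activation size if the 3456x5184 source hits the VAE unscaled.
# Assumed: SD v1 VAE conv_in (3 -> 128 channels, stride 1), float32 (--no-half-vae).
w, h, ch = 3456, 5184, 128
conv_in_bytes = ch * w * h * 4
print(f"{conv_in_bytes / 2**30:.1f} GiB")  # ~8.5 GiB for a single activation tensor

Even one activation of that size would exceed the VRAM of any GPU run with --lowvram, which would explain the allocator giving up before it can even report a byte count.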
0%| | 0/7 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(wnlacknvup9xdbt)', 2, 'a flying plane with a white sky trail, architecture, blue sky, bridge, building, city, cityscape, cloud, cloudy sky, day, east asian architecture, horizon, house, mountain, no humans, ocean, outdoors, railing, road, scenery, sky, skyscraper, tower, tree', 'aid210 bad-hands-5 bad_prompt_version2-neg boring_e621_v4 easynegative', [], <PIL.Image.Image image mode=RGBA size=575x863 at 0x276907C0EB0>, None, {'image': <PIL.Image.Image image mode=RGBA size=691x1036 at 0x276907C3730>, 'mask': <PIL.Image.Image image mode=RGB size=691x1036 at 0x276907C1FC0>}, None, None, None, None, 30, 'DPM++ 2M SDE Karras', 4, 0, 0, 1, 1, 9, 1.5, 0.2, 1, 768, 576, 1, 0, 0, 80, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000002768E56FF40>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 'Positive', 0, ', ', 'Generate and always save', 32) {}
Traceback (most recent call last):
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\img2img.py", line 217, in img2img
    processed = process_images(p)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 733, in process_images
    res = process_images_inner(p)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 879, in process_images_inner
    x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\processing.py", line 594, in decode_latent_batch
    sample = decode_first_stage(model, batch[i:i + 1])[0]
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 76, in decode_first_stage
    return samples_to_images_tensor(x, approx_index, model)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 58, in samples_to_images_tensor
    x_sample = model.decode_first_stage(sample.to(model.first_stage_model.dtype))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\modules\lowvram.py", line 71, in first_stage_model_decode_wrap
    return first_stage_model_decode(z)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 90, in decode
    dec = self.decoder(z)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 637, in forward
    h = self.up[i_level].block[i_block](h, temb)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 131, in forward
    h = self.norm1(h)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
    return originals.GroupNorm_forward(self, input)
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "C:\Users\NONE\Documents\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Could not allocate tensor with 727056384 bytes. There is not enough GPU video memory available!

---
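To put all three reported failures on a common scale, a small diagnostic sketch using only the byte counts printed in this log (pure arithmetic, no other assumptions):

# Convert the reported allocation sizes to GiB for comparison.
for label, nbytes in [
    ("CLIP attention output", 177_408_000),
    ("CLIP MLP activation", 709_632_000),
    ("VAE decode group_norm", 727_056_384),
]:
    print(f"{label}: {nbytes / 2**30:.2f} GiB ({nbytes // 4:,} float32 elements)")
# -> 0.17, 0.66 and 0.68 GiB respectively: individually modest, but each must fit
#    alongside the model weights already resident on a --lowvram GPU.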
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [01:11<00:00, 10.16s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [01:06<00:00, 9.46s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 7/7 [01:06<00:00, 9.44s/it]