Aug 5th, 2023
venv "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Commit hash: fd59537df9420bb14c1d6330ec59e30ce870a481
Installing requirements for Web UI

Launching Web UI with arguments:
Interrogations are fallen back to cpu. This doesn't affect on image generation. But if you want to use interrogate (CLIP or DeepBooru), check out this issue: https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/10
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
No module 'xformers'. Proceeding without it.
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
2023-08-05 11:18:39,491 - ControlNet - INFO - ControlNet v1.1.234
ControlNet preprocessor location: C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2023-08-05 11:18:39,550 - ControlNet - INFO - ControlNet v1.1.234
Loading weights [301ef69a1a] from C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\models\Stable-diffusion\SaltedToffeeMix_v1.safetensors
Creating model from config: C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (InvokeAI).
Textual inversion embeddings loaded(16): bad-hands-5, bad-picture-chill-75v, badhandsv5-neg, badhandv4, bad_prompt_version2, boring_e621, boring_e621_fluffyrock_v4, bwu, By bad artist -neg, deformityv6, dfc, easynegative, ng_deepnegative_v1_75t, ubbp, updn, verybadimagenegative_v1.3
Model loaded in 6.1s (load weights from disk: 0.2s, create model: 0.3s, apply weights to model: 3.1s, apply half(): 0.9s, move model to device: 1.5s).
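As a quick sanity check on the reported timings, the per-stage breakdown in the "Model loaded" line can be summed; it accounts for 6.0 s of the 6.1 s total, with the remainder presumably uninstrumented overhead. (The stage names and values below are copied from the log line itself.)

```python
# Cross-check the model-load breakdown reported in the log.
stages = {
    "load weights from disk": 0.2,
    "create model": 0.3,
    "apply weights to model": 3.1,
    "apply half()": 0.9,
    "move model to device": 1.5,
}
accounted = round(sum(stages.values()), 1)
print(accounted)  # 6.0 of the 6.1 s total
```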
Error executing callback ui_tabs_callback for C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\extensions\Styles-Editor\scripts\main.py
Traceback (most recent call last):
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\script_callbacks.py", line 125, in ui_tabs_callback
    res += c.callback() or []
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\extensions\Styles-Editor\scripts\main.py", line 237, in on_ui_tabs
    cls.dataeditor.input(fn=cls.handle_dataeditor_input, inputs=[cls.dataeditor, cls.autosort_checkbox], outputs=cls.dataeditor)
AttributeError: 'Dataframe' object has no attribute 'input'

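The AttributeError above suggests the Styles-Editor extension targets a newer Gradio release than the one bundled with this webui build: it calls the `Dataframe.input` event listener, which older Gradio versions do not provide, while the longer-standing `.change` event does exist. A minimal compatibility guard could look like the following sketch (the helper name `wire_dataeditor` is hypothetical, not part of the extension):

```python
# Hypothetical compatibility shim for the failing Styles-Editor call.
# Prefer the newer .input event when the installed Gradio provides it,
# otherwise fall back to .change, which fires on any value update.
def wire_dataeditor(dataeditor, handler, inputs, outputs):
    event = getattr(dataeditor, "input", None) or dataeditor.change
    event(fn=handler, inputs=inputs, outputs=outputs)
```

On the Gradio version in this log, the fallback branch would be taken instead of raising the AttributeError.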
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 13.2s (import torch: 1.3s, import gradio: 0.7s, import ldm: 0.3s, other imports: 1.2s, setup codeformer: 0.1s, load scripts: 1.1s, load SD checkpoint: 6.3s, create ui: 1.9s, gradio launch: 0.1s).
0%| | 0/60 [00:02<?, ?it/s]
Error completing request
Arguments: ('task(jr590o2u4zzrtle)', 'human, solo, male, on side, sitting, front view, masterpiece, (8K quality), no watermark, no signature,', 'bwu, dfc, ubbp, updn, easynegative, beard', [], 60, 14, False, False, 1, 1, 7.5, -1.0, -1.0, 0, 0, 0, False, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, 0, False, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002439E1FC400>, '', None, ['artist', 'character', 'species', 'general'], '', 'Reset form', 'Generate', False, False, 'Matrix', 'Horizontal', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, '', '', 0, None, None, False, 50, False, False, 'Euler a', 0.95, 0.75, '0.75:0.95:5', '0.2:0.8:5', 'zero', 'pos', 'linear', 0.2, 0.0, 0.75, None, 'Lanczos', 1, 0, 0) {}
Traceback (most recent call last):
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\processing.py", line 653, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\processing.py", line 869, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 358, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 234, in launch_sampling
    return func()
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 358, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 523, in sample_dpmpp_2s_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 145, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 245, in split_cross_attention_forward_invokeAI
    r = einsum_op(q, k, v)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 220, in einsum_op
    return einsum_op_dml(q, k, v)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 208, in einsum_op_dml
    return einsum_op_tensor_mem(q, k, v, (mem_reserved - mem_active) if mem_reserved > mem_active else 1)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 189, in einsum_op_tensor_mem
    return einsum_op_compvis(q, k, v)
  File "C:\Users\Karak\Documents\ai\art\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 154, in einsum_op_compvis
    s = s.softmax(dim=-1, dtype=s.dtype)
RuntimeError: Could not allocate tensor with 1207959552 bytes. There is not enough GPU video memory available!
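The failed 1,207,959,552-byte allocation is exactly the size the self-attention score matrix would need under plausible assumptions for this request: the Arguments line shows a 768x512 generation, which gives a 96x64 latent (6144 tokens); if the sampler batches cond and uncond together (batch 2), the model uses 8 attention heads, and activations are fp16 (2 bytes/element), the numbers match. All four parameters are inferred from context, not stated in the log:

```python
# Size of the (tokens x tokens) attention score tensor for a 768x512 image.
# Assumptions (inferred, not logged): latent downscale 8x, 8 heads,
# batch 2 (cond + uncond), fp16 activations.
tokens = (768 // 8) * (512 // 8)          # 6144 latent positions
batch, heads, dtype_bytes = 2, 8, 2
score_bytes = batch * heads * tokens * tokens * dtype_bytes
print(score_bytes)  # 1207959552 -- matches the failed allocation
```

This is why the crash happens at the softmax inside `einsum_op_compvis`: the score tensor alone needs ~1.1 GiB on top of everything already resident, so options like a smaller resolution, or the webui's `--medvram`/`--lowvram` flags, reduce exactly this pressure.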