J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge>git pull
Already up to date.
venv "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.10-latest-61-g65f9c7d4
Commit hash: 65f9c7d442c0e1e1dafaee9da1df587a48b742d0
Launching Web UI with arguments: --xformers
Total VRAM 3072 MB, total RAM 16336 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
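
[Note: the lowvram fallback above is triggered by the reported 3072 MB of total VRAM, which is under the 4 GB cutoff the message describes. A minimal sketch of such a total-memory check, using plain PyTorch; this is an illustration of the heuristic, not Forge's actual code:

import torch

# Hypothetical sketch of a 4 GB low-VRAM heuristic; Forge's real check
# lives in its memory-management code and may differ in detail.
def should_enable_lowvram(threshold_mb: int = 4096) -> bool:
    if not torch.cuda.is_available():
        return False
    total_bytes = torch.cuda.get_device_properties(0).total_memory
    total_mb = total_bytes / (1024 * 1024)
    # The GTX 1060 3GB in this log reports 3072 MB, under the threshold.
    return total_mb <= threshold_mb

]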
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
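
[Note: this warning is benign. xformers treats Triton as an optional dependency, and Triton ships no official Windows wheels, so the import fails and the Triton-backed kernels are simply skipped. A sketch of that optional-import guard pattern; illustrative, not xformers' actual source:

# Guarding an optional dependency so its absence degrades gracefully.
try:
    import triton  # no official Windows wheels, so this raises here
    HAS_TRITON = True
except ImportError as err:
    HAS_TRITON = False
    print("WARNING:xformers:A matching Triton is not available, "
          "some optimizations will not be enabled.")
    print(f"Error caught was: {err}")

]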
xformers version: 0.0.23.post1
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1060 3GB : native
VAE dtype: torch.float32
Using xformers cross attention
ControlNet preprocessor location: J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\models\ControlNetPreprocessor
Loading weights [67ab2fd8ec] from J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\models\Stable-diffusion\v6.safetensors
2024-02-07 03:53:26,393 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Startup time: 26.2s (initial startup: 0.1s, prepare environment: 8.3s, import torch: 7.8s, import gradio: 2.1s, setup paths: 1.2s, initialize shared: 0.2s, other imports: 1.9s, load scripts: 2.5s, create ui: 1.3s, gradio launch: 0.9s).
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Loading VAE weights specified in settings: J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
To load target model SDXLClipModel
Begin to load 1 model
Model loaded in 108.1s (load weights from disk: 1.9s, forge load real models: 92.2s, forge set components: 0.1s, forge finalize: 4.1s, load VAE: 1.7s, load textual inversion embeddings: 0.2s, calculate empty prompt: 8.0s).
To load target model SDXL
Begin to load 1 model
loading in lowvram mode 788.1377696990967
Moving model(s) has taken 0.38 seconds
0%| | 0/20 [00:09<?, ?it/s]
0%| | 0/1000 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(px7biigjrakuab1)', <gradio.routes.Request object at 0x000001F8FCD1B9D0>, 'TEST', 'NEGPROMPT TEST', [], 20, 'Euler a', 50, 1, 6.5, 1024, 704, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\txt2img.py", line 110, in txt2img
    processed = processing.process_images(p)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 749, in process_images
    res = process_images_inner(p)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 920, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 260, in launch_sampling
    return func()
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 149, in sample_euler_ancestral
    d = to_d(x, sigmas[i], denoised)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 48, in to_d
    return (x - denoised) / utils.append_dims(sigma, x.ndim)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
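
[Note: the failing frame is k-diffusion's to_d(), where append_dims(sigma, x.ndim) expands sigma into a 4-d tensor before the division. A 0-dim CPU scalar can mix with CUDA tensors in PyTorch arithmetic, but the expanded non-scalar cannot; since this run is in lowvram mode, parts of the pipeline stay on the CPU, and here the sigma evidently never made it to cuda:0 while x and denoised did. A minimal sketch that reproduces the error and the generic fix of moving the stray tensor onto the other operands' device; where the fix actually belongs in Forge (wherever the sigma schedule is built or handed to the sampler) is an assumption:

import torch  # requires a CUDA-capable GPU to reproduce

x = torch.randn(1, 4, 88, 128, device="cuda:0")  # latent lives on the GPU
denoised = torch.randn_like(x)                   # also on cuda:0
sigma = torch.full((1, 1, 1, 1), 14.6)           # broadcastable, but left on the CPU

try:
    d = (x - denoised) / sigma                   # mixed devices -> RuntimeError
except RuntimeError as err:
    print(err)  # Expected all tensors to be on the same device ... cuda:0 and cpu!

d = (x - denoised) / sigma.to(x.device)          # fix: move sigma to cuda:0 first

]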