J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge>git pull
Already up to date.
venv "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.10-latest-61-g65f9c7d4
Commit hash: 65f9c7d442c0e1e1dafaee9da1df587a48b742d0
Launching Web UI with arguments: --xformers
Total VRAM 3072 MB, total RAM 16336 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
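[note] Forge drops to lowvram mode automatically here because the GTX 1060 3GB sits under the 4GB threshold the message mentions. To override that (at the risk of out-of-memory errors on a 3GB card), the flag from the message itself goes into the launch arguments. A minimal sketch, assuming the stock webui-user.bat launcher Forge ships with (the file itself is not shown in this log):

set COMMANDLINE_ARGS=--xformers --always-normal-vram

Everything in COMMANDLINE_ARGS is forwarded to the launcher, which is how --xformers got into the "Launching Web UI with arguments" line above.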
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
xformers version: 0.0.23.post1
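[note] The Triton warning is emitted by xformers itself: Triton wheels were not published for Windows at the time of this log, so xformers disables its Triton-backed kernels, but memory-efficient attention still works (the "Using xformers cross attention" line below confirms it is active). A quick sanity check from the same venv, assuming the paths from this log:

J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge>venv\Scripts\python.exe -c "import xformers, xformers.ops; print(xformers.__version__)"

If that prints the version (0.0.23.post1, per the line above) without raising, the warning is cosmetic and safe to ignore.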
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1060 3GB : native
VAE dtype: torch.float32
Using xformers cross attention
ControlNet preprocessor location: J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\models\ControlNetPreprocessor
Loading weights [67ab2fd8ec] from J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\models\Stable-diffusion\v6.safetensors
2024-02-07 03:53:26,393 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Startup time: 26.2s (initial startup: 0.1s, prepare environment: 8.3s, import torch: 7.8s, import gradio: 2.1s, setup paths: 1.2s, initialize shared: 0.2s, other imports: 1.9s, load scripts: 2.5s, create ui: 1.3s, gradio launch: 0.9s).
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Loading VAE weights specified in settings: J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
To load target model SDXLClipModel
Begin to load 1 model
Model loaded in 108.1s (load weights from disk: 1.9s, forge load real models: 92.2s, forge set components: 0.1s, forge finalize: 4.1s, load VAE: 1.7s, load textual inversion embeddings: 0.2s, calculate empty prompt: 8.0s).
To load target model SDXL
Begin to load 1 model
loading in lowvram mode 788.1377696990967
Moving model(s) has taken 0.38 seconds
  0%|          | 0/20 [00:09<?, ?it/s]
*** Error completing request
*** Arguments: ('task(px7biigjrakuab1)', <gradio.routes.Request object at 0x000001F8FCD1B9D0>, 'TEST', 'NEGPROMPT TEST', [], 20, 'Euler a', 50, 1, 6.5, 1024, 704, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\txt2img.py", line 110, in txt2img
    processed = processing.process_images(p)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 749, in process_images
    res = process_images_inner(p)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 920, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 260, in launch_sampling
    return func()
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 149, in sample_euler_ancestral
    d = to_d(x, sigmas[i], denoised)
  File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 48, in to_d
    return (x - denoised) / utils.append_dims(sigma, x.ndim)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
---
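[note] The RuntimeError at the bottom is a device mismatch inside k-diffusion's to_d(): by the time Euler a computes (x - denoised) / utils.append_dims(sigma, x.ndim), one of those tensors is on cuda:0 and another on the CPU, which is where lowvram offloading can leave things. Below is a minimal sketch of what the exception means and the generic fix. It is standalone PyTorch standing in for the sampler's tensors, not Forge's actual code path; append_dims here is a stand-in modeled on k_diffusion.utils.append_dims, and the shape matches this run's 1024x704 latent. It assumes a CUDA device is present.

import torch

assert torch.cuda.is_available()

def append_dims(t, target_dims):
    # Right-pad with singleton dims, in the spirit of k_diffusion.utils.append_dims.
    return t[(...,) + (None,) * (target_dims - t.ndim)]

x = torch.randn(1, 4, 88, 128, device="cuda:0")  # latent on the GPU (704/8 x 1024/8)
denoised = torch.randn_like(x)                   # model output, also on cuda:0
sigma = torch.tensor([14.6])                     # noise level stranded on the CPU

try:
    d = (x - denoised) / append_dims(sigma, x.ndim)  # reproduces the traceback's error
except RuntimeError as e:
    print(e)  # "Expected all tensors to be on the same device ..."

d = (x - denoised) / append_dims(sigma.to(x.device), x.ndim)  # generic fix: one device

In this log the mismatch almost certainly comes from the lowvram offloading path, so the practical user-side options are the launch flags noted above or updating Forge; the sketch only shows what the exception itself is complaining about.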