- Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
- Version: f0.0.16v1.8.0rc-latest-238-g437c3489
- Commit hash: 437c348926c9ee1bfe1f147529f164bb93f731a1
- Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
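The warning above asks for a manual install of insightface. A minimal sketch of running that install against the same interpreter that launched the webui (the exact venv layout is not shown in this log, so treat the invocation as an assumption):

```python
# Hedged sketch: run the suggested `pip install insightface` with the same
# Python interpreter that is executing the webui. Assumes network access and
# a working build toolchain; no paths here are taken from this log.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "insightface"])
```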
- Launching Web UI with arguments: --listen --enable-insecure-extension-access --theme dark
- Total VRAM 8176 MB, total RAM 15834 MB
- Set vram state to: NORMAL_VRAM
- Device: cuda:0 AMD Radeon RX 6600M : native
- VAE dtype: torch.float32
- CUDA Stream Activated: False
- Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
- ControlNet preprocessor location: /home/user/stable-diffusion-webui-forge/models/ControlNetPreprocessor
- Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
- [-] ADetailer initialized. version: 24.1.2, num models: 9
- sd-webui-prompt-all-in-one background API service started successfully.
- Loading weights [821aa5537f] from /home/user/stable-diffusion-webui-forge/models/Stable-diffusion/autismmixSDXL_autismmixPony.safetensors
- 2024-02-25 19:59:18,853 - ControlNet - INFO - ControlNet UI callback registered.
- Running on local URL: http://0.0.0.0:7860
- model_type EPS
- UNet ADM Dimension 2816
- To create a public link, set `share=True` in `launch()`.
- Startup time: 16.9s (prepare environment: 3.1s, import torch: 3.7s, import gradio: 0.8s, setup paths: 1.0s, other imports: 0.6s, load scripts: 2.9s, create ui: 0.8s, gradio launch: 2.5s, app_started_callback: 1.4s).
- Using split attention in VAE
- Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
- Using split attention in VAE
- extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
- To load target model SDXLClipModel
- Begin to load 1 model
- [Memory Management] Current Free GPU Memory (MB) = 7995.99609375
- [Memory Management] Model Memory (MB) = 2144.3546981811523
- [Memory Management] Minimal Inference Memory (MB) = 1024.0
- [Memory Management] Estimated Remaining GPU Memory (MB) = 4827.641395568848
- Moving model(s) has taken 0.54 seconds
- Model loaded in 9.7s (load weights from disk: 1.2s, forge load real models: 6.5s, calculate empty prompt: 2.0s).
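Each [Memory Management] block in this log reports the same three inputs, and the "Estimated Remaining" value is simply their difference. A small arithmetic sketch reconstructed from the logged numbers (not taken from Forge's source):

```python
# Estimated remaining VRAM = current free VRAM - model size - minimal inference reserve.
def estimated_remaining_mb(free_mb: float, model_mb: float, inference_mb: float = 1024.0) -> float:
    return free_mb - model_mb - inference_mb

# Reproduces the value logged later for the SDXL UNet load:
# 7879.219 - 4897.086 - 1024.0 ≈ 1958.133 MB
print(estimated_remaining_mb(7879.21923828125, 4897.086494445801))
```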
- NeverOOM Enabled for VAE (always tiled)
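"NeverOOM ... always tiled" means the VAE decode is forced to run tile by tile, so the decoder never has to hold the full-resolution activations in VRAM at once. A heavily simplified sketch of that idea follows; it is not Forge's implementation (real tiled decoders overlap and blend tiles to hide seams), and `decode_fn` is a hypothetical stand-in for the VAE's decode call:

```python
# Illustrative only: split the latent into tiles, decode each tile separately
# to keep peak memory bounded, then stitch the decoded tiles back together.
import torch

def tiled_decode(latent: torch.Tensor, decode_fn, tile: int = 64) -> torch.Tensor:
    _, _, h, w = latent.shape
    rows = []
    for y in range(0, h, tile):
        cols = [decode_fn(latent[:, :, y:y + tile, x:x + tile]) for x in range(0, w, tile)]
        rows.append(torch.cat(cols, dim=-1))  # stitch a row of tiles along width
    return torch.cat(rows, dim=-2)            # stack the rows along height
```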
- To load target model SDXLClipModel
- Begin to load 1 model
- Reuse 1 loaded models
- [Memory Management] Current Free GPU Memory (MB) = 6122.0380859375
- [Memory Management] Model Memory (MB) = 0.0
- [Memory Management] Minimal Inference Memory (MB) = 1024.0
- [Memory Management] Estimated Remaining GPU Memory (MB) = 5098.0380859375
- Moving model(s) has taken 0.01 seconds
- To load target model SDXL
- Begin to load 1 model
- [Memory Management] Current Free GPU Memory (MB) = 7879.21923828125
- [Memory Management] Model Memory (MB) = 4897.086494445801
- [Memory Management] Minimal Inference Memory (MB) = 1024.0
- [Memory Management] Estimated Remaining GPU Memory (MB) = 1958.1327438354492
- Moving model(s) has taken 2.65 seconds
- 100%|███████████████████████████████████████████| 25/25 [01:07<00:00, 2.72s/it]
- To load target model AutoencoderKL
- Begin to load 1 model
- [Memory Management] Current Free GPU Memory (MB) = 2873.66943359375
- [Memory Management] Model Memory (MB) = 319.11416244506836
- [Memory Management] Minimal Inference Memory (MB) = 1024.0
- [Memory Management] Estimated Remaining GPU Memory (MB) = 1530.5552711486816
- Moving model(s) has taken 0.30 seconds
- VAE tiled decode: 100%|█████████████████████████| 48/48 [00:08<00:00, 5.78it/s]
- Total progress: 100%|███████████████████████████| 25/25 [01:13<00:00, 2.94s/it]
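For this first image, the per-iteration times above roughly account for the wall-clock total: sampling dominates at about 25 × 2.72 ≈ 68 s, and the 48-tile VAE decode adds roughly 8 s more. A back-of-the-envelope check (illustrative arithmetic only, not webui code):

```python
steps, s_per_it = 25, 2.72        # sampling: 25 steps at ~2.72 s/it
tiles, tiles_per_s = 48, 5.78     # VAE tiled decode: 48 tiles at ~5.78 it/s

sampling_s = steps * s_per_it     # ≈ 68 s
decode_s = tiles / tiles_per_s    # ≈ 8.3 s
print(round(sampling_s, 1), round(decode_s, 1))
```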
- NeverOOM Enabled for VAE (always tiled)
- Memory cleanup has taken 0.27 seconds
- 100%|███████████████████████████████████████████| 25/25 [01:05<00:00, 2.62s/it]
- To load target model AutoencoderKL
- Begin to load 1 model
- [Memory Management] Current Free GPU Memory (MB) = 2851.3671875
- [Memory Management] Model Memory (MB) = 319.11416244506836
- [Memory Management] Minimal Inference Memory (MB) = 1024.0
- [Memory Management] Estimated Remaining GPU Memory (MB) = 1508.2530250549316
- Moving model(s) has taken 0.06 seconds
- VAE tiled decode: 100%|█████████████████████████| 48/48 [00:08<00:00, 5.91it/s]
- Total progress: 100%|███████████████████████████| 25/25 [01:11<00:00, 2.86s/it]
- NeverOOM Enabled for VAE (always tiled)
- Memory cleanup has taken 0.13 seconds
- 100%|███████████████████████████████████████████| 25/25 [01:05<00:00, 2.60s/it]
- To load target model AutoencoderKL
- Begin to load 1 model
- [Memory Management] Current Free GPU Memory (MB) = 2851.06494140625
- [Memory Management] Model Memory (MB) = 319.11416244506836
- [Memory Management] Minimal Inference Memory (MB) = 1024.0
- [Memory Management] Estimated Remaining GPU Memory (MB) = 1507.9507789611816
- Moving model(s) has taken 0.07 seconds
- VAE tiled decode: 100%|█████████████████████████| 48/48 [00:08<00:00, 5.95it/s]
- Total progress: 100%|███████████████████████████| 25/25 [01:11<00:00, 2.84s/it]
- NeverOOM Enabled for VAE (always tiled)
- Memory cleanup has taken 0.12 seconds
- 44%|██████████████████▉ | 11/25 [00:28<00:36, 2.60s/it]
- Memory access fault by GPU node-1 (Agent handle: 0x55991a16a710) on address 0x7f9644e39000. Reason: Page not present or supervisor privilege.
- launch.sh: line 9: 10187 Aborted (core dumped) python3 launch.py --listen --enable-insecure-extension-access --theme dark
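The run ends with a ROCm-side memory access fault partway through sampling, which aborts the Python process started by launch.sh. On RX 6600-class RDNA2 GPUs, a commonly tried workaround, not confirmed by anything in this log, is to pin the ROCm target via HSA_OVERRIDE_GFX_VERSION before relaunching with the same arguments. The snippet below is only a hedged sketch of that, assuming it is run from the webui directory:

```python
# Hypothetical relaunch helper, NOT taken from this log: sets the gfx override
# commonly used for gfx103x GPUs and re-runs launch.py with the same flags
# shown in the aborted command line above.
import os
import subprocess

env = dict(os.environ, HSA_OVERRIDE_GFX_VERSION="10.3.0")  # assumption: RDNA2 workaround
subprocess.run(
    ["python3", "launch.py", "--listen", "--enable-insecure-extension-access", "--theme", "dark"],
    env=env,
    check=True,
)
```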