- # Install Ollama service. https://ollama.com
- $ sudo pacman -Syuu
- $ sudo pacman -S ollama
- $ sudo systemctl enable --now ollama
- # Install Docker and Dockge. https://github.com/louislam/dockge
- $ sudo pacman -S docker docker-compose
- $ sudo systemctl enable --now docker
- $ sudo mkdir -p /opt/stacks /opt/dockge
- $ cd /opt/dockge
- $ sudo curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml --output compose.yaml
- $ sudo docker compose up -d
- # Dockge is now running on http://localhost:5001
- # Spin up Open-WebUI
- # In Dockge, there is a spot to enter a docker run command and convert it to compose. You can paste that into the (+ Compose) area to spin things up quickly. There are three options for Open-WebUI here; more are on the Git repo. https://github.com/open-webui/open-webui
- # If Ollama is on your computer, use this command:
- $ docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- # If Ollama is on a different server, use this command:
- $ docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- # To run Open WebUI with Nvidia GPU support, use this command:
- $ docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
- # For image generation, install ComfyUI. https://github.com/comfyanonymous/ComfyUI
- # Create a venv for ComfyUI
- $ python3 -m venv --system-site-packages ~/venv/ComfyUI
- $ source ~/venv/ComfyUI/bin/activate
- # For NVIDIA GPUs, install stable PyTorch
- $ pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124
- $ python -c 'import torch; print(torch.cuda.is_available())'  # should print True
- # Clone the repo
- $ cd ~/venv/ComfyUI/
- $ git clone https://github.com/comfyanonymous/ComfyUI.git
- $ cd ComfyUI
- $ pip install -r requirements.txt
- # Install ComfyUI manager https://github.com/ltdrdata/ComfyUI-Manager
- $ cd custom_nodes
- $ git clone https://github.com/ltdrdata/ComfyUI-Manager.git
- # Restart ComfyUI if you had it open already. Launch ComfyUI, now with the manager installed.
- $ python main.py
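ComfyUI also exposes an HTTP API on port 8188, which is what Open-WebUI talks to later on. As a minimal sketch (not a full client), this is the shape of the body it expects at POST /prompt; the workflow graph below is a placeholder node for illustration, not a runnable graph:

```python
import json

# Sketch of the request body ComfyUI expects at POST http://localhost:8188/prompt.
# "prompt" is a workflow in API format (keyed by node number); the single node
# below is a placeholder, not a complete graph.
def build_prompt_body(workflow, client_id="my-client"):
    return json.dumps({"prompt": workflow, "client_id": client_id})

body = build_prompt_body({"3": {"class_type": "KSampler", "inputs": {}}})
```

This is the same endpoint Open-WebUI drives once the integration below is configured, so it's a handy way to sanity-check ComfyUI independently.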
- # Configure Open-WebUI for Ollama integration.
- # Settings -> Admin Settings -> Connections -> http://host.docker.internal:11434 (11434 is Ollama's default port)
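If Open-WebUI can't see Ollama, it's usually either the URL or the service not running. A small hedged check you can run from the host (stdlib only; the base URL is whatever you entered above, and Ollama answers a plain HTTP 200 on its root path):

```python
from urllib.request import urlopen
from urllib.error import URLError

# Quick reachability check for the Ollama endpoint entered in
# Settings -> Admin Settings -> Connections. Returns False instead of
# raising if nothing is listening (e.g. the service isn't started yet).
def ollama_reachable(base_url="http://localhost:11434", timeout=3):
    try:
        with urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200  # Ollama's root path replies "Ollama is running"
    except (URLError, OSError):
        return False
```

If this returns False, check `systemctl status ollama` before blaming the container networking.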
- # Here's the docker compose.yaml for my Open-WebUI; my ComfyUI is running off-machine on 192.168.1.3
version: "3"
services:
  open-webui2:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - 3031:8080
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
    extra_hosts:
      - host.docker.internal:host-gateway
    volumes:
      - ./open-webui2:/app/backend/data
    container_name: open-webui2
    restart: unless-stopped
    environment:
      - COMFYUI_BASE_URL=http://192.168.x.x:8188/
      - ENABLE_IMAGE_GENERATION=true
networks: {}
- # Configuring Open-WebUI for ComfyUI integration is a little less straightforward. You have to save a workflow in API format, then upload that under Open-WebUI -> Settings -> Admin Settings -> Images. Then, for each of the listed elements, you have to configure which node number is associated with which element. Google or ChatGPT can walk you through this part.
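To make the node-number mapping concrete, here is a sketch of what an API-format workflow export looks like: the top-level keys are the node numbers Open-WebUI asks about in the Images settings. The node numbers, class types, and inputs below are invented for illustration, not taken from a real export:

```python
# Illustrative fragment of a ComfyUI API-format workflow. The top-level keys
# ("6", "3") are the node numbers you map to elements in Open-WebUI's Images
# settings; the class types and inputs here are examples only.
workflow = {
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": "a photo of a cat"}},
    "3": {"class_type": "KSampler", "inputs": {"seed": 42, "steps": 20}},
}

# If you tell Open-WebUI the prompt node is "6", it rewrites that node's text
# before submitting the workflow to ComfyUI:
workflow["6"]["inputs"]["text"] = "prompt generated by the LLM"
```

Opening your own exported JSON and noting which number holds the prompt text, seed, width, and height is all the "configuration" really amounts to.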
- # I would suggest turning on the ability to let the LLM search the net: click user -> Settings -> Admin Settings -> Web Search.
- # Ollama.com's blog is a good place to learn about new models. You can install them manually with $ ollama run modelname:parameters, or use the Open-WebUI model manager: click user -> Settings -> Admin Settings -> Models.
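Under the hood, both `ollama run` and Open-WebUI end up talking to Ollama's REST API. A minimal sketch of the JSON body for a one-shot generation request to POST /api/generate (the model name is just an example; use whatever you've pulled):

```python
import json

# Body for POST http://localhost:11434/api/generate. "stream": False requests
# a single JSON response instead of a stream of chunks. The model name is an
# example placeholder.
def build_generate_request(model, prompt, stream=False):
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

body = build_generate_request("llama3.1:8b", "Why is the sky blue?")
```

Useful for scripting against models outside the web UI, e.g. with curl or a cron job.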
- # Check out comfyworkflows.com and civitai.com for models and workflow ideas. huggingface.co is a good place to search for models once you learn about them, but I find browsing HF.co tedious without knowing which model to look for.
- # With ComfyUI Manager, you can install models via the ComfyUI interface -> Settings -> ComfyUI Manager -> Install via git. Huggingface.co has a git link on all of their models. Some models require you to be logged in, so I use 'hfd-git' and sometimes 'python-huggingface-hub' to get my system to retain a login cookie and allow ComfyUI Manager to install the model. You'll also want to install 'git-lfs' to properly download larger models.
- # My ComfyUI folder is now 438G with models, input images, output images, and other nodes/model types. A 1024x1024 image using the Flux model takes approx 25 seconds. An entire workflow for image2image -> generate image -> use as input for Flux model -> post-process takes approx 100-291 seconds.