just1morething

Ollama-OpenWebUI-ComfyUI-Dockge-Bootstrap

Nov 20th, 2024
# Install Ollama service. https://ollama.com
$ sudo pacman -Syu
$ sudo pacman -S ollama
# The package ships a systemd unit; start it so the API is available.
$ sudo systemctl enable --now ollama

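With the service running, you can sanity-check Ollama over HTTP. The helper below is my own sketch, not part of Ollama; it assumes the default port 11434 and borrows the `OLLAMA_HOST` variable name the ollama CLI uses.

```shell
# Hypothetical helper: build an Ollama API URL. 11434 is the default port;
# a host:port value in OLLAMA_HOST overrides it.
ollama_api() {
  echo "http://${OLLAMA_HOST:-localhost:11434}/api/$1"
}

# List the models you have pulled (GET /api/tags):
# curl -s "$(ollama_api tags)"
```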
# Install docker and dockge. https://github.com/louislam/dockge
$ sudo pacman -S docker docker-compose   # docker-compose provides the `docker compose` plugin
$ sudo systemctl enable --now docker
$ sudo mkdir -p /opt/stacks /opt/dockge
$ cd /opt/dockge
$ sudo curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml --output compose.yaml
$ sudo docker compose up -d   # or add your user to the docker group to skip sudo
# Dockge is now running on http://localhost:5001

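For reference, the compose.yaml fetched above looks roughly like this at the time of writing; always prefer the upstream file, since it may change.

```yaml
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - 5001:5001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Dockge manage containers
      - ./data:/app/data
      - /opt/stacks:/opt/stacks                     # where your stacks live
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
```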
# Spin up Open-WebUI
# In Dockge there is a box where you can enter a `docker run` command and convert it to compose; paste one of the commands below into the (+ Compose) area to spin things up pretty quickly. There are three options for Open-WebUI here; more are on the git. https://github.com/open-webui/open-webui
# If Ollama is on your computer, use this command:
$ docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
# If Ollama is on a different server, use this command:
$ docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
# To run Open WebUI with Nvidia GPU support, use this command:
$ docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

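The three run commands above differ only in a few flags. As a sketch, a small helper (hypothetical, not part of Open WebUI) makes the choice explicit:

```shell
# Hypothetical helper: print the right `docker run` command for each setup.
# Modes: local (Ollama on this host), remote (pass its base URL), gpu (Nvidia).
openwebui_cmd() {
  base="docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always"
  case "$1" in
    local)  echo "$base --add-host=host.docker.internal:host-gateway ghcr.io/open-webui/open-webui:main" ;;
    remote) echo "$base -e OLLAMA_BASE_URL=$2 ghcr.io/open-webui/open-webui:main" ;;
    gpu)    echo "$base --gpus all --add-host=host.docker.internal:host-gateway ghcr.io/open-webui/open-webui:cuda" ;;
    *)      echo "usage: openwebui_cmd local|remote <url>|gpu" >&2; return 1 ;;
  esac
}

# Example: generate the command for a remote Ollama server:
# openwebui_cmd remote https://example.com
```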
# For image generation, install ComfyUI. https://github.com/comfyanonymous/ComfyUI
# Create a venv for ComfyUI
$ python3 -m venv --system-site-packages ~/venv/ComfyUI
$ source ~/venv/ComfyUI/bin/activate
# NVIDIA: install stable pytorch
$ pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124
# Clone the git
$ cd ~/venv/ComfyUI/
$ git clone https://github.com/comfyanonymous/ComfyUI.git
$ cd ComfyUI
$ pip install -r requirements.txt
# Install ComfyUI Manager. https://github.com/ltdrdata/ComfyUI-Manager
$ cd custom_nodes
$ git clone https://github.com/ltdrdata/ComfyUI-Manager.git
# Restart ComfyUI if you already had it open. Launch ComfyUI, now with the manager installed.
$ python main.py

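By default ComfyUI only listens on 127.0.0.1:8188, so a remote Open-WebUI can't reach it. main.py accepts `--listen` and `--port` flags; the tiny helper below is my own sketch that just assembles them from (hypothetical) environment variables.

```shell
# Hypothetical helper: assemble ComfyUI's --listen/--port flags.
# 8188 is ComfyUI's default port; --listen 0.0.0.0 exposes it on the LAN,
# which you need if Open-WebUI runs on another machine.
comfyui_args() {
  echo "--listen ${COMFYUI_BIND:-0.0.0.0} --port ${COMFYUI_PORT:-8188}"
}

# Usage:
# python main.py $(comfyui_args)
```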
# Configure Open-WebUI for Ollama integration.
# Settings -> Admin Settings -> Connections -> http://host.docker.internal:11434 (11434 is Ollama's default port)
# Here's the docker compose.yaml for my open-webui; my ComfyUI is running off-machine on 192.168.1.3
version: "3"
services:
  open-webui2:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - 3031:8080
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
    extra_hosts:
      - host.docker.internal:host-gateway
    volumes:
      - ./open-webui2:/app/backend/data
    container_name: open-webui2
    restart: unless-stopped
    environment:
      - COMFYUI_BASE_URL=http://192.168.x.x:8188/
      - ENABLE_IMAGE_GENERATION=true
networks: {}

# Configuring Open-WebUI for ComfyUI integration is a little less straightforward. You have to save a workflow in API format, then upload it under Open-WebUI -> Settings -> Admin Settings -> Images. Then, for each of the listed elements, you have to configure which node number is associated with which element. Google or ChatGPT can walk you through this part.
# I would suggest turning on the ability to let the LLM search the net: click user -> Settings -> Admin Settings -> Web Search.
# Ollama.com's blog is a good place to learn about new models. You can manually install them with $ ollama run modelname:parameters, or use the Open-WebUI model manager: click user -> Settings -> Admin Settings -> Models.
# Check out comfyworkflows.com and civitai.com for models and workflow ideas. huggingface.co is a good place to search for models once you learn about them, but I find browsing HF.co tedious without knowing which model to look for.
# With ComfyUI Manager, you can install models via the ComfyUI interface -> Settings -> ComfyUI Manager -> Install via git. Huggingface.co has a git link on all of their models. Some models require you to be logged in, so I use 'hfd-git' and sometimes 'python-huggingface-hub' to get my system to retain a login cookie and allow ComfyUI Manager to install the model. You'll also want to install 'git-lfs' to properly download larger models.
# My ComfyUI folder is now 438G with models, input images, output images, and other nodes/model types. A 1024x1024 image using the Flux model takes approx. 25 seconds. An entire workflow generation for image2image -> generate image -> use as input for Flux model -> post-process takes approx. 100-291 seconds.
