- == https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth ==
- * Make a place for downloads
- mkdir Downloads
- cd Downloads
- * Download & Install Anaconda
- wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
- chmod +x ./Anaconda3-2022.05-Linux-x86_64.sh
- ./Anaconda3-2022.05-Linux-x86_64.sh
- * Nvidia CUDA toolkit for Ubuntu WSL - https://developer.nvidia.com/cuda-downloads
- wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
- sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
- wget https://developer.download.nvidia.com/compute/cuda/11.7.1/local_installers/cuda-repo-wsl-ubuntu-11-7-local_11.7.1-1_amd64.deb
- sudo dpkg -i cuda-repo-wsl-ubuntu-11-7-local_11.7.1-1_amd64.deb
- sudo cp /var/cuda-repo-wsl-ubuntu-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/
- sudo apt-get update
- sudo apt-get -y install cuda
- * Start a new shell so Anaconda is on your PATH, then install g++
- bash
- sudo apt install build-essential
- * New virtual Python environment
- conda update -n base -c defaults conda
- conda create --name diffusers python=3.9
- conda activate diffusers
- * Make a directory for all your GitHub downloads, then download diffusers
- mkdir ~/github
- cd ~/github
- git clone https://github.com/ShivamShrirao/diffusers.git
- cd diffusers
- * Install required packages
- pip install .
- cd examples/dreambooth
- pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
- pip install -r requirements.txt
- pip install -U --pre triton
- pip install ninja bitsandbytes
- pip install git+https://github.com/facebookresearch/xformers@1d31a3a#egg=xformers
- * Configure / login
- accelerate config
- huggingface-cli login
- mkdir -p training
- mkdir -p classes
- explorer.exe .
- * Edit your training file and add this line to the top so WSL can find the CUDA libraries (or reboot):
- export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
- * Copy your training file from your Windows desktop (or wherever you saved it):
- cp /mnt/c/Users/YOUR_USERNAME/Desktop/my_training.sh ./
- chmod +x ./my_training.sh
- ./my_training.sh
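For reference, a minimal `my_training.sh` might look like the sketch below. The prompts, step counts, and token (`sks`) are illustrative assumptions, not values from this guide; the flags follow the ShivamShrirao `train_dreambooth.py` example linked at the top, so check the repo's README for your card's memory tier:

```shell
#!/bin/bash
# Make WSL's CUDA libraries visible (the export mentioned above)
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH

export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="training"   # the folder of your subject photos
export CLASS_DIR="classes"       # auto-generated regularization images
export OUTPUT_DIR="out"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks person" \
  --class_prompt="a photo of a person" \
  --resolution=512 \
  --train_batch_size=1 \
  --use_8bit_adam \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```

The `training` and `classes` paths match the directories created with mkdir above.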
- * NB: If you get an error like this -
- 403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/model_index.json
- Then:
- Make sure you've visited https://huggingface.co/CompVis/stable-diffusion-v1-4 and accepted the licence!
- NB. If this is the very first time you've used diffusers, the model is about a 6 GB download
- To train again next time, simply:
- * Start Ubuntu, then:
- conda activate diffusers
- cd ~/github/diffusers/examples/dreambooth
- ./my_training.sh
Comments
-
- If someone has problems with CUDA not being detected, just be sure to add "export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH" to the train file.
-
- I had to make some changes to get this to work for me:
- * Install required packages (updated for 11/02/2022)
- pip install .
- cd examples/dreambooth
- conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
- pip install -r requirements.txt
- pip install -U --pre triton
- pip install ninja bitsandbytes
- conda install xformers -c xformers/label/dev
- I changed the torch install to pin version 1.12.1 (the OG pastebin installs the latest 1.13, which did not work for me). I also changed xformers to grab the latest version.
- * Training file (running with SD 1.5)
- 1. Get your training script from here (https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth). I used the 8 GB one and followed the instructions for configuring DeepSpeed.
- 2. Make sure you save your training script with LF line endings, not CRLF.
- 3. Follow the instructions from this step to dump your 1.5 model (https://github.com/ShivamShrirao/diffusers/issues/50#issuecomment-1294854643).
- 4. Update the training script for the new model. It should look something like:
- export MODEL_DIR="/home/arrow/github/diffusers/models/dump"
- export VAE_DIR="/home/arrow/github/diffusers/models/dump/vae"
- export INSTANCE_DIR="training"
- export CLASS_DIR="classes"
- export OUTPUT_DIR="out"
- accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_DIR \
- --pretrained_vae_name_or_path=$VAE_DIR \
- ...
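The model "dump" in step 3 is done with diffusers' checkpoint-conversion script; a rough sketch, assuming you've downloaded the 1.5 checkpoint as `v1-5-pruned.ckpt` (the filename and output path here are illustrative, and exact flags can vary between diffusers versions):

```shell
# Convert the original SD 1.5 .ckpt into diffusers format
# (the script ships in the diffusers repo under scripts/)
python scripts/convert_original_stable_diffusion_to_diffusers.py \
  --checkpoint_path ./v1-5-pruned.ckpt \
  --dump_path ./models/dump
```

The resulting directory is what `MODEL_DIR` (and `MODEL_DIR/vae` for `VAE_DIR`) points to in the training script excerpt above.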