- * Install WSL with Ubuntu under Windows 11
- * Update Ubuntu
- sudo apt update
- sudo apt upgrade
- * Close and reopen terminal window
- * Make a place for downloads
- mkdir Downloads
- cd Downloads
- * Download & Install Anaconda
- wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
- chmod +x ./Anaconda3-2022.05-Linux-x86_64.sh
- ./Anaconda3-2022.05-Linux-x86_64.sh
- * Accept the license and defaults until you reach:
- Do you wish the installer to initialize Anaconda3
- by running conda init? [yes|no]
- [no] >>> yes
- * Close and reopen terminal window
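- * Optional sanity check: if conda init worked, this should print a version number
- conda --version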
- * Nvidia CUDA toolkit for Ubuntu WSL - https://developer.nvidia.com/cuda-downloads
- wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
- sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
- wget https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda-repo-wsl-ubuntu-11-3-local_11.3.1-1_amd64.deb
- sudo dpkg -i cuda-repo-wsl-ubuntu-11-3-local_11.3.1-1_amd64.deb
- sudo apt-key add /var/cuda-repo-wsl-ubuntu-11-3-local/7fa2af80.pub
- sudo apt-get update
- sudo apt-get -y install cuda
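- * Optional sanity check (assumes the toolkit landed in the default /usr/local/cuda path and the Windows NVIDIA driver is installed)
- /usr/local/cuda/bin/nvcc --version
- nvidia-smi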
- * Install g++
- sudo apt install build-essential
- * New virtual Python environment
- conda update -n base -c defaults conda
- conda create --name diffusers python=3.9
- conda activate diffusers
- * Make a directory for all your GitHub downloads, then download diffusers
- mkdir ~/github
- cd ~/github
- git clone https://github.com/Ttl/diffusers.git
- cd diffusers
- git checkout dreambooth_deepspeed
- git pull
- * Install required packages
- pip install .
- cd examples/dreambooth
- conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
- pip install -r requirements.txt
- pip install -U --pre triton
- pip install ninja bitsandbytes
- pip install git+https://github.com/facebookresearch/xformers@1d31a3a#egg=xformers
- pip install deepspeed
- pip install diffusers
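- * Optional: before configuring, confirm this environment's PyTorch can actually see the GPU (should print True)
- python -c "import torch; print(torch.cuda.is_available())"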
- * Configure / login
- accelerate config
- * Use the following options
- In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
- Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU [4] MPS): 0
- Do you want to run your training on CPU only (even if a GPU is available)? [yes/NO]:
- Do you want to use DeepSpeed? [yes/NO]: yes
- Do you want to specify a json file to a DeepSpeed config? [yes/NO]:
- What should be your DeepSpeed's ZeRO optimization stage (0, 1, 2, 3)? [2]:
- Where to offload optimizer states? [none/cpu/nvme]: cpu
- Where to offload parameters? [none/cpu/nvme]: cpu
- How many gradient accumulation steps you're passing in your script? [1]:
- Do you want to use gradient clipping? [yes/NO]:
- Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]:
- How many GPU(s) should be used for distributed training? [1]:
- Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: fp16
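- * accelerate normally writes these answers to ~/.cache/huggingface/accelerate/default_config.yaml; you can review them with:
- cat ~/.cache/huggingface/accelerate/default_config.yaml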
- huggingface-cli login
- * Paste your Hugging Face token when prompted
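- * Optional: confirm the login took
- huggingface-cli whoami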
- mkdir -p training
- mkdir -p classes
- mkdir -p model
- export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
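- * The next command opens the current WSL folder in Windows Explorer so you can copy images over from Windows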
- explorer.exe .
- * Copy your class images (generic "person" images, for example) to the classes folder. Generate them with whatever SD GUI you're using,
- or with the sketch below. Generate at least as many as you pass in --num_class_images (200 here), or the training script will try to generate the missing ones itself and may run out of VRAM
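- * If you'd rather generate the class images from this environment instead of a GUI, here's a rough sketch
- * (assumes the fp16 weights fit in your VRAM; pipeline API details may differ slightly between diffusers versions)
- python - <<'EOF'
- import torch
- from diffusers import StableDiffusionPipeline
- # Match --class_prompt and --num_class_images from my_training.sh below
- pipe = StableDiffusionPipeline.from_pretrained(
-     "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_auth_token=True
- ).to("cuda")
- for i in range(200):
-     pipe("a photo of person").images[0].save(f"classes/{i:04d}.png")
- EOF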
- * Copy your training images (photos of you?) to the training folder
- * Copy your training file from your Windows desktop (or wherever you saved it):
- cp /mnt/c/Users/YOUR_USERNAME/Desktop/my_training.sh ./
- chmod +x ./my_training.sh
- ./my_training.sh
- * Example contents of my_training.sh
- export MODEL_NAME="CompVis/stable-diffusion-v1-4"
- export INSTANCE_DIR="training"
- export CLASS_DIR="classes"
- export OUTPUT_DIR="model"
- accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks person" \
- --class_prompt="a photo of person" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 --gradient_checkpointing \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800 \
- --mixed_precision=fp16
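- * While training runs, you can watch VRAM and GPU usage from a second Ubuntu terminal
- watch -n 5 nvidia-smi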
- * Convert to .ckpt
- wget https://gist.githubusercontent.com/jachiam/8a5c0b607e38fcc585168b90c686eb05/raw/cd2f93b1a0487006b0eb4f430b00e89aa84228ae/convert_diffusers_to_sd.py
- python convert_diffusers_to_sd.py --model_path model --checkpoint_path model.ckpt
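- * Optional: copy the checkpoint back to Windows (reusing the example path from above)
- cp model.ckpt /mnt/c/Users/YOUR_USERNAME/Desktop/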
- * NB: If you get an error like this -
- 403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/model_index.json
- Then:
- Make sure you've visited https://huggingface.co/CompVis/stable-diffusion-v1-4 and accepted the licence!
- NB: If this is the very first time you've used diffusers, expect roughly a 6 GB model download
- To train again next time, simply:
- * Start Ubuntu, then:
- conda activate diffusers
- cd ~/github/diffusers/examples/dreambooth
- ./my_training.sh