- * Install WSL2 under Windows 11 22H2, then install Ubuntu
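- * (A minimal sketch if WSL2 isn't installed yet: run this from an elevated PowerShell window on the Windows side; the distro name may vary on your system)
- wsl --install -d Ubuntu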
- * Update ubuntu
- sudo apt update
- sudo apt upgrade
- * Close and reopen terminal window
- * Make a place for downloads
- cd ~
- mkdir Downloads
- cd Downloads
- * Download & Install Anaconda
- wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
- chmod +x ./Anaconda3-2022.05-Linux-x86_64.sh
- ./Anaconda3-2022.05-Linux-x86_64.sh
- * Accept license and defaults until
- Do you wish the installer to initialize Anaconda3
- by running conda init? [yes|no]
- [no] >>> yes
- * close and reopen terminal window
- * Update anaconda
- conda update -n base -c defaults conda
- * Nvidia CUDA toolkit for Ubuntu WSL
- cd ~/Downloads
- wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
- sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
- wget https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda-repo-wsl-ubuntu-11-6-local_11.6.2-1_amd64.deb
- sudo dpkg -i cuda-repo-wsl-ubuntu-11-6-local_11.6.2-1_amd64.deb
- sudo apt-key add /var/cuda-repo-wsl-ubuntu-11-6-local/7fa2af80.pub
- sudo apt-get update
- sudo apt-get -y install cuda
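- * Optional sanity check that WSL can see the GPU (nvidia-smi comes from the Windows driver and is exposed inside WSL)
- nvidia-smi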
- * New virtual Python environment
- conda create --name diffusers python=3.10
- conda activate diffusers
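- * Optional: confirm the right interpreter is active (should report Python 3.10.x)
- python --version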
- * Make a directory for all your github downloads, then download diffusers
- mkdir ~/github
- cd ~/github
- git clone https://github.com/ShivamShrirao/diffusers/
- cd diffusers
- * Install required packages
- pip install .
- cd examples/dreambooth
- pip install -r requirements.txt
- pip install torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
- pip install diffusers
- pip install deepspeed
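- * Optional sanity check that PyTorch can see the GPU before going any further (should print True)
- python -c "import torch; print(torch.cuda.is_available())"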
- * Configure / login
- accelerate config
- * Use following options
- In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
- Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU [4] MPS): 0
- Do you want to run your training on CPU only (even if a GPU is available)? [yes/NO]:
- Do you want to use DeepSpeed? [yes/NO]: yes
- Do you want to specify a json file to a DeepSpeed config? [yes/NO]:
- What should be your DeepSpeed's ZeRO optimization stage (0, 1, 2, 3)? [2]:
- Where to offload optimizer states? [none/cpu/nvme]: cpu
- Where to offload parameters? [none/cpu/nvme]: cpu
- How many gradient accumulation steps you're passing in your script? [1]:
- Do you want to use gradient clipping? [yes/NO]:
- Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]:
- How many GPU(s) should be used for distributed training? [1]:
- Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: fp16
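- * accelerate saves these answers to a config file; to review or change them later, rerun accelerate config or edit the file directly (default location shown below)
- cat ~/.cache/huggingface/accelerate/default_config.yaml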
- huggingface-cli login
- * And paste an access token (create one at https://huggingface.co/settings/tokens)
- mkdir -p training
- mkdir -p classes
- mkdir -p model
- * Enter this so you won't have to restart the computer (it puts the WSL GPU libraries on the library path for this session)
- export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
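- * Optional: append the same export to ~/.bashrc so future terminal sessions pick it up automatically
- echo 'export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH' >> ~/.bashrc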
- explorer.exe .
- * Copy your class images (of the class, e.g. "person") to the classes folder. Generate them with whatever SD GUI you're using, with the class (e.g. "person") as the prompt, or let the training script generate them
- * Pre-generate as many images as you'll pass in --num_class_images (200?) if you run out of VRAM, or if you want the script to run faster
- * Copy your training images (photos of your subject, e.g. you) to the training folder
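- * Optional: training runs at --resolution=512, so you can pre-crop the training images to 512x512 with ImageMagick (assumes .jpg files; adjust the glob to match your filenames)
- sudo apt install imagemagick
- mogrify -resize 512x512^ -gravity center -extent 512x512 training/*.jpg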
- * Copy your training file from your Windows desktop (or wherever you saved it):
- cp /mnt/c/Users/YOUR_USERNAME/Desktop/my_training.sh ./
- chmod +x ./my_training.sh
- ./my_training.sh
- * Example contents of my_training.sh
- export MODEL_NAME="CompVis/stable-diffusion-v1-4"
- export INSTANCE_DIR="training"
- export CLASS_DIR="classes"
- export OUTPUT_DIR="model"
- accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks person" \
- --class_prompt="a photo of person" \
- --seed=1337 \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 --gradient_checkpointing \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --sample_batch_size=1 \
- --max_train_steps=1000 \
- --mixed_precision=fp16
- * Convert to .ckpt
- python -m pip install pytorch-lightning
- explorer.exe .
- *** IMPORTANT: put the original sd-v1-4.ckpt in the dreambooth folder (the converter uses it as the base checkpoint)
- * Then download and run a separate conversion script
- wget https://raw.githubusercontent.com/ratwithacompiler/diffusers_stablediff_conversion/main/convert_diffusers_to_sd.py
- python ./convert_diffusers_to_sd.py ./model ./sd-v1-4.ckpt ./model.ckpt
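- * Sanity check before converting: the converter expects a diffusers-format pipeline in ./model, which training writes when it finishes
- ls ./model
- * This should list model_index.json plus subfolders such as unet, vae, text_encoder, tokenizer and scheduler; if model_index.json is missing, training didn't run to completion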
- * NB: If you get an error like this -
- 403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/model_index.json
- Then:
- Make sure you've visited https://huggingface.co/CompVis/stable-diffusion-v1-4 and accepted the license!
- NB: If this is the very first time you've used diffusers, the base model is about a 6GB download
- To train again next time, simply:
- * Start Ubuntu, then:
- conda activate diffusers
- cd ~/github/diffusers/examples/dreambooth
- rm -r model
- ./my_training.sh
Comments
-
- I followed all the steps and had no errors until the last step (line 148), where I get:
- (diffusers) napalm@Windows10-250GB:~/github/diffusers/examples/dreambooth$ python ./convert_diffusers_to_sd.py ./model ./sd-v1-4.ckpt ./model.ckpt
- [!] Not using xformers memory efficient attention.
- loading diff model from './model'
- Traceback (most recent call last):
- File "/home/napalm/github/diffusers/examples/dreambooth/./convert_diffusers_to_sd.py", line 765, in <module>
- setup()
- File "/home/napalm/github/diffusers/examples/dreambooth/./convert_diffusers_to_sd.py", line 761, in setup
- convert_diff_to_sd(args.diffusers_model, args.base_ckpt_path, args.output_ckpt_path,
- File "/home/napalm/github/diffusers/examples/dreambooth/./convert_diffusers_to_sd.py", line 723, in convert_diff_to_sd
- diff_pipe = StableDiffusionPipeline.from_pretrained(diffusers_model_path,
- File "/home/napalm/anaconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/pipeline_utils.py", line 479, in from_pretrained
- config_dict = cls.load_config(cached_folder)
- File "/home/napalm/anaconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 312, in load_config
- raise EnvironmentError(
- OSError: Error no file named model_index.json found in directory ./model.
- (diffusers) napalm@Windows10-250GB:~/github/diffusers/examples/dreambooth$
-
- I was with you until "Generate with whatever SD gui you're using with the class i.e. person as the prompt, or let the script generate"
- Any examples of what that means?
-
- To better be able to differentiate between your training subject and some random object of the same class (let's assume it's "person"), you need to provide some example images that are clearly a person, but not your subject. This way you can avoid overtraining (for the most part, I think). What this line suggests is to pre-generate the files so that you don't have to generate new ones each time.