* Install WSL with Ubuntu under Windows 11 (one way to do this is shown below)

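* A minimal sketch, assuming a stock Windows 11 machine: run the following from an elevated PowerShell or Command Prompt and reboot if prompted. Ubuntu is the default distribution, so the -d Ubuntu flag is optional.

wsl --install -d Ubuntu
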
* Update Ubuntu

sudo apt update
sudo apt upgrade

* Close and reopen the terminal window

* Make a place for downloads
mkdir Downloads
cd Downloads

* Download and install Anaconda

wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
chmod +x ./Anaconda3-2022.05-Linux-x86_64.sh
./Anaconda3-2022.05-Linux-x86_64.sh

* Accept the licence and the defaults until you reach this prompt, then answer yes:

Do you wish the installer to initialize Anaconda3
by running conda init? [yes|no]
[no] >>> yes

* Close and reopen the terminal window

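* Optional sanity check - if conda init worked, the new shell should find conda on its PATH:

conda --version
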

* NVIDIA CUDA toolkit for Ubuntu on WSL - https://developer.nvidia.com/cuda-downloads
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda-repo-wsl-ubuntu-11-3-local_11.3.1-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-11-3-local_11.3.1-1_amd64.deb
sudo apt-key add /var/cuda-repo-wsl-ubuntu-11-3-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda

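* Optional sanity check - the Windows NVIDIA driver exposes the GPU inside WSL, so nvidia-smi should already list your card, and the toolkit installs under /usr/local/cuda:

nvidia-smi
/usr/local/cuda/bin/nvcc --version
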
* Install g++ (build-essential provides the compiler toolchain)
sudo apt install build-essential

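* Optional - confirm the compiler is on the PATH:

g++ --version
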
* Create a new virtual Python environment
conda update -n base -c defaults conda
conda create --name diffusers python=3.9
conda activate diffusers

* Make a directory for your GitHub checkouts, then clone the diffusers fork and switch to the dreambooth_deepspeed branch
mkdir ~/github
cd ~/github
git clone https://github.com/Ttl/diffusers.git
cd diffusers
git checkout dreambooth_deepspeed
git pull

* Install the required packages
pip install .
cd examples/dreambooth
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
pip install -U --pre triton
pip install ninja bitsandbytes
pip install git+https://github.com/facebookresearch/xformers@1d31a3a#egg=xformers
pip install deepspeed
pip install diffusers

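* Optional sanity check - confirm PyTorch in this environment can actually see the GPU before starting a long training run:

python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
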
* Configure accelerate, then log in to Hugging Face

accelerate config

* Use the following options

In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU [4] MPS): 0
Do you want to run your training on CPU only (even if a GPU is available)? [yes/NO]:
Do you want to use DeepSpeed? [yes/NO]: yes
Do you want to specify a json file to a DeepSpeed config? [yes/NO]:
What should be your DeepSpeed's ZeRO optimization stage (0, 1, 2, 3)? [2]:
Where to offload optimizer states? [none/cpu/nvme]: cpu
Where to offload parameters? [none/cpu/nvme]: cpu
How many gradient accumulation steps you're passing in your script? [1]:
Do you want to use gradient clipping? [yes/NO]:
Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]:
How many GPU(s) should be used for distributed training? [1]:
Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: fp16

huggingface-cli login

* Paste your Hugging Face access token when prompted (create one at huggingface.co under Settings -> Access Tokens)

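* Optional - confirm the login worked:

huggingface-cli whoami
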
* Make folders for the training images, the class images and the output model, and point the loader at the WSL GPU libraries
mkdir -p training
mkdir -p classes
mkdir -p model
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH

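* The export above only lasts for the current shell. If you want it set automatically in every new terminal, one option is to append it to ~/.bashrc:

echo 'export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
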

* Open the current directory in Windows Explorer (handy for copying images across)
explorer.exe .

* Copy your class images (e.g. of a generic "person") into the classes folder. Generate them with whatever Stable Diffusion GUI you're using.
  Generate at least as many as you will pass in --num_class_images (200 here); otherwise the training script tries to generate the missing ones itself and can run out of VRAM.
* Copy your training images (photos of you, the subject) into the training folder

* Copy your training script from your Windows desktop (or wherever you saved it), then run it:
cp /mnt/c/Users/YOUR_USERNAME/Desktop/my_training.sh ./
chmod +x ./my_training.sh
./my_training.sh

* Example contents of my_training.sh

export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="training"
export CLASS_DIR="classes"
export OUTPUT_DIR="model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks person" \
  --class_prompt="a photo of person" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800 \
  --mixed_precision=fp16

* Convert the trained model to a .ckpt file

wget https://gist.githubusercontent.com/jachiam/8a5c0b607e38fcc585168b90c686eb05/raw/cd2f93b1a0487006b0eb4f430b00e89aa84228ae/convert_diffusers_to_sd.py

python convert_diffusers_to_sd.py --model_path model --checkpoint_path model.ckpt

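* Optional - copy the resulting checkpoint back to Windows so your SD GUI can pick it up. Adjust the destination to wherever your GUI keeps its models; the path below just reuses the same desktop placeholder as earlier:

cp model.ckpt /mnt/c/Users/YOUR_USERNAME/Desktop/
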
* NB: If you get an error like this -
403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/model_index.json

then make sure you've visited https://huggingface.co/CompVis/stable-diffusion-v1-4 and accepted the licence!
NB: The first time diffusers fetches this model it downloads the weights, which is roughly a 6 GB download.

To train again next time, simply:

* Start Ubuntu, then:
conda activate diffusers
cd ~/github/diffusers/examples/dreambooth
./my_training.sh
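
* If you didn't add the LD_LIBRARY_PATH line to ~/.bashrc earlier, set it again before training:

export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH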