* Install WSL2 under Windows 11 22H2, then install Ubuntu

* Update Ubuntu

sudo apt update
sudo apt upgrade

* Close and reopen the terminal window

* Make a place for downloads
cd ~
mkdir Downloads
cd Downloads

* Download & install Anaconda

wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
chmod +x ./Anaconda3-2022.05-Linux-x86_64.sh
./Anaconda3-2022.05-Linux-x86_64.sh

* Accept the license and the defaults until you reach:

Do you wish the installer to initialize Anaconda3
by running conda init? [yes|no]
[no] >>> yes

* Close and reopen the terminal window
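
* Alternatively, instead of reopening the terminal, reloading your shell config should pick up the conda init changes (assuming the default bash shell on WSL Ubuntu):
source ~/.bashrc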

* Update Anaconda
conda update -n base -c defaults conda


* NVIDIA CUDA toolkit for Ubuntu WSL

cd ~/Downloads
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda-repo-wsl-ubuntu-11-6-local_11.6.2-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-11-6-local_11.6.2-1_amd64.deb
sudo apt-key add /var/cuda-repo-wsl-ubuntu-11-6-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
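
* Optional sanity check: nvidia-smi should already work inside WSL if the Windows NVIDIA driver is installed, and nvcc (installed under /usr/local/cuda/bin, which may not be on your PATH) confirms the toolkit version:
nvidia-smi
/usr/local/cuda/bin/nvcc --version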


* New virtual Python environment
conda create --name diffusers python=3.10
conda activate diffusers

* Make a directory for all your github downloads, then download diffusers
mkdir ~/github
cd ~/github
git clone https://github.com/ShivamShrirao/diffusers/
cd diffusers

* Install required packages

pip install .
cd examples/dreambooth
pip install -r requirements.txt
pip install torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
pip install diffusers
pip install deepspeed
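
* Optional sanity check that PyTorch can see the GPU (should print True):
python -c "import torch; print(torch.cuda.is_available())"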

* Configure / login

accelerate config

* Use the following options (pressing Enter accepts the default shown in brackets):

In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU [4] MPS): 0
Do you want to run your training on CPU only (even if a GPU is available)? [yes/NO]:
Do you want to use DeepSpeed? [yes/NO]: yes
Do you want to specify a json file to a DeepSpeed config? [yes/NO]:
What should be your DeepSpeed's ZeRO optimization stage (0, 1, 2, 3)? [2]:
Where to offload optimizer states? [none/cpu/nvme]: cpu
Where to offload parameters? [none/cpu/nvme]: cpu
How many gradient accumulation steps you're passing in your script? [1]:
Do you want to use gradient clipping? [yes/NO]:
Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]:
How many GPU(s) should be used for distributed training? [1]:
Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: fp16

huggingface-cli login

* And paste your Hugging Face access token (you can create one under Settings -> Access Tokens on huggingface.co)

mkdir -p training
mkdir -p classes
mkdir -p model

* Enter this so you won't have to restart the computer

export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
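
* Optionally, append the same line to ~/.bashrc so it persists in future sessions:
echo 'export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH' >> ~/.bashrc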


explorer.exe .

* Copy your class images (person?) to the classes folder. Generate them with whatever SD GUI you're using, with the class (i.e. "person") as the prompt, or let the script generate them
* Pre-generate as many as you'll use in --num_class_images (200?) if you run out of VRAM or if you want the script to run faster (see the quick check below)
* Copy your training images (you?) to the training folder
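
* A quick way to see how many class images are already in place (the training script generates any missing ones itself, up to --num_class_images):
ls classes/ | wc -l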


* Copy your training file from your Windows desktop (or wherever you saved it):
cp /mnt/c/Users/YOUR_USERNAME/Desktop/my_training.sh ./
chmod +x ./my_training.sh
./my_training.sh

* Example of the contents of my_training.sh

export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="training"
export CLASS_DIR="classes"
export OUTPUT_DIR="model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks person" \
  --class_prompt="a photo of person" \
  --seed=1337 \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --sample_batch_size=1 \
  --max_train_steps=1000 \
  --mixed_precision=fp16


* Convert to .ckpt

python -m pip install pytorch-lightning
explorer.exe .

*** Important: put the original sd-v1-4.ckpt in the dreambooth folder

* And download and run a separate conversion script

wget https://raw.githubusercontent.com/ratwithacompiler/diffusers_stablediff_conversion/main/convert_diffusers_to_sd.py

python ./convert_diffusers_to_sd.py ./model ./sd-v1-4.ckpt ./model.ckpt
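
* The arguments are the diffusers output directory, the original base .ckpt, and the path for the converted .ckpt. If the script complains that no model_index.json was found in ./model, the folder doesn't contain a completed diffusers model (training may not have finished, or you're in the wrong directory); a quick check:
ls ./model/model_index.json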



* NB: If you get an error like this -
403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/model_index.json

Then:
Make sure you've visited https://huggingface.co/CompVis/stable-diffusion-v1-4 and accepted the license!
NB: If this is the very first time you've used diffusers, it's about a 6GB download

To train again next time, simply:

* Start Ubuntu, then:
conda activate diffusers
cd ~/github/diffusers/examples/dreambooth
rm -r model
./my_training.sh
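
* Optional: to keep a previous run's output instead of deleting it, rename it before training again, e.g.:
mv model model_previous_run
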
Comments
  • NapalmCandy
    1 year
    I followed all the steps with no errors until the last step (the convert_diffusers_to_sd.py conversion), where I get:

    (diffusers) napalm@Windows10-250GB:~/github/diffusers/examples/dreambooth$ python ./convert_diffusers_to_sd.py ./model ./sd-v1-4.ckpt ./model.ckpt
    [!] Not using xformers memory efficient attention.
    loading diff model from './model'
    Traceback (most recent call last):
      File "/home/napalm/github/diffusers/examples/dreambooth/./convert_diffusers_to_sd.py", line 765, in <module>
        setup()
      File "/home/napalm/github/diffusers/examples/dreambooth/./convert_diffusers_to_sd.py", line 761, in setup
        convert_diff_to_sd(args.diffusers_model, args.base_ckpt_path, args.output_ckpt_path,
      File "/home/napalm/github/diffusers/examples/dreambooth/./convert_diffusers_to_sd.py", line 723, in convert_diff_to_sd
        diff_pipe = StableDiffusionPipeline.from_pretrained(diffusers_model_path,
      File "/home/napalm/anaconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/pipeline_utils.py", line 479, in from_pretrained
        config_dict = cls.load_config(cached_folder)
      File "/home/napalm/anaconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 312, in load_config
        raise EnvironmentError(
    OSError: Error no file named model_index.json found in directory ./model.
    (diffusers) napalm@Windows10-250GB:~/github/diffusers/examples/dreambooth$
  • jimshank
    1 year
    I was with you until "Generate with whatever SD gui you're using with the class i.e. person as the prompt, or let the script generate"

    Any examples of what that means?
    • billybonka
      1 year
      To better differentiate between your training subject and some random object of the same class (let's assume it's "person"), you need to provide some example images that are clearly a person, but not your subject. This way you can avoid overtraining (for the most part, I think). What this line suggests is to pre-generate the files so that you don't have to generate new ones each time.