Setup Instructions (Python 3.10.11, RTX 4090, working on Windows):

Go to your user directory, right-click, open Git Bash, and clone the repo:
git clone https://github.com/Stability-AI/generative-models.git

- Modify streamlit_helpers.py and set:
lowvram_mode = True
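A minimal sketch of that edit, assuming the file sits at scripts/demo/streamlit_helpers.py as in the current repo layout:

# scripts/demo/streamlit_helpers.py (path assumed from the repo layout)
# Flip the module-level flag so model weights are kept off the GPU when idle,
# which eases VRAM pressure on consumer cards.
lowvram_mode = True  # the repo ships with this set to False
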
Move the video_sampling.py file to the main dir.
Create a checkpoints folder in the main dir.
Download the SVD weights from https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/tree/main
(Optional) Download the SVD-XT weights from https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/tree/main
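If you prefer to script the downloads, here is a minimal sketch using huggingface_hub; the filenames svd.safetensors and svd_xt.safetensors are assumptions taken from the model cards:

# Sketch: pull the SVD checkpoints straight into ./checkpoints
# (filenames assumed from the Hugging Face model cards)
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid",
    filename="svd.safetensors",
    local_dir="checkpoints",
)
# Optional: the XT (25-frame) variant
hf_hub_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid-xt",
    filename="svd_xt.safetensors",
    local_dir="checkpoints",
)
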
- Modify the requirements/pt2.txt file:
remove the triton==2.0.0 line and save.

- Modify the requirements/pt13.txt file:
remove the triton==2.0.0.post1 line and save.
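If you would rather script those two edits, a minimal sketch (run from the repo root; it simply drops any triton pin, since triton had no official Windows wheel at the time):

# Sketch: strip the triton pins from both requirements files
from pathlib import Path

for name in ("requirements/pt2.txt", "requirements/pt13.txt"):
    path = Path(name)
    kept = [line for line in path.read_text().splitlines() if not line.startswith("triton")]
    path.write_text("\n".join(kept) + "\n")
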
Open the Anaconda Prompt:
cd to your user directory's generative-models folder
conda create -n genModelVideo python=3.10.11
conda activate genModelVideo

pip install https://huggingface.co/r4ziel/xformers_pre_built/resolve/main/triton-2.0.0-cp310-cp310-win_amd64.whl
pip install -r requirements/pt2.txt
pip install .
pip install -r requirements/pt13.txt
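A quick sanity check from inside the genModelVideo environment, a minimal sketch that only confirms the GPU-enabled torch and xformers imports succeeded:

# Sketch: confirm torch sees the 4090 and xformers imports cleanly
# (get_device_name will raise if no CUDA device is visible)
import torch
import xformers

print(torch.__version__, torch.cuda.is_available(), torch.cuda.get_device_name(0))
print(xformers.__version__)
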
streamlit run video_sampling.py

Click "Load Model".

Upload an image and there you go.
You will get a tensor error, but you can ignore it; generation still seems to work.

*Tip: try setting the decode t frames option to 48 for faster generation.