- = Animate Your Stable Diffusion Faces! =
- Notes:
- * Github - https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
- * Uses PyTorch, so it runs on Nvidia GPUs, AMD GPUs, or the CPU - https://pytorch.org/get-started/locally/
- * Includes Google Colab & Hugging Face links - run it in your web browser!
- * You'll need a "driving video" - such as recording yourself on your own webcam!
- * You'll also need a cool avatar!
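- * The vox model works on 256x256 face crops (that's the 256 in config/vox-256.yaml), so a raw webcam recording usually needs a square crop and resize first. A sketch with ffmpeg - webcam.mp4 and driving.mp4 are placeholder names, and crop=in_h:in_h takes a centered square; adjust if your face isn't centered:

```shell
# Center-crop to a square (using the frame height), scale to 256x256, drop audio
ffmpeg -i webcam.mp4 -vf "crop=in_h:in_h,scale=256:256" -an driving.mp4
```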
- == New virtual environment ==
- conda create --name thin-plate-spline python=3.9
- conda activate thin-plate-spline
- == Download ==
- git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git
- cd Thin-Plate-Spline-Motion-Model
- mkdir checkpoints
- * Download the pre-trained models (mgif.pth.tar, etc.) into your newly created checkpoints directory
- * You'll only need the vox one (vox.pth.tar) for faces
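- * The download links are in the repo's README and the exact fetch command varies by mirror, so this sketch just confirms the face model ended up where the demo command's --checkpoint flag expects it:

```shell
# Confirm the face model is where the demo command expects it
mkdir -p checkpoints
if [ -f checkpoints/vox.pth.tar ]; then
    echo "vox checkpoint ready"
else
    echo "checkpoints/vox.pth.tar missing - grab it via the links in the repo README"
fi
```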
- == Edit ==
- Edit the requirements.txt file: remove the torch & torchvision lines (we'll install those ourselves with CUDA support in the next step) and set Pillow==9.2.0
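- Those edits can also be scripted. A sketch using sed on a stand-in requirements.txt (the repo's real file will differ - the version pins below are illustrative only; the command keeps a .bak backup in case the layout doesn't match):

```shell
# Stand-in requirements.txt just for illustration; use the repo's real file
cat > requirements.txt <<'EOF'
torch==1.10.0
torchvision==0.11.1
Pillow==8.0.0
imageio==2.9.0
EOF

# Delete the torch/torchvision lines and pin Pillow (keeps a .bak backup)
sed -i.bak -e '/^torch/d' -e 's/^Pillow.*/Pillow==9.2.0/' requirements.txt
cat requirements.txt
```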
- == Install requirements ==
- * Now we can install the latest PyTorch with CUDA 11.6 support (that's the cu116 in the URL)
- pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
- pip install -r requirements.txt
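- Once the install finishes, a quick sanity check that PyTorch is importable and sees your GPU (it prints a notice instead of crashing if torch isn't installed):

```shell
python - <<'EOF'
try:
    import torch
    print("torch", torch.__version__, "- CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
EOF
```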
- == Run ==
- * Example custom generation with best frame detection:
- CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-256.yaml \
- --checkpoint checkpoints/vox.pth.tar --source_image assets/SD_Avatar.png \
- --driving_video assets/Video_good_example.mp4 --find_best_frame --result_video SD_Avatar_Video.mp4
- * Example 2 - same settings, but with a poor driving video:
- CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-256.yaml \
- --checkpoint checkpoints/vox.pth.tar --source_image assets/SD_Avatar.png \
- --driving_video assets/Video_bad_example.mp4 --find_best_frame --result_video SD_Avatar_Video_bad.mp4
== Comments ==
-
- Hi, I need some help -
- not sure what I'm doing wrong, but I keep getting the error below:
- (thin-plate-spline) PS C:\Users\krzys\Thin-Plate-Spline-Motion-Model> CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-256.yaml \
- >> --checkpoint checkpoints/vox.pth.tar --source_image assets/jp2.jpg \
- >> --driving_video assets/jp2.mp4 --find_best_frame --result_video SD_Avatar_Video.mp4
- At line:2 char:4
- + --checkpoint checkpoints/vox.pth.tar --source_image assets/jp2.jpg \
- + ~
- Missing expression after unary operator '--'.
- At line:2 char:4
- + --checkpoint checkpoints/vox.pth.tar --source_image assets/jp2.jpg \
- + ~~~~~~~~~~
- Unexpected token 'checkpoint' in expression or statement.
- At line:3 char:4
- + --driving_video assets/jp2.mp4 --find_best_frame --result_video SD_A ...
- + ~
- Missing expression after unary operator '--'.
- At line:3 char:4
- + --driving_video assets/jp2.mp4 --find_best_frame --result_video SD_A ...
- + ~~~~~~~~~~~~~
- Unexpected token 'driving_video' in expression or statement.
- + CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
- + FullyQualifiedErrorId : MissingExpressionAfterOperator
- (thin-plate-spline) PS C:\Users\krzys\Thin-Plate-Spline-Motion-Model>
-
- If you're on Windows, run the command without the CUDA_VISIBLE_DEVICES=0 prefix, and put the whole command on one line - PowerShell doesn't understand the trailing \ continuations either
- it will work as expected
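- Concretely: in PowerShell both the Bash-style VAR=value prefix and the trailing \ line continuations trip the parser (that's what the "Missing expression after unary operator '--'" errors above are). A sketch of the equivalent invocation, using the paths from the error log - set the variable on its own line and keep the command on one line:

```powershell
# PowerShell equivalent (the env var is only needed if you have multiple GPUs)
$env:CUDA_VISIBLE_DEVICES = "0"
python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/jp2.jpg --driving_video assets/jp2.mp4 --find_best_frame --result_video SD_Avatar_Video.mp4
```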
-
- will try it, thanks man
-
- Using Anaconda, running the demo line, I get:
- 'CUDA_VISIBLE_DEVICES' is not recognized as an internal or external command,
- so I removed it from the front (setting the env var on a separate line) and now get:
- import lzma
- File "C:\Users\the_i\.conda\envs\thin-plate-spline\lib\lzma.py", line 27, in <module>
- from _lzma import *
- ImportError: DLL load failed while importing _lzma: The specified module could not be found.