ZeroCool22

v2 Tutorial - Intel-based Python (credits to MI7MHARX).

Jan 2nd, 2018
THIS IS FOR INTEL-BASED PYTHON, BUT YOU CAN STILL TRY TO ADAPT IT USING REGULAR PYTHON OR ANACONDA

RECOMMENDED FOR WINDOWS 10, WITH NVIDIA

this tutorial is made to help beginners and the community. if you are already a pro, just do it yourself.

link will be available within 2 hours

before we start,

install

Intel-Python (w_python3_pu_2018.1.021) to C:\IntelPython3
CUDA (cuda_8.0.61_win10)
Visual C++ Build Tools (visualcppbuildtools_full)
CMake (cmake-3.10.1-win64-x64)
GIF Animator (GIFAnimator-Setup)

extract

CUDNN to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0

setting the environment

create and set PYTHONPATH to C:\IntelPython3 in Environment Variables
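If you want to confirm the variable actually took effect, a minimal sketch (the install path is the one used in this tutorial; the helper name is mine, not part of the original guide):

```python
import os

def pythonpath_contains(expected):
    """Return True if `expected` appears as an entry in PYTHONPATH."""
    entries = os.environ.get("PYTHONPATH", "").split(os.pathsep)
    return expected in entries

# e.g. in a fresh cmd window after setting the variable:
# pythonpath_contains(r"C:\IntelPython3")
```

Note that cmd windows opened before you edited Environment Variables will not see the new value; open a fresh one.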
  26.  
gear up!

in cmd, type:

conda install pip
pip install tensorflow-gpu dlib opencv-python keras scipy numpy h5py matplotlib tqdm scikit-image
conda install -c peterjc123 pytorch cuda80

(if pip doesn't work, download the wheel using the link given in the notes, pick one based on your system and Python version, then run pip install whatever_the-name-is.whl in the directory of the downloaded wheel)
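After installing, a quick sanity check can save a debugging session later. A small sketch (the helper is mine, not part of the tutorial) that reports which packages failed to import — note that some import names differ from their pip package names (opencv-python imports as cv2, scikit-image as skimage):

```python
import importlib

def check_imports(modules):
    """Try importing each module; return the names that failed."""
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# check_imports(["tensorflow", "dlib", "cv2", "keras", "scipy",
#                "numpy", "h5py", "matplotlib", "tqdm", "skimage"])
```

An empty list means everything installed; anything returned is a candidate for the wheel workaround above.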
  36.  
intro

get ready
open code.txt in \face-swap
run cmd, cd [FACE-SWAP DIRECTORY HERE]
run in cmd: activate tensorflow and python train.py
wait for several hours, stop the training by clicking on the faces window (not the X), and then press Q.
exit the cmd

get the data!

collect tons of images of the target (200+)

gather the images in a new folder, rename it to target, and copy it to \face-alignment-master

find a video that satisfies your imagination (POV view recommended)

convert the video to jpg using FFMPEG or any video software

to convert, read the code in the bin directory of the FFMPEG folder

run cmd,

cd [BIN DIRECTORY INSIDE OF FFMPEG HERE]
ffmpeg -i file.mp4 -r 1/1 $filename%d.jpg
(change file.mp4 to your video name with its extension, and $filename%d.jpg to any name, eg riley%d.jpg)
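The `-r 1/1` output rate writes one frame per second of video. If you prefer to drive ffmpeg from Python instead of cmd, a sketch (the helper name is mine; it just builds the same command shown above):

```python
def ffmpeg_extract_frames(video, pattern, fps="1/1"):
    """Build the ffmpeg command used above: one jpg per second by default.

    `pattern` is an image-sequence name like riley%d.jpg, where %d becomes
    the frame counter (riley1.jpg, riley2.jpg, ...).
    """
    return ["ffmpeg", "-i", video, "-r", fps, pattern]

# to actually run it (requires ffmpeg on PATH):
# import subprocess
# subprocess.run(ffmpeg_extract_frames("file.mp4", "riley%d.jpg"), check=True)
```

Raising `fps` (e.g. `"5/1"`) extracts more frames per second, which gives more training data from short videos.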
  61.  
copy the jpgs to a new folder, rename it to source, and copy it to \face-alignment-master

i'm ready!

before doing anything:

pip install -r requirements.txt
python setup.py install

align both target and source images, to get aligned, cropped faces:

python align_images.py target
python align_images.py source

open the target and source folders and take a look at the aligned folder in each
remove unwanted images. rename the aligned folders to targetA and sourceA. copy them from both target and source to \face-swap\data

rename cage and trump to cageA and trumpA, and rename targetA and sourceA to cage and trump

after that's done, copy align_images.py, merge_faces.py, umeyama.py, and the source folder to \face-swap
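The rename dance above (back up the sample cage/trump folders, then give your own aligned sets their names so train.py picks them up unchanged) can be sketched like this — the function name is mine, and it assumes \face-swap\data already contains cage, trump, targetA, and sourceA as described:

```python
from pathlib import Path

def swap_in_datasets(data_dir):
    """Back up the sample folders, then give our aligned sets their names."""
    data = Path(data_dir)
    (data / "cage").rename(data / "cageA")      # back up the sample set A
    (data / "trump").rename(data / "trumpA")    # back up the sample set B
    (data / "targetA").rename(data / "cage")    # our target takes its place
    (data / "sourceA").rename(data / "trump")   # our source takes its place

# swap_in_datasets(r"C:\face-swap\data")
```

Renaming rather than editing train.py means the training script needs no changes at all.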
  81.  
lets train!

run the same code for training: activate tensorflow and python train.py
after it's done, a new output folder can be accessed
have a look at the training data. if satisfied, proceed
run in cmd: python merge_faces_masked.py source
a new aligned folder can be accessed in the source folder
take a look at the merged faces

but, no gif?

from the merged folder,

go to GIFMaker to convert jpg to gif. (it's easier!)
you can also use the installed GIF Animator to make a GIF
DONE

kinda bored. need some audio!

use FFMPEG to convert the gif to video

ffmpeg -i try.gif -movflags faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" try.mp4
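The odd-looking scale filter is there because yuv420p requires even width and height; `trunc(iw/2)*2` rounds each dimension down to the nearest even number. The same arithmetic as a tiny sketch (the helper name is mine):

```python
def even_floor(n):
    """trunc(n/2)*2 -- round down to the nearest even number,
    mirroring the ffmpeg scale filter above."""
    return (n // 2) * 2

# a 853x480 gif would be encoded at 852x480:
# even_floor(853) -> 852, even_floor(480) -> 480
```

Without this filter, ffmpeg refuses to encode yuv420p video from a gif with an odd dimension.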
  104.  
use any video software

* add the source video (video + audio)
* add the final deepfake video
* align it correctly to the time/frames of the source video
* delete the source video
* render and voilà, now you have a new fake video with sound!

many thanks to deepfakes and the community for this project