- {
- "nbformat": 4,
- "nbformat_minor": 0,
- "metadata": {
- "colab": {
- "name": "DFL_orig.ipynb",
- "version": "0.3.2",
- "provenance": [],
- "collapsed_sections": [
- "JuVn21kt40Gw",
- "6jHv35sm-Qiy",
- "tUNVcbujhm00",
- "WTuyUxgdLA13",
- "avAcSL_uvtq_",
- "f7GNQ7kZx7Ha"
- ],
- "toc_visible": true
- },
- "kernelspec": {
- "name": "python3",
- "display_name": "Python 3"
- },
- "accelerator": "GPU"
- },
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "0cKdTCuv4tXh",
- "colab_type": "text"
- },
- "source": [
- "# Welcome to DFL-Colab!\n",
- "\n",
- "This is an adapted version of DFL for Google Colab.\n",
- "\n",
- "Version 2.5\n",
- "\n",
- "# Overview\n",
- "* The extractor works with full functionality.\n",
- "* Training works, but without a live preview.\n",
- "* The converter works with full functionality.\n",
- "* You can import/export your workspace via Google Drive.\n",
- "* Import/export and other workspace operations are available in the \"Manage workspace\" block\n",
- "* A Google Colab machine stays active for up to 12 hours. In training mode, DFL-Colab backs up your workspace 11 hours after the session starts.\n",
- "* Google does not like long-running heavy computations, so for more than two training sessions in a row use two Google accounts. It is recommended to split your training across two accounts, but you can use a single Google Drive account to store your workspace.\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "JuVn21kt40Gw",
- "colab_type": "text"
- },
- "source": [
- "# Clone the GitHub repository and install requirements\n",
- "\n",
- "* Clone the GitHub repository or pull updates\n",
- "* Requirements are installed automatically"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "JG-f2WqT4fLK",
- "colab_type": "code",
- "cellView": "form",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 1000
- },
- "outputId": "dbc2cdd6-e6a0-40f0-f0d6-fa49bba7017e"
- },
- "source": [
- "#@title Clone or pull DeepFaceLab from Github\n",
- "\n",
- "Mode = \"clone\" #@param [\"clone\", \"pull\"]\n",
- "\n",
- "from pathlib import Path\n",
- "if (Mode == \"clone\"):\n",
- " !git clone https://github.com/iperov/DeepFaceLab.git\n",
- "else:\n",
- " %cd /content/DeepFaceLab\n",
- " !git pull\n",
- "\n",
- "!pip install -r /content/DeepFaceLab/requirements-colab.txt\n",
- "!pip install --upgrade scikit-image\n",
- "\n",
- "if not Path(\"/content/workspace\").exists():\n",
- " !wget -q --no-check-certificate -r 'https://docs.google.com/uc?export=download&id=1hTH2h6l_4kKrczA8EkN6GyuXx4lzmCnK' -O pretrain_CelebA.zip\n",
- " !mkdir /content/pretrain\n",
- " !unzip -q /content/pretrain_CelebA.zip -d /content/pretrain/\n",
- " !rm /content/pretrain_CelebA.zip\n",
- "\n",
- "print(\"Done!\")"
- ],
- "execution_count": 1,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "Cloning into 'DeepFaceLab'...\n",
- "remote: Enumerating objects: 26, done.\u001b[K\n",
- "remote: Counting objects: 100% (26/26), done.\u001b[K\n",
- "remote: Compressing objects: 100% (22/22), done.\u001b[K\n",
- "remote: Total 3342 (delta 8), reused 12 (delta 4), pack-reused 3316\u001b[K\n",
- "Receiving objects: 100% (3342/3342), 301.86 MiB | 11.44 MiB/s, done.\n",
- "Resolving deltas: 100% (2149/2149), done.\n",
- "Checking out files: 100% (122/122), done.\n",
- "Collecting git+https://www.github.com/keras-team/keras-contrib.git (from -r /content/DeepFaceLab/requirements-colab.txt (line 10))\n",
- " Cloning https://www.github.com/keras-team/keras-contrib.git to /tmp/pip-req-build-5gvr326i\n",
- " Running command git clone -q https://www.github.com/keras-team/keras-contrib.git /tmp/pip-req-build-5gvr326i\n",
- "Collecting numpy==1.16.3 (from -r /content/DeepFaceLab/requirements-colab.txt (line 1))\n",
- "\u001b[?25l Downloading https://files.pythonhosted.org/packages/c1/e2/4db8df8f6cddc98e7d7c537245ef2f4e41a1ed17bf0c3177ab3cc6beac7f/numpy-1.16.3-cp36-cp36m-manylinux1_x86_64.whl (17.3MB)\n",
- "\u001b[K |████████████████████████████████| 17.3MB 2.8MB/s \n",
- "\u001b[?25hCollecting h5py==2.9.0 (from -r /content/DeepFaceLab/requirements-colab.txt (line 2))\n",
- "\u001b[?25l Downloading https://files.pythonhosted.org/packages/30/99/d7d4fbf2d02bb30fb76179911a250074b55b852d34e98dd452a9f394ac06/h5py-2.9.0-cp36-cp36m-manylinux1_x86_64.whl (2.8MB)\n",
- "\u001b[K |████████████████████████████████| 2.8MB 27.6MB/s \n",
- "\u001b[?25hRequirement already satisfied: Keras==2.2.4 in /usr/local/lib/python3.6/dist-packages (from -r /content/DeepFaceLab/requirements-colab.txt (line 3)) (2.2.4)\n",
- "Collecting opencv-python==4.0.0.21 (from -r /content/DeepFaceLab/requirements-colab.txt (line 4))\n",
- "\u001b[?25l Downloading https://files.pythonhosted.org/packages/37/49/874d119948a5a084a7ebe98308214098ef3471d76ab74200f9800efeef15/opencv_python-4.0.0.21-cp36-cp36m-manylinux1_x86_64.whl (25.4MB)\n",
- "\u001b[K |████████████████████████████████| 25.4MB 1.9MB/s \n",
- "\u001b[?25hCollecting tensorflow-gpu==1.13.1 (from -r /content/DeepFaceLab/requirements-colab.txt (line 5))\n",
- "\u001b[?25l Downloading https://files.pythonhosted.org/packages/7b/b1/0ad4ae02e17ddd62109cd54c291e311c4b5fd09b4d0678d3d6ce4159b0f0/tensorflow_gpu-1.13.1-cp36-cp36m-manylinux1_x86_64.whl (345.2MB)\n",
- "\u001b[K |████████████████████████████████| 345.2MB 65kB/s \n",
- "\u001b[?25hCollecting plaidml-keras==0.5.0 (from -r /content/DeepFaceLab/requirements-colab.txt (line 6))\n",
- " Downloading https://files.pythonhosted.org/packages/17/34/4102261e3d8867c31bae9f4def5d7e700fc25fff232fa1780040e8ed79b0/plaidml_keras-0.5.0-py2.py3-none-any.whl\n",
- "Requirement already satisfied: scikit-image in /usr/local/lib/python3.6/dist-packages (from -r /content/DeepFaceLab/requirements-colab.txt (line 7)) (0.15.0)\n",
- "Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from -r /content/DeepFaceLab/requirements-colab.txt (line 8)) (4.28.1)\n",
- "Collecting ffmpeg-python==0.1.17 (from -r /content/DeepFaceLab/requirements-colab.txt (line 9))\n",
- " Downloading https://files.pythonhosted.org/packages/3d/10/330cbc8e63d072d40413f4d470444a6a1e8c8c6a80b2a4ac302d1252ca1b/ffmpeg_python-0.1.17-py3-none-any.whl\n",
- "Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from h5py==2.9.0->-r /content/DeepFaceLab/requirements-colab.txt (line 2)) (1.12.0)\n",
- "Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from Keras==2.2.4->-r /content/DeepFaceLab/requirements-colab.txt (line 3)) (1.1.0)\n",
- "Requirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.6/dist-packages (from Keras==2.2.4->-r /content/DeepFaceLab/requirements-colab.txt (line 3)) (1.3.1)\n",
- "Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from Keras==2.2.4->-r /content/DeepFaceLab/requirements-colab.txt (line 3)) (3.13)\n",
- "Requirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from Keras==2.2.4->-r /content/DeepFaceLab/requirements-colab.txt (line 3)) (1.0.8)\n",
- "Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (0.33.4)\n",
- "Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (0.8.0)\n",
- "Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (1.15.0)\n",
- "Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (3.7.1)\n",
- "Collecting tensorflow-estimator<1.14.0rc0,>=1.13.0 (from tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5))\n",
- "\u001b[?25l Downloading https://files.pythonhosted.org/packages/bb/48/13f49fc3fa0fdf916aa1419013bb8f2ad09674c275b4046d5ee669a46873/tensorflow_estimator-1.13.0-py2.py3-none-any.whl (367kB)\n",
- "\u001b[K |████████████████████████████████| 368kB 39.6MB/s \n",
- "\u001b[?25hRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (1.1.0)\n",
- "Collecting tensorboard<1.14.0,>=1.13.0 (from tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5))\n",
- "\u001b[?25l Downloading https://files.pythonhosted.org/packages/0f/39/bdd75b08a6fba41f098b6cb091b9e8c7a80e1b4d679a581a0ccd17b10373/tensorboard-1.13.1-py3-none-any.whl (3.2MB)\n",
- "\u001b[K |████████████████████████████████| 3.2MB 25.5MB/s \n",
- "\u001b[?25hRequirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (0.7.1)\n",
- "Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (0.2.2)\n",
- "Collecting plaidml (from plaidml-keras==0.5.0->-r /content/DeepFaceLab/requirements-colab.txt (line 6))\n",
- "\u001b[?25l Downloading https://files.pythonhosted.org/packages/05/48/76071904028f16b8fcf86e021eaa297e69fb7f816f1d95162292e85da989/plaidml-0.6.4-py2.py3-none-manylinux1_x86_64.whl (32.1MB)\n",
- "\u001b[K |████████████████████████████████| 32.1MB 1.5MB/s \n",
- "\u001b[?25hRequirement already satisfied: imageio>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (2.4.1)\n",
- "Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (2.3)\n",
- "Requirement already satisfied: pillow>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (4.3.0)\n",
- "Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (1.0.3)\n",
- "Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (3.0.3)\n",
- "Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from ffmpeg-python==0.1.17->-r /content/DeepFaceLab/requirements-colab.txt (line 9)) (0.16.0)\n",
- "Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.1->tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (41.0.1)\n",
- "Collecting mock>=2.0.0 (from tensorflow-estimator<1.14.0rc0,>=1.13.0->tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5))\n",
- " Downloading https://files.pythonhosted.org/packages/05/d2/f94e68be6b17f46d2c353564da56e6fb89ef09faeeff3313a046cb810ca9/mock-3.0.5-py2.py3-none-any.whl\n",
- "Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (0.15.5)\n",
- "Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow-gpu==1.13.1->-r /content/DeepFaceLab/requirements-colab.txt (line 5)) (3.1.1)\n",
- "Requirement already satisfied: cffi in /usr/local/lib/python3.6/dist-packages (from plaidml->plaidml-keras==0.5.0->-r /content/DeepFaceLab/requirements-colab.txt (line 6)) (1.12.3)\n",
- "Collecting enum34>=1.1.6 (from plaidml->plaidml-keras==0.5.0->-r /content/DeepFaceLab/requirements-colab.txt (line 6))\n",
- " Downloading https://files.pythonhosted.org/packages/af/42/cb9355df32c69b553e72a2e28daee25d1611d2c0d9c272aa1d34204205b2/enum34-1.1.6-py3-none-any.whl\n",
- "Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.0->scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (4.4.0)\n",
- "Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow>=4.3.0->scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (0.46)\n",
- "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (0.10.0)\n",
- "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (1.1.0)\n",
- "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (2.5.3)\n",
- "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r /content/DeepFaceLab/requirements-colab.txt (line 7)) (2.4.2)\n",
- "Requirement already satisfied: pycparser in /usr/local/lib/python3.6/dist-packages (from cffi->plaidml->plaidml-keras==0.5.0->-r /content/DeepFaceLab/requirements-colab.txt (line 6)) (2.19)\n",
- "Building wheels for collected packages: keras-contrib\n",
- " Building wheel for keras-contrib (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
- " Created wheel for keras-contrib: filename=keras_contrib-2.0.8-cp36-none-any.whl size=101066 sha256=afa2fe4faf1b8ceca6cd05a73e72662ca051101d9fbf76dd37c446c84bb0e1d6\n",
- " Stored in directory: /tmp/pip-ephem-wheel-cache-j5bxmosh/wheels/11/27/c8/4ed56de7b55f4f61244e2dc6ef3cdbaff2692527a2ce6502ba\n",
- "Successfully built keras-contrib\n",
- "\u001b[31mERROR: tensorflow 1.14.0 has requirement tensorboard<1.15.0,>=1.14.0, but you'll have tensorboard 1.13.1 which is incompatible.\u001b[0m\n",
- "\u001b[31mERROR: tensorflow 1.14.0 has requirement tensorflow-estimator<1.15.0rc0,>=1.14.0rc0, but you'll have tensorflow-estimator 1.13.0 which is incompatible.\u001b[0m\n",
- "\u001b[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.\u001b[0m\n",
- "\u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.\u001b[0m\n",
- "Installing collected packages: numpy, h5py, opencv-python, mock, tensorflow-estimator, tensorboard, tensorflow-gpu, enum34, plaidml, plaidml-keras, ffmpeg-python, keras-contrib\n",
- " Found existing installation: numpy 1.16.4\n",
- " Uninstalling numpy-1.16.4:\n",
- " Successfully uninstalled numpy-1.16.4\n",
- " Found existing installation: h5py 2.8.0\n",
- " Uninstalling h5py-2.8.0:\n",
- " Successfully uninstalled h5py-2.8.0\n",
- " Found existing installation: opencv-python 3.4.5.20\n",
- " Uninstalling opencv-python-3.4.5.20:\n",
- " Successfully uninstalled opencv-python-3.4.5.20\n",
- " Found existing installation: tensorflow-estimator 1.14.0\n",
- " Uninstalling tensorflow-estimator-1.14.0:\n",
- " Successfully uninstalled tensorflow-estimator-1.14.0\n",
- " Found existing installation: tensorboard 1.14.0\n",
- " Uninstalling tensorboard-1.14.0:\n",
- " Successfully uninstalled tensorboard-1.14.0\n",
- "Successfully installed enum34-1.1.6 ffmpeg-python-0.1.17 h5py-2.9.0 keras-contrib-2.0.8 mock-3.0.5 numpy-1.16.3 opencv-python-4.0.0.21 plaidml-0.6.4 plaidml-keras-0.5.0 tensorboard-1.13.1 tensorflow-estimator-1.13.0 tensorflow-gpu-1.13.1\n"
- ],
- "name": "stdout"
- },
- {
- "output_type": "display_data",
- "data": {
- "application/vnd.colab-display-data+json": {
- "pip_warning": {
- "packages": [
- "enum",
- "numpy"
- ]
- }
- }
- },
- "metadata": {
- "tags": []
- }
- },
- {
- "output_type": "stream",
- "text": [
- "Requirement already up-to-date: scikit-image in /usr/local/lib/python3.6/dist-packages (0.15.0)\n",
- "Requirement already satisfied, skipping upgrade: pillow>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (4.3.0)\n",
- "Requirement already satisfied, skipping upgrade: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (3.0.3)\n",
- "Requirement already satisfied, skipping upgrade: PyWavelets>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (1.0.3)\n",
- "Requirement already satisfied, skipping upgrade: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (1.3.1)\n",
- "Requirement already satisfied, skipping upgrade: networkx>=2.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (2.3)\n",
- "Requirement already satisfied, skipping upgrade: imageio>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from scikit-image) (2.4.1)\n",
- "Requirement already satisfied, skipping upgrade: olefile in /usr/local/lib/python3.6/dist-packages (from pillow>=4.3.0->scikit-image) (0.46)\n",
- "Requirement already satisfied, skipping upgrade: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (2.4.2)\n",
- "Requirement already satisfied, skipping upgrade: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (1.1.0)\n",
- "Requirement already satisfied, skipping upgrade: numpy>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (1.16.3)\n",
- "Requirement already satisfied, skipping upgrade: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (2.5.3)\n",
- "Requirement already satisfied, skipping upgrade: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (0.10.0)\n",
- "Requirement already satisfied, skipping upgrade: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.0->scikit-image) (4.4.0)\n",
- "Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image) (41.0.1)\n",
- "Requirement already satisfied, skipping upgrade: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.1->matplotlib!=3.0.0,>=2.0.0->scikit-image) (1.12.0)\n",
- "warning [/content/pretrain_CelebA.zip]: 457793 extra bytes at beginning or within zipfile\n",
- " (attempting to process anyway)\n",
- "Done!\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "hqwOlJG4MdLC",
- "colab_type": "text"
- },
- "source": [
- "# Manage workspace\n",
- "\n",
- "\n",
- "\n",
- "* You can import/export the workspace or individual data, such as model files, via Google Drive\n",
- "* You can also use HFS (HTTP File Server) to import/export your workspace directly from your computer\n",
- "* You can clear the whole workspace or delete parts of it\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "z4w_sUzgOQmL",
- "colab_type": "code",
- "cellView": "both",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 127
- },
- "outputId": "2161ac32-b0a2-4121-a6d4-8ef831475a24"
- },
- "source": [
- "#@title Import from Drive\n",
- "\n",
- "Mode = \"models\" #@param [\"workspace\", \"data_src\", \"data_dst\", \"data_src aligned\", \"data_dst aligned\", \"models\"]\n",
- "Archive_name = \"msdf.zip\" #@param {type:\"string\"}\n",
- "\n",
- "#Mount Google Drive as folder\n",
- "from google.colab import drive\n",
- "drive.mount('/content/drive', force_remount=True)\n",
- "\n",
- "def zip_and_copy(path, mode):\n",
- " unzip_cmd=\" -q \"+Archive_name\n",
- " \n",
- " %cd $path\n",
- " copy_cmd = \"/content/drive/My\\ Drive/\"+Archive_name+\" \"+path\n",
- " !cp $copy_cmd\n",
- " !unzip $unzip_cmd \n",
- " !rm $Archive_name\n",
- "\n",
- "if Mode == \"workspace\":\n",
- " zip_and_copy(\"/content\", \"workspace\")\n",
- "elif Mode == \"data_src\":\n",
- " zip_and_copy(\"/content/workspace\", \"data_src\")\n",
- "elif Mode == \"data_dst\":\n",
- " zip_and_copy(\"/content/workspace\", \"data_dst\")\n",
- "elif Mode == \"data_src aligned\":\n",
- " zip_and_copy(\"/content/workspace/data_src\", \"aligned\")\n",
- "elif Mode == \"data_dst aligned\":\n",
- " zip_and_copy(\"/content/workspace/data_dst\", \"aligned\")\n",
- "elif Mode == \"models\":\n",
- " zip_and_copy(\"/content/workspace\", \"model\")\n",
- " \n",
- "print(\"Done!\")\n",
- "\n"
- ],
- "execution_count": 4,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "Mounted at /content/drive\n",
- "/content/workspace\n",
- "cp: cannot stat '/content/drive/My Drive/msdf.zip': No such file or directory\n",
- "unzip: cannot find or open msdf.zip, msdf.zip.zip or msdf.zip.ZIP.\n",
- "rm: cannot remove 'msdf.zip': No such file or directory\n",
- "Done!\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0Y3WfuwoNXqC",
- "colab_type": "code",
- "cellView": "form",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 72
- },
- "outputId": "1b873a84-b899-4fa5-d107-348159c0134c"
- },
- "source": [
- "#@title Export to Drive { form-width: \"30%\" }\n",
- "Mode = \"models\" #@param [\"workspace\", \"data_src\", \"data_dst\", \"data_src aligned\", \"data_dst aligned\", \"merged\", \"models\"]\n",
- "Archive_name = \"msdf.zip\" #@param {type:\"string\"}\n",
- "\n",
- "#Mount Google Drive as folder\n",
- "from google.colab import drive\n",
- "drive.mount('/content/drive', force_remount=True)\n",
- "\n",
- "def zip_and_copy(path, mode):\n",
- " zip_cmd=\"-r -q \"+Archive_name+\" \"\n",
- " \n",
- " %cd $path\n",
- " zip_cmd+=mode\n",
- " !zip $zip_cmd\n",
- " copy_cmd = \" \"+Archive_name+\" /content/drive/My\\ Drive/\"\n",
- " !cp $copy_cmd\n",
- " !rm $Archive_name\n",
- "\n",
- "if Mode == \"workspace\":\n",
- " zip_and_copy(\"/content\", \"workspace\")\n",
- "elif Mode == \"data_src\":\n",
- " zip_and_copy(\"/content/workspace\", \"data_src\")\n",
- "elif Mode == \"data_dst\":\n",
- " zip_and_copy(\"/content/workspace\", \"data_dst\")\n",
- "elif Mode == \"data_src aligned\":\n",
- " zip_and_copy(\"/content/workspace/data_src\", \"aligned\")\n",
- "elif Mode == \"data_dst aligned\":\n",
- " zip_and_copy(\"/content/workspace/data_dst\", \"aligned\")\n",
- "elif Mode == \"merged\":\n",
- " zip_and_copy(\"/content/workspace/data_dst\", \"merged\")\n",
- "elif Mode == \"models\":\n",
- " zip_and_copy(\"/content/workspace\", \"model\")\n",
- " \n",
- "print(\"Done!\")\n"
- ],
- "execution_count": 6,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "Mounted at /content/drive\n",
- "/content/workspace\n",
- "Done!\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0hIvJtxwTGcb",
- "colab_type": "code",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 54
- },
- "outputId": "a085ba81-7137-402c-a284-7281d1a81d9e"
- },
- "source": [
- "#@title Import from URL{ form-width: \"30%\", display-mode: \"form\" }\n",
- "URL = \"http://195.201.97.169:60090/msdf.zip\" #@param {type:\"string\"}\n",
- "Mode = \"unzip to content\" #@param [\"unzip to content\", \"unzip to content/workspace\", \"unzip to content/workspace/data_src\", \"unzip to content/workspace/data_src/aligned\", \"unzip to content/workspace/data_dst\", \"unzip to content/workspace/data_dst/aligned\", \"unzip to content/workspace/model\", \"download to content/workspace\"]\n",
- "\n",
- "import urllib\n",
- "from pathlib import Path\n",
- "\n",
- "def unzip(zip_path, dest_path):\n",
- "\n",
- " \n",
- " unzip_cmd = \" unzip -q \" + zip_path + \" -d \"+dest_path\n",
- " !$unzip_cmd \n",
- " rm_cmd = \"rm \"+dest_path + url_path.name\n",
- " !$rm_cmd\n",
- " print(\"Unzipped!\")\n",
- " \n",
- "\n",
- "if Mode == \"unzip to content\":\n",
- " dest_path = \"/content/\"\n",
- "elif Mode == \"unzip to content/workspace\":\n",
- " dest_path = \"/content/workspace/\"\n",
- "elif Mode == \"unzip to content/workspace/data_src\":\n",
- " dest_path = \"/content/workspace/data_src/\"\n",
- "elif Mode == \"unzip to content/workspace/data_src/aligned\":\n",
- " dest_path = \"/content/workspace/data_src/aligned/\"\n",
- "elif Mode == \"unzip to content/workspace/data_dst\":\n",
- " dest_path = \"/content/workspace/data_dst/\"\n",
- "elif Mode == \"unzip to content/workspace/data_dst/aligned\":\n",
- " dest_path = \"/content/workspace/data_dst/aligned/\"\n",
- "elif Mode == \"unzip to content/workspace/model\":\n",
- " dest_path = \"/content/workspace/model/\"\n",
- "elif Mode == \"download to content/workspace\":\n",
- " dest_path = \"/content/workspace/\"\n",
- "\n",
- "if not Path(\"/content/workspace\").exists():\n",
- " cmd = \"mkdir /content/workspace; mkdir /content/workspace/data_src; mkdir /content/workspace/data_src/aligned; mkdir /content/workspace/data_dst; mkdir /content/workspace/data_dst/aligned; mkdir /content/workspace/model\"\n",
- " !$cmd\n",
- "\n",
- "url_path = Path(URL)\n",
- "urllib.request.urlretrieve ( URL, dest_path + url_path.name )\n",
- "\n",
- "if (url_path.suffix == \".zip\") and (Mode!=\"download to content/workspace\"):\n",
- " unzip(dest_path + url_path.name, dest_path)\n",
- "\n",
- " \n",
- "print(\"Done!\")"
- ],
- "execution_count": 2,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "Unzipped!\n",
- "Done!\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "7V1sc7rxNKLO",
- "colab_type": "code",
- "cellView": "both",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 127
- },
- "outputId": "da761fe6-f5d1-4ddf-95b1-f5714b9c0087"
- },
- "source": [
- "#@title Export to URL\n",
- "URL = \"http://195.201.97.169:60090/up.php\" #@param {type:\"string\"}\n",
- "Archive_name = \"msdf.zip\" #@param {type:\"string\"}\n",
- "Mode = \"upload model\" #@param [\"upload workspace\", \"upload data_src\", \"upload data_dst\", \"upload data_src aligned\", \"upload data_dst aligned\", \"upload merged\", \"upload model\"]\n",
- "\n",
- "cmd_zip = \"zip -r -q \"\n",
- "\n",
- "def run_cmd(zip_path, curl_url):\n",
- " cmd_zip = \"zip -r -q \"+zip_path\n",
- " cmd_curl = \"curl -F \"+curl_url+\" -D out.txt \"\n",
- " !$cmd_zip\n",
- " print(cmd_curl)\n",
- " !$cmd_curl\n",
- "\n",
- "\n",
- "if Mode == \"upload workspace\":\n",
- " %cd \"/content\"\n",
- " run_cmd(\"workspace.zip workspace/\",\"'data=@/content/workspace.zip' \"+URL)\n",
- "elif Mode == \"upload data_src\":\n",
- " %cd \"/content/workspace\"\n",
- " run_cmd(\"data_src.zip data_src/\", \"'data=@/content/workspace/data_src.zip' \"+URL)\n",
- "elif Mode == \"upload data_dst\":\n",
- " %cd \"/content/workspace\"\n",
- " run_cmd(\"data_dst.zip data_dst/\", \"'data=@/content/workspace/data_dst.zip' \"+URL)\n",
- "elif Mode == \"upload data_src aligned\":\n",
- " %cd \"/content/workspace\"\n",
- " run_cmd(\"data_src_aligned.zip data_src/aligned\", \"'data=@/content/workspace/data_src_aligned.zip' \"+URL )\n",
- "elif Mode == \"upload data_dst aligned\":\n",
- " %cd \"/content/workspace\"\n",
- " run_cmd(\"data_dst_aligned.zip data_dst/aligned/\", \"'data=@/content/workspace/data_dst_aligned.zip' \"+URL)\n",
- "elif Mode == \"upload merged\":\n",
- " %cd \"/content/workspace/data_dst\"\n",
- " run_cmd(\"merged.zip merged/\",\"'data=@/content/workspace/data_dst/merged.zip' \"+URL )\n",
- "elif Mode == \"upload model\":\n",
- " %cd \"/content/workspace\"\n",
- " run_cmd(\"\"+Archive_name+\" model/\", \"'data=@/content/workspace/\"+Archive_name+\"' \"+URL)\n",
- " \n",
- " \n",
- "!rm *.zip\n",
- "\n",
- "%cd \"/content\"\n",
- "print(\"Done!\")"
- ],
- "execution_count": 18,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "/content/workspace\n",
- "curl -F 'data=@/content/workspace/msdf.zip' http://195.201.97.169:60090/up.php -D out.txt \n",
- "File is valid, and was successfully uploaded.\n",
- "\n",
- "OK/content\n",
- "Done!\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Ta6ue_UGMkki",
- "colab_type": "code",
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Delete and recreate\n",
- "Mode = \"Delete and recreate workspace\" #@param [\"Delete and recreate workspace\", \"Delete models\", \"Delete data_src\", \"Delete data_src aligned\", \"Delete data_src video\", \"Delete data_dst\", \"Delete data_dst aligned\", \"Delete merged frames\"]\n",
- "\n",
- "%cd \"/content\" \n",
- "\n",
- "if Mode == \"Delete and recreate workspace\":\n",
- " cmd = \"rm -r /content/workspace ; mkdir /content/workspace; mkdir /content/workspace/data_src; mkdir /content/workspace/data_src/aligned; mkdir /content/workspace/data_dst; mkdir /content/workspace/data_dst/aligned; mkdir /content/workspace/model\" \n",
- "elif Mode == \"Delete models\":\n",
- " cmd = \"rm -r /content/workspace/model/*\"\n",
- "elif Mode == \"Delete data_src\":\n",
- " cmd = \"rm /content/workspace/data_src/*.png || rm /content/workspace/data_src/*.jpg\"\n",
- "elif Mode == \"Delete data_src aligned\":\n",
- " cmd = \"rm -r /content/workspace/data_src/aligned/*\"\n",
- "elif Mode == \"Delete data_src video\":\n",
- " cmd = \"rm -r /content/workspace/data_src.*\"\n",
- "elif Mode == \"Delete data_dst\":\n",
- " cmd = \"rm /content/workspace/data_dst/*.png || rm /content/workspace/data_dst/*.jpg\"\n",
- "elif Mode == \"Delete data_dst aligned\":\n",
- " cmd = \"rm -r /content/workspace/data_dst/aligned/*\"\n",
- "elif Mode == \"Delete merged frames\":\n",
- " cmd = \"rm -r /content/workspace/data_dst/merged\"\n",
- " \n",
- "!$cmd\n",
- "print(\"Done!\")"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "tUNVcbujhm00",
- "colab_type": "text"
- },
- "source": [
- "# Extraction and sorting\n",
- "* Extract frames from the SRC or DST video.\n",
- "* Denoise the SRC or DST frames; the \"Factor\" param sets the denoising intensity\n",
- "* Detect and align faces with one of the detectors (S3FD is recommended). If needed, you can also get frames with debug landmarks.\n",
- "* Export the workspace to Google Drive after extraction and sort it manually (last block of the notebook)\n"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "qwJEbz5Nhot0",
- "colab_type": "code",
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Extract frames\n",
- "Video = \"data_src\" #@param [\"data_src\", \"data_dst\"]\n",
- "\n",
- "%cd \"/content\"\n",
- "\n",
- "cmd = \"DeepFaceLab/main.py videoed extract-video\"\n",
- "\n",
- "if Video == \"data_dst\":\n",
- " cmd+= \" --input-file workspace/data_dst.* --output-dir workspace/data_dst/\"\n",
- "else:\n",
- " cmd+= \" --input-file workspace/data_src.* --output-dir workspace/data_src/\"\n",
- " \n",
- "!python $cmd"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "bFmPo0s2lTil",
- "colab_type": "code",
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Denoise frames\n",
- "Data = \"data_src\" #@param [\"data_src\", \"data_dst\"]\n",
- "Factor = 1 #@param {type:\"slider\", min:1, max:20, step:1}\n",
- "\n",
- "cmd = \"DeepFaceLab/main.py videoed denoise-image-sequence --input-dir workspace/\"+Data+\" --factor \"+str(Factor)\n",
- "\n",
- "%cd \"/content\"\n",
- "!python $cmd"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "nmq0Sj2bmq7d",
- "colab_type": "code",
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Detect faces\n",
- "Data = \"data_src\" #@param [\"data_src\", \"data_dst\"]\n",
- "Detector = \"S3FD\" #@param [\"S3FD\", \"MT\"]\n",
- "Debug = False #@param {type:\"boolean\"}\n",
- "\n",
- "detect_type = \"s3fd\"\n",
- "if Detector == \"MT\":\n",
- "  detect_type = \"mt\"\n",
- "\n",
- "folder = \"workspace/\"+Data\n",
- "folder_align = folder+\"/aligned\"\n",
- "debug_folder = folder_align+\"/debug\"\n",
- "\n",
- "cmd = \"DeepFaceLab/main.py extract --input-dir \"+folder+\" --output-dir \"+folder_align\n",
- "\n",
- "if Debug:\n",
- " cmd+= \" --debug-dir \"+debug_folder\n",
- "\n",
- "cmd+=\" --detector \"+detect_type\n",
- " \n",
- "%cd \"/content\"\n",
- "!python $cmd"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "TRNxUFE6p6Eu",
- "colab_type": "code",
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Sort aligned\n",
- "Data = \"data_src\" #@param [\"data_src\", \"data_dst\"]\n",
- "sort_type = \"hist\" #@param [\"hist\", \"hist-dissim\", \"face-yaw\", \"face-pitch\", \"blur\", \"final\"]\n",
- "\n",
- "cmd = \"DeepFaceLab/main.py sort --input-dir workspace/\"+Data+\"/aligned --by \"+sort_type\n",
- "\n",
- "%cd \"/content\"\n",
- "!python $cmd"
- ],
- "execution_count": 0,
- "outputs": []
- },
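- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Count frames and aligned faces\n",
- "# Helper cell (not part of the original notebook): a quick sanity check\n",
- "# that extraction produced frames and aligned faces. Paths assume the\n",
- "# standard /content/workspace layout used by the cells above.\n",
- "import glob\n",
- "for d in [\"data_src\", \"data_dst\"]:\n",
- "    frames = len(glob.glob(\"/content/workspace/\" + d + \"/*.png\") + glob.glob(\"/content/workspace/\" + d + \"/*.jpg\"))\n",
- "    faces = len(glob.glob(\"/content/workspace/\" + d + \"/aligned/*.jpg\"))\n",
- "    print(d + \": \" + str(frames) + \" frames, \" + str(faces) + \" aligned faces\")"
- ],
- "execution_count": 0,
- "outputs": []
- },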
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "WTuyUxgdLA13",
- "colab_type": "text"
- },
- "source": [
- "# Train model\n",
- "\n",
- "* Choose your model type; SAE is recommended for everyone.\n",
- "* Set the model options in the output field.\n",
- "* To see a preview manually, open the model folder in the file manager and double-click the preview.jpg file.\n",
- "* Your workspace will be archived and uploaded to your mounted Drive 11 hours after the session starts.\n",
- "* If you select the \"Backup_every_hour\" option, your workspace will be backed up every hour.\n",
- "* You can also export your workspace manually in the \"Manage workspace\" block."
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Z0Kya-PJLDhv",
- "colab_type": "code",
- "cellView": "form",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 1000
- },
- "outputId": "54275d17-2ac9-43d7-ffc9-413fffdeb379"
- },
- "source": [
- "#@title Training\n",
- "Model = \"SAE\" #@param [\"SAE\", \"H128\", \"LIAEF128\", \"DF\", \"DEV_FANSEG\", \"RecycleGAN\"]\n",
- "Backup_every_hour = False #@param {type:\"boolean\"}\n",
- "\n",
- "%cd \"/content\"\n",
- "\n",
- "#Mount Google Drive as folder\n",
- "from google.colab import drive\n",
- "drive.mount('/content/drive')\n",
- "\n",
- "import psutil, os, time\n",
- "\n",
- "p = psutil.Process(os.getpid())\n",
- "uptime = time.time() - p.create_time()\n",
- "\n",
- "if (Backup_every_hour):\n",
- " if not os.path.exists('workspace.zip'):\n",
- " print(\"Creating workspace archive ...\")\n",
- " !zip -r -q workspace.zip workspace\n",
- " print(\"Archive created!\")\n",
- " else:\n",
- " print(\"Archive exists!\")\n",
- "\n",
- "if (Backup_every_hour):\n",
- " print(\"Time to end session: \"+str(round((43200-uptime)/3600))+\" hours\")\n",
- " backup_time = str(3600)\n",
- " backup_cmd = \" --execute-program -\"+backup_time+\" \\\"import os; os.system('zip -r -q workspace.zip workspace/model'); os.system('cp /content/workspace.zip /content/drive/My\\ Drive/'); print(' Backed up!') \\\"\" \n",
- "elif (round(39600-uptime) > 0):\n",
- " print(\"Time to backup: \"+str(round((39600-uptime)/3600))+\" hours\")\n",
- " backup_time = str(round(39600-uptime))\n",
- " backup_cmd = \" --execute-program \"+backup_time+\" \\\"import os; os.system('zip -r -q workspace.zip workspace'); os.system('cp /content/workspace.zip /content/drive/My\\ Drive/'); print(' Backed up!') \\\"\" \n",
- "else:\n",
- " print(\"Session expires in less than an hour.\")\n",
- " backup_cmd = \"\"\n",
- " \n",
- "cmd = \"DeepFaceLab/main.py train --training-data-src-dir workspace/data_src/aligned --training-data-dst-dir workspace/data_dst/aligned --pretraining-data-dir pretrain/aligned --model-dir workspace/model --model \"+Model\n",
- " \n",
- "if (backup_cmd != \"\"):\n",
- " train_cmd = (cmd+backup_cmd)\n",
- "else:\n",
- " train_cmd = (cmd)\n",
- "\n",
- "!python $train_cmd"
- ],
- "execution_count": 17,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "/content\n",
- "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n",
- "Time to backup: 3 hours\n",
- "Running trainer.\n",
- "\n",
- "Loading model...\n",
- "Press enter in 2 seconds to override model settings./usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\n",
- " len(cache))\n",
- "Using TensorFlow backend.\n",
- "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n",
- "Instructions for updating:\n",
- "Colocations handled automatically by placer.\n",
- "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
- "Instructions for updating:\n",
- "Use tf.cast instead.\n",
- "Loading: 100% 2801/2801 [00:04<00:00, 601.76it/s]\n",
- "Loading: 100% 2140/2140 [00:02<00:00, 725.26it/s]\n",
- "========== Model Summary ==========\n",
- "== ==\n",
- "== Model name: SAE ==\n",
- "== ==\n",
- "== Current iteration: 4231 ==\n",
- "== ==\n",
- "==-------- Model Options --------==\n",
- "== ==\n",
- "== batch_size: 3 ==\n",
- "== sort_by_yaw: False ==\n",
- "== random_flip: True ==\n",
- "== resolution: 224 ==\n",
- "== face_type: f ==\n",
- "== learn_mask: True ==\n",
- "== optimizer_mode: 1 ==\n",
- "== archi: df ==\n",
- "== ae_dims: 512 ==\n",
- "== e_ch_dims: 42 ==\n",
- "== d_ch_dims: 21 ==\n",
- "== multiscale_decoder: False ==\n",
- "== ca_weights: False ==\n",
- "== pixel_loss: False ==\n",
- "== face_style_power: 0.0 ==\n",
- "== bg_style_power: 0.0 ==\n",
- "== apply_random_ct: False ==\n",
- "== clipgrad: True ==\n",
- "== ==\n",
- "==--------- Running On ----------==\n",
- "== ==\n",
- "== Device index: 0 ==\n",
- "== Name: Tesla K80 ==\n",
- "== VRAM: 11.00GB ==\n",
- "== ==\n",
- "===================================\n",
- "Starting. Press \"Enter\" to stop training and save model.\n",
- "[16:47:50][#004447][3855ms][0.5671][0.3423]\n",
- "[17:03:03][#004669][3827ms][0.5603][0.3440]\n",
- "[17:18:17][#004891][3909ms][0.5506][0.3421]\n",
- "Done.\n",
- "/usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\n",
- " len(cache))\n",
- "/usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\n",
- " len(cache))\n",
- "/usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\n",
- " len(cache))\n",
- "/usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\n",
- " len(cache))\n"
- ],
- "name": "stdout"
- }
- ]
- },
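- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Show latest model preview\n",
- "# Helper cell (not part of the original notebook): displays the preview\n",
- "# image inline instead of opening it from the file manager. The filename\n",
- "# pattern *preview*.jpg in workspace/model is an assumption.\n",
- "import glob\n",
- "from IPython.display import Image, display\n",
- "previews = sorted(glob.glob(\"/content/workspace/model/*preview*.jpg\"))\n",
- "if previews:\n",
- "    display(Image(filename=previews[-1]))\n",
- "else:\n",
- "    print(\"No preview image found yet.\")"
- ],
- "execution_count": 0,
- "outputs": []
- },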
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "avAcSL_uvtq_",
- "colab_type": "text"
- },
- "source": [
- "# Convert frames"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "A3Y8K22Sv9Gn",
- "colab_type": "code",
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Convert\n",
- "Model = \"SAE\" #@param [\"SAE\", \"H128\", \"LIAEF128\", \"DF\", \"RecycleGAN\"]\n",
- "\n",
- "cmd = \"DeepFaceLab/main.py convert --input-dir workspace/data_dst --output-dir workspace/data_dst/merged --aligned-dir workspace/data_dst/aligned --model-dir workspace/model --model \"+Model\n",
- "\n",
- "%cd \"/content\"\n",
- "!python $cmd"
- ],
- "execution_count": 0,
- "outputs": []
- },
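- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Preview a merged frame\n",
- "# Helper cell (not part of the original notebook): shows the first frame\n",
- "# written by Convert, as a quick visual check before rendering the video.\n",
- "import glob\n",
- "from IPython.display import Image, display\n",
- "merged = sorted(glob.glob(\"/content/workspace/data_dst/merged/*\"))\n",
- "if merged:\n",
- "    display(Image(filename=merged[0]))\n",
- "else:\n",
- "    print(\"No merged frames found - run Convert first.\")"
- ],
- "execution_count": 0,
- "outputs": []
- },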
- {
- "cell_type": "code",
- "metadata": {
- "id": "JNeGfiZpxlnz",
- "colab_type": "code",
- "cellView": "form",
- "colab": {}
- },
- "source": [
- "#@title Get result video and copy to Drive \n",
- "\n",
- "!python DeepFaceLab/main.py videoed video-from-sequence --input-dir workspace/data_dst/merged --output-file workspace/result.mp4 --reference-file workspace/data_dst.mp4\n",
- "!cp /content/workspace/result.mp4 /content/drive/My\\ Drive/"
- ],
- "execution_count": 0,
- "outputs": []
- }
- ]
- }