# TensorFlow/PyTorch + GPU + Docker

The steps described here were made and tested on **Ubuntu 18.04 x64**. Their main purpose is to make it easy to prepare an environment from scratch to play with Deep Learning on TensorFlow/PyTorch.
## Steps to prepare environment

1. Update system

   `$> sudo apt-get -y update`

2. Install requirements

   `$> sudo apt install -y python3-pip curl`

3. Install getgist

   `$> pip3 install getgist`
   > If you have any issues with getgist, REBOOT your system
4. Install Miniconda

   `$> getgist rodrigocmoraes install-miniconda.sh`
   `$> bash install-miniconda.sh`

   > Execute the steps below manually when necessary:
   > * Enter
   > * yes
   > * Enter
   > * yes

   > After installation, execute the command below:

   `$> source ~/.`
5. Install NVidia Driver

   `$> getgist rodrigocmoraes install-nvidia-driver.sh`
   `$> bash install-nvidia-driver.sh`
   > Execute the steps below manually when necessary:
   > * Enter
6. Install Docker/Docker Compose

   `$> getgist rodrigocmoraes install-docker.sh`
   `$> bash install-docker.sh`
   `$> sudo usermod -aG docker $USER`
   `$> sudo reboot`

7. Install NVidia Docker

   `$> getgist rodrigocmoraes install-nvidia-docker.sh`
   `$> bash install-nvidia-docker.sh`
   `$> sudo reboot`
8. Create conda environments
   8.1. TensorFlow - GPU:

   `$> getgist rodrigocmoraes spec-file-tensorflow-gpu.txt`
   `$> conda create --name tensorflow-gpu --file spec-file-tensorflow-gpu.txt python=3.6.8`
   `$> conda activate tensorflow-gpu`
   `$> pip install opencv-python`
   `$> conda deactivate`

   8.2. PyTorch - GPU:

   `$> getgist rodrigocmoraes spec-file-pytorch-gpu.txt`
   `$> conda create --name pytorch-gpu --file spec-file-pytorch-gpu.txt python=3.6.8`
   `$> conda activate pytorch-gpu`
   `$> pip install opencv-python future`
   `$> conda deactivate`
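The `spec-file-*.txt` files fetched above are explicit conda package lists, as produced by `conda list --explicit`. Such a file looks roughly like this (the header comments are standard conda output; the package URLs are generic placeholders, not the actual contents of these gists):

```text
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
@EXPLICIT
https://repo.anaconda.com/pkgs/main/linux-64/<package>-<version>-<build>.tar.bz2
```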
## Test environments:

* TensorFlow - GPU:
```python
from tensorflow.python.client import device_lib

def get_available_gpus():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']

print(get_available_gpus())
```
> Expected result:

`>>> ['/device:GPU:0']`
* PyTorch - GPU:

```python
import torch

device_id = torch.cuda.current_device()
print(device_id)
print(torch.cuda.get_device_name(device_id))
```
## Most used *conda* commands:

* List existing conda environments:

  `$> conda env list`
* Activate a conda environment:

  `$> conda activate ENVIRONMENT_NAME`
* Deactivate the current conda environment:

  `$> conda deactivate`
* Install packages into a conda environment:

  `$> conda install --name ENVIRONMENT_NAME PACKAGE[==X.YY.ZZ]`

  or

  `$> conda activate ENVIRONMENT_NAME`
  `$> conda install PACKAGE[==X.YY.ZZ]`

  or from a *spec-file.txt*:

  `$> conda install --name ENVIRONMENT_NAME PACKAGE[==X.YY.ZZ] --file spec-file.txt`

  or from a *requirements.txt*:

  `$> while read requirement; do conda install --yes $requirement; done < requirements.txt`
* Export environment specification:
  * From the **conda** package manager:

    `$> conda list --explicit > spec-file-${CONDA_DEFAULT_ENV}.txt`
  * From the **pip** package manager:

    `$> pip freeze > requirements-${CONDA_DEFAULT_ENV}.txt`
* Clone an environment:

  `$> conda create --name NEW_ENV_NAME --clone ENV_THAT_WILL_BE_CLONED`
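The `while read` loop above installs each requirement on its own line, one `conda install` at a time, so a single unavailable package does not abort the rest. The same logic can be sketched in Python; `conda_install_commands` is a hypothetical helper, and the commands are printed rather than executed:

```python
import shlex

def conda_install_commands(requirements_text):
    """Build one `conda install --yes <spec>` command per requirement line,
    skipping blank lines and comments."""
    commands = []
    for line in requirements_text.splitlines():
        spec = line.strip()
        if not spec or spec.startswith("#"):
            continue
        commands.append(["conda", "install", "--yes"] + shlex.split(spec))
    return commands

example = "numpy==1.16.4\n# a comment\n\nopencv-python\n"
for cmd in conda_install_commands(example):
    print(" ".join(cmd))
```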