(root) pytorch@pytorch-desktop:~/multigpu-test$
(root) pytorch@pytorch-desktop:~/multigpu-test$ NCCL_DEBUG=INFO python test-simple.py
Checkpoint 1
Checkpoint 2
INFO NCCL debug level set to INFO
NCCL version 1.3.5 compiled with CUDA 9.0
INFO rank 0 using buffSize = 2097152
INFO rank 0 using device 0 (0000:0C:00.0)
INFO rank 1 using buffSize = 2097152
INFO rank 1 using device 1 (0000:0D:00.0)
INFO rank access 0 -> 0 via common device
INFO rank access 0 -> 1 via P2P device mem
INFO rank access 1 -> 0 via P2P device mem
INFO rank access 1 -> 1 via common device
INFO Global device memory space is enabled
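The contents of test-simple.py are not shown in the paste. As a rough sketch only, a minimal script like the following could produce a trace of this shape: two "Checkpoint" prints, then NCCL initialization (the INFO lines above) triggered by the first cross-GPU collective. The checkpoint strings, tensor size, and use of torch.cuda.comm.broadcast are assumptions, not the original script; the code skips gracefully when fewer than two CUDA GPUs are visible.

```python
def main():
    # PyTorch and >= 2 CUDA GPUs are required for the NCCL path;
    # bail out cleanly otherwise so the sketch runs anywhere.
    try:
        import torch
        from torch.cuda import comm
    except ImportError:
        print("PyTorch not installed; skipping")
        return None

    print("Checkpoint 1")  # assumed: printed before touching CUDA
    if not torch.cuda.is_available() or torch.cuda.device_count() < 2:
        print("fewer than 2 CUDA GPUs visible; skipping")
        return None
    print("Checkpoint 2")  # assumed: printed before the first collective

    # A broadcast from GPU 0 to GPUs 0 and 1 initializes NCCL on first use,
    # which is what emits the INFO lines when NCCL_DEBUG=INFO is set.
    src = torch.ones(1 << 19, device="cuda:0")  # size is an arbitrary choice
    copies = comm.broadcast(src, devices=[0, 1])
    return [t.sum().item() for t in copies]

result = main()
```

With `NCCL_DEBUG=INFO` in the environment, the rank/device and P2P access lines are printed by NCCL itself during that first collective, not by the script.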