a guest
Oct 19th, 2017
(root) pytorch@pytorch-desktop:~/multigpu-test$
(root) pytorch@pytorch-desktop:~/multigpu-test$ NCCL_DEBUG=INFO python test-simple.py
Checkpoint 1
Checkpoint 2
INFO NCCL debug level set to INFO
NCCL version 1.3.5 compiled with CUDA 9.0
INFO rank 0 using buffSize = 2097152
INFO rank 0 using device 0 (0000:0C:00.0)
INFO rank 1 using buffSize = 2097152
INFO rank 1 using device 1 (0000:0D:00.0)
INFO rank access 0 -> 0 via common device
INFO rank access 0 -> 1 via P2P device mem
INFO rank access 1 -> 0 via P2P device mem
INFO rank access 1 -> 1 via common device
INFO Global device memory space is enabled
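The test-simple.py referenced above is not shown. A minimal sketch of a script that could produce output in this shape (the checkpoint prints, tensor size, and the `run` function name are assumptions, not the original code), assuming PyTorch with two CUDA devices and NCCL support:

```python
import torch

def run():
    # "Checkpoint 1": before any GPU work
    print("Checkpoint 1")
    if torch.cuda.device_count() < 2:
        print("Need at least 2 CUDA devices for the NCCL path")
        return
    # One tensor per GPU; allocating on each device initializes it,
    # matching "rank N using device N" in the log above
    tensors = [torch.ones(1 << 20, device="cuda:%d" % i) for i in range(2)]
    print("Checkpoint 2")
    # The first collective initializes NCCL; with NCCL_DEBUG=INFO it
    # emits the buffSize and P2P rank-access INFO lines seen above
    import torch.cuda.nccl as nccl
    nccl.all_reduce(tensors)

if __name__ == "__main__":
    run()
```

Run as `NCCL_DEBUG=INFO python test-simple.py`; the "via P2P device mem" lines indicate that NCCL found a peer-to-peer path between the two GPUs rather than staging transfers through host memory.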