a guest · Jul 17th, 2019

[for Spencer Wilson]
full credit to Matthew Rocklin [NVIDIA]
modified by Naureen Ghani for [SWC HPC]

[enter the cluster]
ssh ssh.swc.ucl.ac.uk -l spencerw
[do not do any computation on the log-in node; only submit jobs from here]
[enter the cpu/gpu nodes to do all computation]
ssh gpu-380-10
module load miniconda

[set up a conda env:]
conda create -n dask-tutorial python=3.7 anaconda
conda activate dask-tutorial
conda install -c conda-forge vim

[get dask + dependencies:]
conda install dask
conda install -c conda-forge dask-jobqueue

[open an ipython terminal:]
ipython
import dask
import dask.distributed
from dask_jobqueue import SLURMCluster
[if the import fails, install it first with: pip install dask-jobqueue]

[note: must specify the gpu partition if needed; jobs go to the cpu partition by default]
cluster = SLURMCluster(queue='gpu', processes=6, cores=24, memory="2GB",
                       env_extra=['export LANG="en_US.utf8"',
                                  'export LANGUAGE="en_US.utf8"',
                                  'export LC_ALL="en_US.utf8"'])
cluster.scale(10)  # this may take a few seconds to launch
from dask.distributed import Client
client = Client(cluster)
client  # lists active processes and cores

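As a rough sketch of the resource split (an assumption about how dask-jobqueue divides one SLURM job among its worker processes, not the library's exact algorithm): each of the 6 processes above gets an equal share of the 24 cores and 2 GB of memory.

```python
# Hypothetical illustration of how one SLURM job's resources are divided
# among its worker processes, using the values passed to SLURMCluster above.
cores, processes = 24, 6
memory_gb = 2.0

threads_per_worker = cores // processes       # 24 // 6 = 4 threads each
memory_per_worker_gb = memory_gb / processes  # ~0.33 GB each

print(threads_per_worker, round(memory_per_worker_gb, 2))  # 4 0.33
```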
[open a separate terminal and ssh to the log-in node:]
squeue -u spencerw
[you should see the 10 requested workers listed]

[return to ipython:]
cluster.scale(20)  # double the number of workers

[do a simple computation to test dask:]
import time

def slow_increment(x):
    time.sleep(1)
    return x + 1

from dask.distributed import progress
futures = client.map(slow_increment, range(5000))
progress(futures)

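client.map returns futures much like the standard library's concurrent.futures, and client.gather collects the finished results. A cluster-free sketch of the same submit-then-collect pattern using only the stdlib (task count and sleep time shrunk for speed):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_increment(x):
    time.sleep(0.01)
    return x + 1

# Submit many small tasks to local threads, then collect results in order,
# analogous to client.map(...) followed by client.gather(futures).
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(slow_increment, i) for i in range(100)]
    results = [f.result() for f in futures]

print(results[:3])  # [1, 2, 3]
```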
print(cluster.job_script())
exit()

[back on the log-in node:]
ls ~/.config
ls ~/.config/dask
vi ~/.config/dask/jobqueue.yaml

[uncomment all settings under "slurm"]

[on the log-in node, to see full node details:]
sinfo --Node --long

S:C:T = sockets:cores:threads
ifconfig  # for network interface details

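The S:C:T column multiplies out to a node's logical CPU count, which is what you size your workers against. For a hypothetical 2:12:2 node (example values, read yours from the sinfo output):

```python
# S:C:T from sinfo = sockets : cores per socket : threads per core.
# A node's logical CPU count is the product of the three.
sockets, cores_per_socket, threads_per_core = 2, 12, 2
logical_cpus = sockets * cores_per_socket * threads_per_core
print(logical_cpus)  # 48
```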
[edit the slurm config file accordingly]
[hit esc]
:wq  [write + quit in the vim editor]
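For reference, the slurm section of jobqueue.yaml has roughly this shape once uncommented (a sketch; the values shown are illustrative placeholders matching the SLURMCluster call above, not the cluster's actual defaults):

```yaml
jobqueue:
  slurm:
    name: dask-worker      # job name for the workers
    cores: 24              # total threads per job
    processes: 6           # worker processes per job
    memory: 2GB            # total memory per job
    queue: gpu             # partition; omit for the default cpu partition
    walltime: '00:30:00'
```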