LOGFILE
Posted by MarcHoemberger, Mar 1st, 2018
Log file opened on Thu Mar 1 16:05:27 2018
Host: gpu-compute-5-8.local pid: 15547 rank ID: 0 number of ranks: 1
                   :-) GROMACS - gmx mdrun, 2016.4 (-:

                        GROMACS is written by:
Emile Apol  Rossen Apostolov  Herman J.C. Berendsen  Par Bjelkmar
Aldert van Buuren  Rudi van Drunen  Anton Feenstra  Gerrit Groenhof
Christoph Junghans  Anca Hamuraru  Vincent Hindriksen  Dimitrios Karkoulis
Peter Kasson  Jiri Kraus  Carsten Kutzner  Per Larsson
Justin A. Lemkul  Magnus Lundborg  Pieter Meulenhoff  Erik Marklund
Teemu Murtola  Szilard Pall  Sander Pronk  Roland Schulz
Alexey Shvetsov  Michael Shirts  Alfons Sijbers  Peter Tieleman
Teemu Virolainen  Christian Wennberg  Maarten Wolf
                        and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS: gmx mdrun, version 2016.4
Executable:  /home/hoemberg/SOFTWARE/GROMACS/GMX_2016.4_AVX2/bin/gmx_mpi
Data prefix: /home/hoemberg/SOFTWARE/GROMACS/GMX_2016.4_AVX2
Working dir: /home/hoemberg/ADK/PRESSURE/EADK/CLOSED/TMD_KAPPA_TEST/GPU/TMD_1000_GPU
Command line:
  gmx_mpi mdrun -pin on -pinoffset 0 -gpu_id 0 -ntomp 8 -v -cpi -s TMD_1000.tpr -o TMD_1000.trr -x TMD_1000.xtc -g TMD_1000.log -c eADK_TMD_1000.gro

GROMACS version:    2016.4
Precision:          single
Memory model:       64 bit
MPI library:        MPI
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
GPU support:        CUDA
SIMD instructions:  SSE4.1
FFT library:        fftw-3.3.5-sse2
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
Built on:           Thu Mar 1 11:37:24 EST 2018
Built by:           hoemberg@gpu-compute-5-8.local [CMAKE]
Build OS/arch:      Linux 2.6.32-573.18.1.el6.x86_64 x86_64
Build CPU vendor:   Intel
Build CPU brand:    Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
Build CPU family:   6 Model: 62 Stepping: 4
Build CPU features: aes apic avx clfsh cmov cx8 cx16 f16c htt lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
C compiler:         /scisoft/local/GCC-4.9.3/GCC-GFORTRAN/bin/gcc GNU 4.9.3
C compiler flags:   -msse4.1 -pthread -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
C++ compiler:       /scisoft/local/GCC-4.9.3/GCC-GFORTRAN/bin/g++ GNU 4.9.3
C++ compiler flags: -msse4.1 -pthread -std=c++0x -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
CUDA compiler:      /scisoft/CUDA-7.5.18/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2015 NVIDIA Corporation;Built on Tue_Aug_11_14:27:32_CDT_2015;Cuda compilation tools, release 7.5, V7.5.17
CUDA compiler flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_52,code=compute_52;-use_fast_math;;;-Xcompiler;,-msse4.1,-pthread,,,,,;-Xcompiler;-O3,-DNDEBUG,-funroll-all-loops,-fexcess-precision=fast,,;
CUDA driver:        7.50
CUDA runtime:       7.50

Running on 1 node with total 16 cores, 16 logical cores, 4 compatible GPUs
Hardware detected on host gpu-compute-5-8.local (the node of MPI rank 0):
  CPU info:
    Vendor: Intel
    Brand:  Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
    Family: 6 Model: 62 Stepping: 4
    Features: aes apic avx clfsh cmov cx8 cx16 f16c htt lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
    SIMD instructions most likely to fit this hardware: AVX_256
    SIMD instructions selected at GROMACS compile time: SSE4.1

  Hardware topology: Basic
    Sockets, cores, and logical processors:
      Socket 0: [ 0] [ 1] [ 2] [ 3] [ 4] [ 5] [ 6] [ 7]
      Socket 1: [ 8] [ 9] [ 10] [ 11] [ 12] [ 13] [ 14] [ 15]
  GPU info:
    Number of GPUs detected: 4
    #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
    #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
    #2: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
    #3: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible


Binary not matching hardware - you might be losing performance.
SIMD instructions most likely to fit this hardware: AVX_256
SIMD instructions selected at GROMACS compile time: SSE4.1
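The warning above means the binary was compiled for SSE4.1 while the host Xeon E5-2650 v2 supports AVX_256; rebuilding GROMACS with the documented CMake option `-DGMX_SIMD=AVX_256` would remove the penalty. As a minimal sketch, a hypothetical helper (not part of GROMACS) that flags this mismatch when scanning a log:

```python
import re

def simd_mismatch(log_text):
    """Return (hardware, compiled) if the log reports a SIMD mismatch, else None."""
    hw = re.search(r"most likely to fit this hardware:\s*(\S+)", log_text)
    built = re.search(r"selected at GROMACS compile time:\s*(\S+)", log_text)
    if hw and built and hw.group(1) != built.group(1):
        return hw.group(1), built.group(1)
    return None

# The two relevant lines from this log:
excerpt = (
    "SIMD instructions most likely to fit this hardware: AVX_256\n"
    "SIMD instructions selected at GROMACS compile time: SSE4.1\n"
)
print(simd_mismatch(excerpt))  # -> ('AVX_256', 'SSE4.1')
```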
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E.
Lindahl
GROMACS: High performance molecular simulations through multi-level
parallelism from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with
GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.
Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular
simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------

For optimal performance with a GPU nstlist (now 10) should be larger.
The optimum depends on your CPU and GPU resources.
You might want to try several nstlist values.
Changing nstlist from 10 to 40, rlist from 1.2 to 1.28
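The auto-adjustment above reflects that, with the Verlet cutoff scheme, nstlist is a hint that mdrun may increase at startup for GPU efficiency. An illustrative .mdp fragment (values mirror this run, not a recommendation):

```
; illustrative .mdp fragment; with the Verlet scheme,
; mdrun may still increase nstlist at startup
cutoff-scheme = Verlet
nstlist       = 40
```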

Input Parameters:
   integrator = md
   tinit = 0
   dt = 0.002
   nsteps = 26000000
   init-step = 0
   simulation-part = 1
   comm-mode = Linear
   nstcomm = 10
   bd-fric = 0
   ld-seed = -297470968
   emtol = 10
   emstep = 0.01
   niter = 20
   fcstep = 0
   nstcgsteep = 1000
   nbfgscorr = 10
   rtpi = 0.05
   nstxout = 500000
   nstvout = 500000
   nstfout = 0
   nstlog = 2500
   nstcalcenergy = 10
   nstenergy = 2500
   nstxout-compressed = 2500
   compressed-x-precision = 1000
   cutoff-scheme = Verlet
   nstlist = 40
   ns-type = Grid
   pbc = xyz
   periodic-molecules = false
   verlet-buffer-tolerance = 0.005
   rlist = 1.28
   coulombtype = PME
   coulomb-modifier = Potential-shift
   rcoulomb-switch = 0
   rcoulomb = 1.2
   epsilon-r = 1
   epsilon-rf = inf
   vdw-type = Cut-off
   vdw-modifier = Force-switch
   rvdw-switch = 1
   rvdw = 1.2
   DispCorr = No
   table-extension = 1
   fourierspacing = 0.12
   fourier-nx = 72
   fourier-ny = 72
   fourier-nz = 72
   pme-order = 6
   ewald-rtol = 1e-05
   ewald-rtol-lj = 0.001
   lj-pme-comb-rule = Geometric
   ewald-geometry = 0
   epsilon-surface = 0
   implicit-solvent = No
   gb-algorithm = Still
   nstgbradii = 1
   rgbradii = 1
   gb-epsilon-solvent = 80
   gb-saltconc = 0
   gb-obc-alpha = 1
   gb-obc-beta = 0.8
   gb-obc-gamma = 4.85
   gb-dielectric-offset = 0.009
   sa-algorithm = Ace-approximation
   sa-surface-tension = 2.05016
   tcoupl = Nose-Hoover
   nsttcouple = 10
   nh-chain-length = 1
   print-nose-hoover-chain-variables = false
   pcoupl = Parrinello-Rahman
   pcoupltype = Isotropic
   nstpcouple = 10
   tau-p = 1
   compressibility (3x3):
      compressibility[ 0]={ 4.50000e-05, 0.00000e+00, 0.00000e+00}
      compressibility[ 1]={ 0.00000e+00, 4.50000e-05, 0.00000e+00}
      compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 4.50000e-05}
   ref-p (3x3):
      ref-p[ 0]={ 1.00000e+00, 0.00000e+00, 0.00000e+00}
      ref-p[ 1]={ 0.00000e+00, 1.00000e+00, 0.00000e+00}
      ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 1.00000e+00}
   refcoord-scaling = COM
   posres-com (3):
      posres-com[0]= 5.62934e-01
      posres-com[1]= 4.22365e-01
      posres-com[2]= 4.54117e-01
   posres-comB (3):
      posres-comB[0]= 5.62934e-01
   QMMM = false
   QMconstraints = 0
   QMMMscheme = 0
   MMChargeScaleFactor = 1
   qm-opts:
   ngQM = 0
   constraint-algorithm = Lincs
   continuation = false
   Shake-SOR = false
   shake-tol = 0.0001
   lincs-order = 4
   lincs-iter = 1
   lincs-warnangle = 30
   nwall = 0
   wall-type = 9-3
   wall-r-linpot = -1
   wall-atomtype[0] = -1
   wall-atomtype[1] = -1
   wall-density[0] = 0
   wall-density[1] = 0
   wall-ewald-zfac = 3
   pull = false
   rotation = false
   interactiveMD = false
   disre = No
   disre-weighting = Conservative
   disre-mixed = false
   dr-fc = 1000
   dr-tau = 0
   nstdisreout = 100
   orire-fc = 0
   orire-tau = 0
   nstorireout = 100
   free-energy = no
   cos-acceleration = 0
   deform (3x3):
      deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
      deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
      deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
   simulated-tempering = false
   E-x:
      n = 0
   E-xt:
      n = 0
   E-y:
      n = 0
   E-yt:
      n = 0
   E-z:
      n = 0
   E-zt:
      n = 0
   swapcoords = no
   userint1 = 0
   userint2 = 0
   userint3 = 0
   userint4 = 0
   userreal1 = 0
   userreal2 = 0
   userreal3 = 0
   userreal4 = 0
   grpopts:
      nrdf: 65676
      ref-t: 298
      tau-t: 0.6
      annealing: No
      annealing-npoints: 0
      acc: 0 0 0
      nfreeze: N N N
      energygrp-flags[ 0]: 0 0
      energygrp-flags[ 1]: 0 0

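The dump above is mostly flat "key = value" lines. A minimal sketch, assuming only those simple lines matter (section headers like "grpopts:" and colon-style sub-entries are skipped), for loading them into a dict:

```python
def parse_mdrun_params(text):
    """Collect simple 'key = value' lines from a GROMACS log parameter dump."""
    params = {}
    for line in text.splitlines():
        if "=" not in line:
            continue  # skip section headers like 'grpopts:' and 'nrdf: 65676'
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if key and value:
            params[key] = value
    return params

# A few lines from this log's dump:
excerpt = """\
integrator = md
dt = 0.002
nsteps = 26000000
tcoupl = Nose-Hoover
"""
params = parse_mdrun_params(excerpt)
print(params["dt"], params["nsteps"])  # -> 0.002 26000000
```

Values are kept as strings; converting e.g. `dt` to float is left to the caller, since types vary by key.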
Using 1 MPI process
Using 8 OpenMP threads

1 GPU user-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0

Will do PME sum in reciprocal space for electrostatic interactions.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------

Will do ordinary reciprocal space Ewald sum.
Using a Gaussian width (1/beta) of 0.384195 nm for Ewald
Cut-off's: NS: 1.28 Coulomb: 1.2 LJ: 1.2
System total charge: 0.000
Generated table with 1140 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 1140 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 1140 data points for 1-4 LJ12.
Tabscale = 500 points/nm
Potential shift: LJ r^-12: -2.648e-01 r^-6: -5.349e-01, Ewald -8.333e-06
Initialized non-bonded Ewald correction tables, spacing: 1.02e-03 size: 1176


Using GPU 8x8 non-bonded kernels

NOTE: With GPUs, reporting energy group contributions is not supported

Removing pbc first time

Overriding thread affinity set outside gmx mdrun

Pinning threads with an auto-selected logical core stride of 1

Initializing LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
-------- -------- --- Thank You --- -------- --------

The number of constraints is 3447

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
-------- -------- --- Thank You --- -------- --------

Intra-simulation communication will occur every 10 steps.
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0: System
There are: 32849 Atoms

Constraining the starting coordinates (step 0)

Constraining the coordinates at t0-dt (step 0)
RMS relative constraint deviation after constraining: 1.74e-06
Initial temperature: 4.17475e-05 K

Started mdrun on rank 0 Thu Mar 1 16:05:31 2018
           Step           Time
              0        0.00000

   Energies (kJ/mol)
            U-B    Proper Dih.  Improper Dih.      CMAP Dih.          LJ-14
    8.35075e+03    8.52456e+03    4.70575e+02   -1.97674e+02    3.13716e+03
     Coulomb-14        LJ (SR)   Coulomb (SR)   Coul. recip. Position Rest.
    3.11791e+04    3.80088e+04   -5.28805e+05    1.60428e+03    1.10424e-04
      Potential    Kinetic En.   Total Energy    Temperature Pressure (bar)
   -4.37728e+05    1.29531e+03   -4.36432e+05    4.74422e+00   -7.64626e+02
   Constr. rmsd
    1.84421e-05
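The energy report above alternates a row of right-aligned, 15-character field names with a row of values. A minimal parsing sketch under that fixed-width assumption (a hypothetical helper, not a GROMACS tool):

```python
def parse_energies(block, width=15):
    """Pair alternating name/value rows of a GROMACS 'Energies' block.

    Field names are right-aligned in fixed-width columns, so names are
    recovered by slicing; values are plain whitespace-separated floats.
    """
    energies = {}
    lines = [l for l in block.splitlines() if l.strip()]
    for names_row, values_row in zip(lines[0::2], lines[1::2]):
        values = [float(v) for v in values_row.split()]
        names = [names_row[i * width:(i + 1) * width].strip()
                 for i in range(len(values))]
        energies.update(zip(names, values))
    return energies

# Last two name/value pairs from this log's step-0 report:
excerpt = """\
      Potential    Kinetic En.   Total Energy    Temperature Pressure (bar)
   -4.37728e+05    1.29531e+03   -4.36432e+05    4.74422e+00   -7.64626e+02
   Constr. rmsd
    1.84421e-05
"""
e = parse_energies(excerpt)
print(e["Potential"], e["Temperature"])  # -> -437728.0 4.74422
```

Splitting names on whitespace would not work here: "Temperature" and "Pressure (bar)" are separated by a single space, so the fixed-width slicing is what keeps multi-word field names intact.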