gmx-nodes (pasted by mahmoodn, Oct 17th, 2016)
Log file opened on Mon Oct 17 11:18:33 2016
Host: compute-0-2.local pid: 23648 rank ID: 0 number of ranks: 2
:-) GROMACS - mdrun_mpi, VERSION 5.1 (-:

GROMACS is written by:
Emile Apol Rossen Apostolov Herman J.C. Berendsen Par Bjelkmar
Aldert van Buuren Rudi van Drunen Anton Feenstra Sebastian Fritsch
Gerrit Groenhof Christoph Junghans Anca Hamuraru Vincent Hindriksen
Dimitrios Karkoulis Peter Kasson Jiri Kraus Carsten Kutzner
Per Larsson Justin A. Lemkul Magnus Lundborg Pieter Meulenhoff
Erik Marklund Teemu Murtola Szilard Pall Sander Pronk
Roland Schulz Alexey Shvetsov Michael Shirts Alfons Sijbers
Peter Tieleman Teemu Virolainen Christian Wennberg Maarten Wolf
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS: mdrun_mpi, VERSION 5.1
Executable: /share/apps/chemistry/gromacs-5.1/bin/mdrun_mpi
Data prefix: /share/apps/chemistry/gromacs-5.1
Command line:
mdrun_mpi -v

GROMACS version: VERSION 5.1
Precision: single
Memory model: 64 bit
MPI library: MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 32)
GPU support: disabled
OpenCL support: disabled
invsqrt routine: gmx_software_invsqrt(x)
SIMD instructions: AVX_128_FMA
FFT library: fftw-3.3.4-sse2-avx
RDTSCP usage: enabled
C++11 compilation: disabled
TNG support: enabled
Tracing support: disabled
Built on: Mon Oct 10 18:52:35 IRST 2016
Built by: mahmood@cluster.abd.edu [CMAKE]
Build OS/arch: Linux 2.6.32-279.14.1.el6.x86_64 x86_64
Build CPU vendor: AuthenticAMD
Build CPU brand: AMD Opteron(tm) Processor 6380
Build CPU family: 21 Model: 2 Stepping: 0
Build CPU features: aes apic avx clfsh cmov cx8 cx16 f16c fma fma4 htt lahf_lm misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2 sse3 sse4a sse4.1 sse4.2 ssse3 xop
C compiler: /share/apps/computer/openmpi-2.0.1/bin/mpicc GNU 4.4.7
C compiler flags: -mavx -mfma4 -mxop -Wundef -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith -Wall -Wno-unused -Wunused-value -Wunused-parameter -O3 -DNDEBUG -funroll-all-loops -Wno-array-bounds
C++ compiler: /share/apps/computer/openmpi-2.0.1/bin/mpic++ GNU 4.4.7
C++ compiler flags: -mavx -mfma4 -mxop -Wundef -Wextra -Wno-missing-field-initializers -Wpointer-arith -Wall -Wno-unused-function -O3 -DNDEBUG -funroll-all-loops -Wno-array-bounds
Boost version: 1.55.0 (internal)

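The build summary above was produced with a CMake build ("Built by: ... [CMAKE]"). Below is a minimal sketch of a configure step consistent with that summary (MPI on, GPU off, single precision, AVX_128_FMA SIMD), using the compiler wrappers and install prefix that appear in the log; the build directory, FFTW selection, and -j value are illustrative assumptions, not taken from this log.

# Sketch only: a CMake configuration consistent with the build summary above.
# Build directory, FFTW choice and -j value are assumptions.
mkdir -p gromacs-5.1/build && cd gromacs-5.1/build
cmake .. \
    -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc \
    -DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++ \
    -DGMX_MPI=ON -DGMX_GPU=OFF -DGMX_DOUBLE=OFF \
    -DGMX_SIMD=AVX_128_FMA -DGMX_FFT_LIBRARY=fftw3 \
    -DCMAKE_INSTALL_PREFIX=/share/apps/chemistry/gromacs-5.1
make -j 8 && make install
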
Number of logical cores detected (32) does not match the number reported by OpenMP (16).
Consider setting the launch configuration manually!

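The warning above asks for a manual launch configuration because the detected core count (32) disagrees with the OpenMP report (16). A minimal sketch, assuming the 2 MPI ranks reported in the header with 16 OpenMP threads each on this 32-core node; the thread counts are illustrative, not prescribed by the log.

# Illustrative launch with an explicit OpenMP thread count per MPI rank
# (2 ranks x 16 threads = the 32 cores reported below).
export OMP_NUM_THREADS=16
mpirun -np 2 mdrun_mpi -v -ntomp 16
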
Running on 1 node with total 32 cores, 32 logical cores
Hardware detected on host compute-0-2.local (the node of MPI rank 0):
CPU info:
Vendor: AuthenticAMD
Brand: AMD Opteron(tm) Processor 6282 SE
Family: 21 model: 1 stepping: 2
CPU features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2 sse3 sse4a sse4.1 sse4.2 ssse3 xop
SIMD instructions most likely to fit this hardware: AVX_128_FMA
SIMD instructions selected at GROMACS compile time: AVX_128_FMA


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with
GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.
Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular
simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------

Input Parameters:
integrator = md
tinit = 0
dt = 0.001
nsteps = 50000000
init-step = 0
simulation-part = 1
comm-mode = Linear
nstcomm = 100
bd-fric = 0
ld-seed = 1993
emtol = 10
emstep = 0.01
niter = 20
fcstep = 0
nstcgsteep = 1000
nbfgscorr = 10
rtpi = 0.05
nstxout = 20000
nstvout = 20000
nstfout = 0
nstlog = 20000
nstcalcenergy = 100
nstenergy = 20000
nstxout-compressed = 0
compressed-x-precision = 1000
cutoff-scheme = Group
nstlist = 10
ns-type = Grid
pbc = xyz
periodic-molecules = FALSE
verlet-buffer-tolerance = 0.005
rlist = 1.2
rlistlong = 1.4
nstcalclr = 10
coulombtype = PME
coulomb-modifier = None
rcoulomb-switch = 0
rcoulomb = 1.2
epsilon-r = 1
epsilon-rf = inf
vdw-type = Cut-off
vdw-modifier = None
rvdw-switch = 0
rvdw = 1.4
DispCorr = No
table-extension = 1
fourierspacing = 0.12
fourier-nx = 64
fourier-ny = 80
fourier-nz = 64
pme-order = 4
ewald-rtol = 1e-05
ewald-rtol-lj = 1e-05
lj-pme-comb-rule = Geometric
ewald-geometry = 0
epsilon-surface = 0
implicit-solvent = No
gb-algorithm = Still
nstgbradii = 1
rgbradii = 1
gb-epsilon-solvent = 80
gb-saltconc = 0
gb-obc-alpha = 1
gb-obc-beta = 0.8
gb-obc-gamma = 4.85
gb-dielectric-offset = 0.009
sa-algorithm = Ace-approximation
sa-surface-tension = 2.05016
tcoupl = Berendsen
nsttcouple = 10
nh-chain-length = 0
print-nose-hoover-chain-variables = FALSE
pcoupl = Berendsen
pcoupltype = Isotropic
nstpcouple = 10
tau-p = 0.5
compressibility (3x3):
compressibility[ 0]={ 4.50000e-05, 0.00000e+00, 0.00000e+00}
compressibility[ 1]={ 0.00000e+00, 4.50000e-05, 0.00000e+00}
compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 4.50000e-05}
ref-p (3x3):
ref-p[ 0]={ 1.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 1]={ 0.00000e+00, 1.00000e+00, 0.00000e+00}
ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 1.00000e+00}
refcoord-scaling = No
posres-com (3):
posres-com[0]= 0.00000e+00
posres-com[1]= 0.00000e+00
posres-com[2]= 0.00000e+00
posres-comB (3):
posres-comB[0]= 0.00000e+00
posres-comB[1]= 0.00000e+00
posres-comB[2]= 0.00000e+00
QMMM = FALSE
QMconstraints = 0
QMMMscheme = 0
MMChargeScaleFactor = 1
qm-opts:
ngQM = 0
constraint-algorithm = Lincs
continuation = FALSE
Shake-SOR = FALSE
shake-tol = 0.0001
lincs-order = 4
lincs-iter = 1
lincs-warnangle = 30
nwall = 0
wall-type = 9-3
wall-r-linpot = -1
wall-atomtype[0] = -1
wall-atomtype[1] = -1
wall-density[0] = 0
wall-density[1] = 0
wall-ewald-zfac = 3
pull = FALSE
rotation = FALSE
interactiveMD = FALSE
disre = No
disre-weighting = Conservative
disre-mixed = FALSE
dr-fc = 1000
dr-tau = 0
nstdisreout = 100
orire-fc = 0
orire-tau = 0
nstorireout = 100
free-energy = no
cos-acceleration = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
simulated-tempering = FALSE
E-x:
n = 0
E-xt:
n = 0
E-y:
n = 0
E-yt:
n = 0
E-z:
n = 0
E-zt:
n = 0
swapcoords = no
adress = FALSE
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
grpopts:
nrdf: 100053
ref-t: 310
tau-t: 0.1
annealing: No
annealing-npoints: 0
acc: 0 0 0
nfreeze: N N N
energygrp-flags[ 0]: 0

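The parameter dump above is mdrun echoing its .mdp input. A minimal .mdp sketch that would reproduce the key settings listed above follows; the file name and the tc-grps group name are assumptions, and only a subset of the logged options is written out (the remainder take their defaults).

; md.mdp (hypothetical name): subset of the settings echoed in the log above
integrator           = md
dt                   = 0.001
nsteps               = 50000000
cutoff-scheme        = Group
nstlist              = 10
rlist                = 1.2
coulombtype          = PME
rcoulomb             = 1.2
vdw-type             = Cut-off
rvdw                 = 1.4
fourierspacing       = 0.12
pme-order            = 4
tcoupl               = Berendsen
tc-grps              = System      ; group name assumed; the log shows one T-coupling group
tau-t                = 0.1
ref-t                = 310
pcoupl               = Berendsen
pcoupltype           = Isotropic
tau-p                = 0.5
ref-p                = 1.0
compressibility      = 4.5e-5
constraint-algorithm = Lincs
nstxout              = 20000
nstvout              = 20000
nstenergy            = 20000
nstlog               = 20000
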

Initializing Domain Decomposition on 2 ranks
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 0.692 nm, LJ-14, atoms 5947 5952
multi-body bonded interactions: 0.616 nm, G96Angle, atoms 5947 5948
Minimum cell size due to bonded interactions: 0.678 nm
Using 0 separate PME ranks, as there are too few total
ranks for efficient splitting
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 2 cells with a minimum initial size of 0.847 nm
The maximum allowed number of cells is: X 9 Y 10 Z 8
Domain decomposition grid 1 x 2 x 1, separate PME ranks 0
PME domain decomposition: 1 x 2 x 1
Domain decomposition rank 0, coordinates 0 0 0

Using 2 MPI processes

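mdrun chose the 1 x 2 x 1 decomposition and 0 separate PME ranks automatically. If the decomposition needs to be steered by hand (for example on more ranks), mdrun accepts the -dd, -npme and -dds options referenced above; the sketch below merely reproduces the automatic choice for this 2-rank run, so the values are illustrative.

# Illustrative only: pin the decomposition that mdrun chose automatically.
mpirun -np 2 mdrun_mpi -v -dd 1 2 1 -npme 0 -dds 0.8
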

NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be
removed in a future release when 'verlet' supports all interaction forms.

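The NOTE above concerns the deprecated group scheme. A sketch of the .mdp change that would move this run to the Verlet scheme follows; it is a starting point rather than a drop-in equivalent, since the Verlet scheme buffers the pair list and (by assumption here) wants matching electrostatic and van der Waals cut-offs, whereas this run used rcoulomb = 1.2 and rvdw = 1.4.

; Sketch: switch to the Verlet scheme (starting point only; the matched
; 1.2 nm cut-offs below are an assumption, not taken from this run)
cutoff-scheme           = Verlet
verlet-buffer-tolerance = 0.005
rcoulomb                = 1.2
rvdw                    = 1.2
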
Table routines are used for coulomb: FALSE
Table routines are used for vdw: FALSE
Will do PME sum in reciprocal space for electrostatic interactions.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------