- Log file opened on Mon Oct 17 11:18:33 2016
- Host: compute-0-2.local pid: 23648 rank ID: 0 number of ranks: 2
- :-) GROMACS - mdrun_mpi, VERSION 5.1 (-:
- GROMACS is written by:
- Emile Apol Rossen Apostolov Herman J.C. Berendsen Pär Bjelkmar
- Aldert van Buuren Rudi van Drunen Anton Feenstra Sebastian Fritsch
- Gerrit Groenhof Christoph Junghans Anca Hamuraru Vincent Hindriksen
- Dimitrios Karkoulis Peter Kasson Jiri Kraus Carsten Kutzner
- Per Larsson Justin A. Lemkul Magnus Lundborg Pieter Meulenhoff
- Erik Marklund Teemu Murtola Szilárd Páll Sander Pronk
- Roland Schulz Alexey Shvetsov Michael Shirts Alfons Sijbers
- Peter Tieleman Teemu Virolainen Christian Wennberg Maarten Wolf
- and the project leaders:
- Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
- Copyright (c) 1991-2000, University of Groningen, The Netherlands.
- Copyright (c) 2001-2015, The GROMACS development team at
- Uppsala University, Stockholm University and
- the Royal Institute of Technology, Sweden.
- check out http://www.gromacs.org for more information.
- GROMACS is free software; you can redistribute it and/or modify it
- under the terms of the GNU Lesser General Public License
- as published by the Free Software Foundation; either version 2.1
- of the License, or (at your option) any later version.
- GROMACS: mdrun_mpi, VERSION 5.1
- Executable: /share/apps/chemistry/gromacs-5.1/bin/mdrun_mpi
- Data prefix: /share/apps/chemistry/gromacs-5.1
- Command line:
- mdrun_mpi -v
- GROMACS version: VERSION 5.1
- Precision: single
- Memory model: 64 bit
- MPI library: MPI
- OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 32)
- GPU support: disabled
- OpenCL support: disabled
- invsqrt routine: gmx_software_invsqrt(x)
- SIMD instructions: AVX_128_FMA
- FFT library: fftw-3.3.4-sse2-avx
- RDTSCP usage: enabled
- C++11 compilation: disabled
- TNG support: enabled
- Tracing support: disabled
- Built on: Mon Oct 10 18:52:35 IRST 2016
- Built by: mahmood@cluster.abd.edu [CMAKE]
- Build OS/arch: Linux 2.6.32-279.14.1.el6.x86_64 x86_64
- Build CPU vendor: AuthenticAMD
- Build CPU brand: AMD Opteron(tm) Processor 6380
- Build CPU family: 21 Model: 2 Stepping: 0
- Build CPU features: aes apic avx clfsh cmov cx8 cx16 f16c fma fma4 htt lahf_lm misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2 sse3 sse4a sse4.1 sse4.2 ssse3 xop
- C compiler: /share/apps/computer/openmpi-2.0.1/bin/mpicc GNU 4.4.7
- C compiler flags: -mavx -mfma4 -mxop -Wundef -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith -Wall -Wno-unused -Wunused-value -Wunused-parameter -O3 -DNDEBUG -funroll-all-loops -Wno-array-bounds
- C++ compiler: /share/apps/computer/openmpi-2.0.1/bin/mpic++ GNU 4.4.7
- C++ compiler flags: -mavx -mfma4 -mxop -Wundef -Wextra -Wno-missing-field-initializers -Wpointer-arith -Wall -Wno-unused-function -O3 -DNDEBUG -funroll-all-loops -Wno-array-bounds
- Boost version: 1.55.0 (internal)
- Number of logical cores detected (32) does not match the number reported by OpenMP (16).
- Consider setting the launch configuration manually!
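The warning above means OpenMP sees only 16 of the 32 detected logical cores, so mdrun's automatic thread count may be wrong. A minimal sketch of the launch arithmetic, assuming the goal is to split all 32 cores evenly across the 2 MPI ranks (the `mpirun`/`-ntomp` invocation in the comment is illustrative, not taken from this log):

```python
# Compute OpenMP threads per MPI rank so that ranks * threads = cores.
def threads_per_rank(logical_cores: int, mpi_ranks: int) -> int:
    if logical_cores % mpi_ranks != 0:
        raise ValueError("cores not evenly divisible across ranks")
    return logical_cores // mpi_ranks

# For this run: 32 logical cores, 2 MPI ranks -> 16 threads each.
# Hypothetical explicit launch: mpirun -np 2 mdrun_mpi -ntomp 16 -v
print(threads_per_rank(32, 2))
```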
- Running on 1 node with total 32 cores, 32 logical cores
- Hardware detected on host compute-0-2.local (the node of MPI rank 0):
- CPU info:
- Vendor: AuthenticAMD
- Brand: AMD Opteron(tm) Processor 6282 SE
- Family: 21 model: 1 stepping: 2
- CPU features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2 sse3 sse4a sse4.1 sse4.2 ssse3 xop
- SIMD instructions most likely to fit this hardware: AVX_128_FMA
- SIMD instructions selected at GROMACS compile time: AVX_128_FMA
- ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
- S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
- Tackling Exascale Software Challenges in Molecular Dynamics Simulations with
- GROMACS
- In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
- -------- -------- --- Thank You --- -------- --------
- ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
- S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.
- Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
- GROMACS 4.5: a high-throughput and highly parallel open source molecular
- simulation toolkit
- Bioinformatics 29 (2013) pp. 845-54
- -------- -------- --- Thank You --- -------- --------
- ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
- B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
- GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
- molecular simulation
- J. Chem. Theory Comput. 4 (2008) pp. 435-447
- -------- -------- --- Thank You --- -------- --------
- ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
- D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
- Berendsen
- GROMACS: Fast, Flexible and Free
- J. Comp. Chem. 26 (2005) pp. 1701-1719
- -------- -------- --- Thank You --- -------- --------
- ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
- E. Lindahl and B. Hess and D. van der Spoel
- GROMACS 3.0: A package for molecular simulation and trajectory analysis
- J. Mol. Mod. 7 (2001) pp. 306-317
- -------- -------- --- Thank You --- -------- --------
- ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
- H. J. C. Berendsen, D. van der Spoel and R. van Drunen
- GROMACS: A message-passing parallel molecular dynamics implementation
- Comp. Phys. Comm. 91 (1995) pp. 43-56
- -------- -------- --- Thank You --- -------- --------
- Input Parameters:
- integrator = md
- tinit = 0
- dt = 0.001
- nsteps = 50000000
- init-step = 0
- simulation-part = 1
- comm-mode = Linear
- nstcomm = 100
- bd-fric = 0
- ld-seed = 1993
- emtol = 10
- emstep = 0.01
- niter = 20
- fcstep = 0
- nstcgsteep = 1000
- nbfgscorr = 10
- rtpi = 0.05
- nstxout = 20000
- nstvout = 20000
- nstfout = 0
- nstlog = 20000
- nstcalcenergy = 100
- nstenergy = 20000
- nstxout-compressed = 0
- compressed-x-precision = 1000
- cutoff-scheme = Group
- nstlist = 10
- ns-type = Grid
- pbc = xyz
- periodic-molecules = FALSE
- verlet-buffer-tolerance = 0.005
- rlist = 1.2
- rlistlong = 1.4
- nstcalclr = 10
- coulombtype = PME
- coulomb-modifier = None
- rcoulomb-switch = 0
- rcoulomb = 1.2
- epsilon-r = 1
- epsilon-rf = inf
- vdw-type = Cut-off
- vdw-modifier = None
- rvdw-switch = 0
- rvdw = 1.4
- DispCorr = No
- table-extension = 1
- fourierspacing = 0.12
- fourier-nx = 64
- fourier-ny = 80
- fourier-nz = 64
- pme-order = 4
- ewald-rtol = 1e-05
- ewald-rtol-lj = 1e-05
- lj-pme-comb-rule = Geometric
- ewald-geometry = 0
- epsilon-surface = 0
- implicit-solvent = No
- gb-algorithm = Still
- nstgbradii = 1
- rgbradii = 1
- gb-epsilon-solvent = 80
- gb-saltconc = 0
- gb-obc-alpha = 1
- gb-obc-beta = 0.8
- gb-obc-gamma = 4.85
- gb-dielectric-offset = 0.009
- sa-algorithm = Ace-approximation
- sa-surface-tension = 2.05016
- tcoupl = Berendsen
- nsttcouple = 10
- nh-chain-length = 0
- print-nose-hoover-chain-variables = FALSE
- pcoupl = Berendsen
- pcoupltype = Isotropic
- nstpcouple = 10
- tau-p = 0.5
- compressibility (3x3):
- compressibility[ 0]={ 4.50000e-05, 0.00000e+00, 0.00000e+00}
- compressibility[ 1]={ 0.00000e+00, 4.50000e-05, 0.00000e+00}
- compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 4.50000e-05}
- ref-p (3x3):
- ref-p[ 0]={ 1.00000e+00, 0.00000e+00, 0.00000e+00}
- ref-p[ 1]={ 0.00000e+00, 1.00000e+00, 0.00000e+00}
- ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 1.00000e+00}
- refcoord-scaling = No
- posres-com (3):
- posres-com[0]= 0.00000e+00
- posres-com[1]= 0.00000e+00
- posres-com[2]= 0.00000e+00
- posres-comB (3):
- posres-comB[0]= 0.00000e+00
- posres-comB[1]= 0.00000e+00
- posres-comB[2]= 0.00000e+00
- QMMM = FALSE
- QMconstraints = 0
- QMMMscheme = 0
- MMChargeScaleFactor = 1
- qm-opts:
- ngQM = 0
- constraint-algorithm = Lincs
- continuation = FALSE
- Shake-SOR = FALSE
- shake-tol = 0.0001
- lincs-order = 4
- lincs-iter = 1
- lincs-warnangle = 30
- nwall = 0
- wall-type = 9-3
- wall-r-linpot = -1
- wall-atomtype[0] = -1
- wall-atomtype[1] = -1
- wall-density[0] = 0
- wall-density[1] = 0
- wall-ewald-zfac = 3
- pull = FALSE
- rotation = FALSE
- interactiveMD = FALSE
- disre = No
- disre-weighting = Conservative
- disre-mixed = FALSE
- dr-fc = 1000
- dr-tau = 0
- nstdisreout = 100
- orire-fc = 0
- orire-tau = 0
- nstorireout = 100
- free-energy = no
- cos-acceleration = 0
- deform (3x3):
- deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
- deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
- deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
- simulated-tempering = FALSE
- E-x:
- n = 0
- E-xt:
- n = 0
- E-y:
- n = 0
- E-yt:
- n = 0
- E-z:
- n = 0
- E-zt:
- n = 0
- swapcoords = no
- adress = FALSE
- userint1 = 0
- userint2 = 0
- userint3 = 0
- userint4 = 0
- userreal1 = 0
- userreal2 = 0
- userreal3 = 0
- userreal4 = 0
- grpopts:
- nrdf: 100053
- ref-t: 310
- tau-t: 0.1
- annealing: No
- annealing-npoints: 0
- acc: 0 0 0
- nfreeze: N N N
- energygrp-flags[ 0]: 0
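Taken together, the parameters above fix the run length and output cadence. A small sketch of that bookkeeping, using only `dt`, `nsteps`, and `nstxout` from this log (units assumed to be the GROMACS defaults, picoseconds):

```python
# Run length and trajectory-output interval implied by the .mdp parameters.
dt_ps = 0.001        # dt: 0.001 ps per step
nsteps = 50_000_000  # nsteps
nstxout = 20_000     # steps between written coordinate frames

total_ns = nsteps * dt_ps / 1000.0    # total simulated time in ns (50 ns)
frame_interval_ps = nstxout * dt_ps   # time between trajectory frames
n_frames = nsteps // nstxout          # frames written (excluding step 0)

print(total_ns, frame_interval_ps, n_frames)
```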
- Initializing Domain Decomposition on 2 ranks
- Dynamic load balancing: auto
- Will sort the charge groups at every domain (re)decomposition
- Initial maximum inter charge-group distances:
- two-body bonded interactions: 0.692 nm, LJ-14, atoms 5947 5952
- multi-body bonded interactions: 0.616 nm, G96Angle, atoms 5947 5948
- Minimum cell size due to bonded interactions: 0.678 nm
- Using 0 separate PME ranks, as there are too few total
- ranks for efficient splitting
- Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
- Optimizing the DD grid for 2 cells with a minimum initial size of 0.847 nm
- The maximum allowed number of cells is: X 9 Y 10 Z 8
- Domain decomposition grid 1 x 2 x 1, separate PME ranks 0
- PME domain decomposition: 1 x 2 x 1
- Domain decomposition rank 0, coordinates 0 0 0
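The cell-size numbers above follow from a simple scaling: the 0.678 nm bonded minimum is divided by the `-dds` fraction 0.8 (i.e. multiplied by 1.25) to give the 0.847 nm initial minimum used for the grid search. A sketch of that arithmetic (function and variable names are illustrative, not GROMACS API):

```python
# Initial minimum DD cell size = bonded minimum scaled by 1/dds.
def initial_min_cell(bonded_min_nm: float, dds: float = 0.8) -> float:
    return bonded_min_nm / dds

# 0.678 nm / 0.8 = 0.8475 nm, reported as 0.847 nm in the log.
scaled = initial_min_cell(0.678)
print(scaled)
```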
- Using 2 MPI processes
- NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be
- removed in a future release when 'verlet' supports all interaction forms.
- Table routines are used for coulomb: FALSE
- Table routines are used for vdw: FALSE
- Will do PME sum in reciprocal space for electrostatic interactions.
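The PME grid reported earlier (fourier-nx/ny/nz = 64 x 80 x 64) is not arbitrary: FFT libraries such as FFTW are fastest when each dimension factors into small primes, so the grid implied by `fourierspacing` is rounded up to such a size. A sketch of that check, assuming 2, 3, 5, and 7 count as "small" (the exact factor set GROMACS permits may differ):

```python
# True if n factors entirely into the small primes FFTs handle efficiently.
def is_fft_friendly(n: int, primes=(2, 3, 5, 7)) -> bool:
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

# Both grid dimensions from this log pass: 64 = 2**6, 80 = 2**4 * 5.
print(is_fft_friendly(64), is_fft_friendly(80))
```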
- ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
- U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
- A smooth particle mesh Ewald method
- J. Chem. Phys. 103 (1995) pp. 8577-8592
- -------- -------- --- Thank You --- -------- --------