:-) G R O M A C S (-:
Georgetown Riga Oslo Madrid Amsterdam Chisinau Stockholm
:-) VERSION 4.6.2 (-:

Contributions from Mark Abraham, Emile Apol, Rossen Apostolov,
Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar,
Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans,
Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff,
Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
Michael Shirts, Alfons Sijbers, Peter Tieleman,
Berk Hess, David van der Spoel, and Erik Lindahl.

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2012,2013, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

This program is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

:-) /scratch/lustre/visvaldas/gromacs462/bin/grompp_mpi_d (double precision) (-:
Option  Filename  Type  Description
------------------------------------------------------------
  -f  /scratch/lustre/home/vyra6599/emamber.mdp  Input  grompp input file with MD parameters
 -po  mdout.mdp  Output  grompp input file with MD parameters
  -c  1oyn-1oyn6wat1994-15a_vconf-docking_1_fix_solvated_bonded_GMX.gro  Input  Structure file: gro g96 pdb tpr etc.
  -r  conf.gro  Input, Opt.  Structure file: gro g96 pdb tpr etc.
 -rb  conf.gro  Input, Opt.  Structure file: gro g96 pdb tpr etc.
  -n  index.ndx  Input, Opt.  Index file
  -p  1oyn-1oyn6wat1994-15a_vconf-docking_1_fix_solvated_bonded_GMX.top  Input  Topology file
 -pp  processed.top  Output, Opt.  Topology file
  -o  em.tpr  Output  Run input file: tpr tpb tpa
  -t  traj.trr  Input, Opt.  Full precision trajectory: trr trj cpt
  -e  ener.edr  Input, Opt.  Energy file
-ref  rotref.trr  In/Out, Opt.  Full precision trajectory: trr trj cpt
Option  Type  Value  Description
------------------------------------------------------
-[no]h  bool  no  Print help info and quit
-[no]version  bool  no  Print version info and quit
-nice  int  0  Set the nicelevel
-[no]v  bool  no  Be loud and noisy
-time  real  -1  Take frame at or first after this time.
-[no]rmvsbds  bool  yes  Remove constant bonded interactions with virtual sites
-maxwarn  int  2  Number of allowed warnings during input processing. Not for normal use and may generate unstable systems
-[no]zero  bool  no  Set parameters for bonded interactions without defaults to zero instead of generating an error
-[no]renum  bool  yes  Renumber atomtypes and minimize number of atomtypes
Back Off! I just backed up mdout.mdp to ./#mdout.mdp.3#

NOTE 1 [file /scratch/lustre/home/vyra6599/emamber.mdp]:
  nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
  nstcomm to nstcalcenergy
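This note can be avoided at the source by making the two intervals consistent in the .mdp file. A minimal sketch with hypothetical values; the actual settings inside emamber.mdp are not shown in this log:

```
; Hypothetical .mdp fragment: keep nstcomm at least as large as
; nstcalcenergy so grompp does not have to override it.
nstcalcenergy = 100
nstcomm       = 100
```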
Generated 666 of the 666 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 666 of the 666 1-4 parameter combinations
Excluding 3 bonded neighbours molecule type '1oyn-1oyn6wat1994-15a_vconf-docking_1_fix_solvated_bonded'
Excluding 1 bonded neighbours molecule type 'NA+'
Excluding 2 bonded neighbours molecule type 'WAT'

NOTE 2 [file 1oyn-1oyn6wat1994-15a_vconf-docking_1_fix_solvated_bonded_GMX.top, line 50698]:
  System has non-zero total charge: -0.003802
  Total charge should normally be an integer. See
  http://www.gromacs.org/Documentation/Floating_Point_Arithmetic
  for discussion on how close it should be to an integer.
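A small net charge like -0.003802 typically comes from rounded per-atom charges in the AMBER-converted topology. One way to inspect it is to sum the charge column of the [ atoms ] sections directly; the sketch below assumes the standard column order (charge in field 7) and deliberately ignores [ molecules ] multiplicities, so it gives the per-molecule-type charge, not the system total:

```python
def total_charge(top_lines):
    """Sum the charge column (7th field) over every [ atoms ] section."""
    total = 0.0
    in_atoms = False
    for line in top_lines:
        stripped = line.split(';')[0].strip()   # drop trailing comments
        if stripped.startswith('['):
            # Track whether we are inside an [ atoms ] section.
            in_atoms = stripped.replace(' ', '') == '[atoms]'
            continue
        if in_atoms and stripped:
            fields = stripped.split()
            if len(fields) >= 7:
                total += float(fields[6])
    return total
```

The real total reported by grompp also multiplies each molecule type by its count in the [ molecules ] section, which this sketch leaves out.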
Number of degrees of freedom in T-Coupling group rest is 197592.00
Estimate for the relative computational load of the PME mesh part: 0.30

There were 2 notes

Back Off! I just backed up em.tpr to ./#em.tpr.3#

gcq#289: "The Candlelight Was Just Right" (Beastie Boys)
Analysing residue names:
There are: 334 Protein residues
There are: 3 Other residues
There are: 17 Ion residues
There are: 20144 Water residues
Analysing Protein...
Analysing residues not classified as Protein/DNA/RNA/Water and splitting into groups...
Analysing residues not classified as Protein/DNA/RNA/Water and splitting into groups...
Largest charge group radii for Van der Waals: 0.040, 0.040 nm
Largest charge group radii for Coulomb: 0.081, 0.081 nm
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 80x96x80, spacing 0.114 0.103 0.110
This run will generate roughly 5 Mb of data
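The spacing grompp reports is simply box length divided by the number of Fourier grid points along each axis. A sketch, using box dimensions of roughly 9.12 x 9.888 x 8.80 nm inferred back from the spacings above (the box vectors themselves are not printed in this log):

```python
def pme_spacing(box_nm, grid_points):
    """Per-axis PME grid spacing: box length (nm) / Fourier grid points."""
    return [b / n for b, n in zip(box_nm, grid_points)]

# An 80x96x80 grid on the inferred box reproduces the reported spacings.
spacing = pme_spacing([9.12, 9.888, 8.80], [80, 96, 80])
```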
:-) G R O M A C S (-:
Gnomes, ROck Monsters And Chili Sauce
:-) VERSION 4.6.2 (-:

:-) /scratch/lustre/visvaldas/gromacs462/bin/mdrun_mpi_d (double precision) (-:
Option  Filename  Type  Description
------------------------------------------------------------
      -s  em.tpr  Input  Run input file: tpr tpb tpa
      -o  em.trr  Output  Full precision trajectory: trr trj cpt
      -x  em.xtc  Output, Opt.  Compressed trajectory (portable xdr format)
    -cpi  em.cpt  Input, Opt.  Checkpoint file
    -cpo  em.cpt  Output, Opt.  Checkpoint file
      -c  em.gro  Output  Structure file: gro g96 pdb etc.
      -e  em.edr  Output  Energy file
      -g  em.log  Output  Log file
   -dhdl  em.xvg  Output, Opt.  xvgr/xmgr file
  -field  em.xvg  Output, Opt.  xvgr/xmgr file
  -table  em.xvg  Input, Opt.  xvgr/xmgr file
-tabletf  em.xvg  Input, Opt.  xvgr/xmgr file
 -tablep  em.xvg  Input, Opt.  xvgr/xmgr file
 -tableb  em.xvg  Input, Opt.  xvgr/xmgr file
  -rerun  em.trr  Input, Opt.  Trajectory: xtc trr trj gro g96 pdb cpt
    -tpi  em.xvg  Output, Opt.  xvgr/xmgr file
   -tpid  em.xvg  Output, Opt.  xvgr/xmgr file
     -ei  em.edi  Input, Opt.  ED sampling input
     -eo  em.xvg  Output, Opt.  xvgr/xmgr file
      -j  em.gct  Input, Opt.  General coupling stuff
     -jo  em.gct  Output, Opt.  General coupling stuff
  -ffout  em.xvg  Output, Opt.  xvgr/xmgr file
 -devout  em.xvg  Output, Opt.  xvgr/xmgr file
  -runav  em.xvg  Output, Opt.  xvgr/xmgr file
     -px  em.xvg  Output, Opt.  xvgr/xmgr file
     -pf  em.xvg  Output, Opt.  xvgr/xmgr file
     -ro  em.xvg  Output, Opt.  xvgr/xmgr file
     -ra  em.log  Output, Opt.  Log file
     -rs  em.log  Output, Opt.  Log file
     -rt  em.log  Output, Opt.  Log file
    -mtx  em.mtx  Output, Opt.  Hessian matrix
     -dn  em.ndx  Output, Opt.  Index file
-multidir  em  Input, Opt., Mult.  Run directory
 -plumed  em.dat  Input, Opt.  Generic data file
 -membed  em.dat  Input, Opt.  Generic data file
     -mp  em.top  Input, Opt.  Topology file
     -mn  em.ndx  Input, Opt.  Index file
Option  Type  Value  Description
------------------------------------------------------
-[no]h  bool  no  Print help info and quit
-[no]version  bool  no  Print version info and quit
-nice  int  0  Set the nicelevel
-deffnm  string  em  Set the default filename for all file options
-xvg  enum  xmgrace  xvg plot formatting: xmgrace, xmgr or none
-[no]pd  bool  no  Use particle decomposition
-dd  vector  0 0 0  Domain decomposition grid, 0 is optimize
-ddorder  enum  interleave  DD node order: interleave, pp_pme or cartesian
-npme  int  -1  Number of separate nodes to be used for PME, -1 is guess
-nt  int  0  Total number of threads to start (0 is guess)
-ntmpi  int  0  Number of thread-MPI threads to start (0 is guess)
-ntomp  int  0  Number of OpenMP threads per MPI process/thread to start (0 is guess)
-ntomp_pme  int  0  Number of OpenMP threads per MPI process/thread to start (0 is -ntomp)
-pin  enum  auto  Fix threads (or processes) to specific cores: auto, on or off
-pinoffset  int  0  The starting logical core number for pinning to cores; used to avoid pinning threads from different mdrun instances to the same core
-pinstride  int  0  Pinning distance in logical cores for threads, use 0 to minimize the number of threads per physical core
-gpu_id  string  List of GPU id's to use
-[no]ddcheck  bool  yes  Check for all bonded interactions with DD
-rdd  real  0  The maximum distance for bonded interactions with DD (nm), 0 is determine from initial coordinates
-rcon  real  0  Maximum distance for P-LINCS (nm), 0 is estimate
-dlb  enum  auto  Dynamic load balancing (with DD): auto, no or yes
-dds  real  0.8  Minimum allowed dlb scaling of the DD cell size
-gcom  int  -1  Global communication frequency
-nb  enum  auto  Calculate non-bonded interactions on: auto, cpu, gpu or gpu_cpu
-[no]tunepme  bool  yes  Optimize PME load between PP/PME nodes or GPU/CPU
-[no]testverlet  bool  no  Test the Verlet non-bonded scheme
-[no]v  bool  no  Be loud and noisy
-[no]compact  bool  yes  Write a compact log file
-[no]seppot  bool  no  Write separate V and dVdl terms for each interaction type and node to the log file(s)
-pforce  real  -1  Print all forces larger than this (kJ/mol nm)
-[no]reprod  bool  no  Try to avoid optimizations that affect binary reproducibility
-cpt  real  15  Checkpoint interval (minutes)
-[no]cpnum  bool  no  Keep and number checkpoint files
-[no]append  bool  yes  Append to previous output files when continuing from checkpoint instead of adding the simulation part number to all file names
-nsteps  step  -2  Run this number of steps, overrides .mdp file option
-maxh  real  -1  Terminate after 0.99 times this time (hours)
-multi  int  0  Do multiple simulations in parallel
-replex  int  0  Attempt replica exchange periodically with this period (steps)
-nex  int  0  Number of random exchanges to carry out each exchange interval (N^3 is one suggestion). -nex zero or not specified gives neighbor replica exchange.
-reseed  int  -1  Seed for replica exchange, -1 is generate a seed
-[no]ionize  bool  no  Do a simulation including the effect of an X-Ray bombardment on your system
Back Off! I just backed up em.log to ./#em.log.3#
Reading file em.tpr, VERSION 4.6.2 (double precision)
Using 12 MPI processes

Can not set thread affinities on the current platform. On NUMA systems this
can cause performance degradation. If you think your platform should support
setting affinities, contact the GROMACS developers.
Back Off! I just backed up em.trr to ./#em.trr.3#
Back Off! I just backed up em.edr to ./#em.edr.3#

Polak-Ribiere Conjugate Gradients:
   Tolerance (Fmax) = 1.00000e+03
   Number of steps  = 1000
   F-max            = 8.94698e+03 on atom 3664
   F-Norm           = 9.28656e+02

Step -1:
The charge group starting at atom 5420 moved more than the distance allowed by the domain decomposition (0.820340) in direction Y
distance out of cell 1.136278
Old coordinates: 6.426 5.195 6.096
New coordinates: 7.618 6.879 5.759
Old cell boundaries in direction Y: 4.922 5.742
New cell boundaries in direction Y: 4.922 5.742
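The diagnostic numbers are self-consistent: the group's new Y coordinate lies past the upper cell boundary by more than the allowed 0.820340 nm. A sketch using the printed values (rounded to 3 decimals, so it only approximates the internally computed 1.136278):

```python
# Printed values from the diagnostic above (rounded to 3 decimals).
new_y = 6.879          # new Y coordinate of the charge group
cell_upper_y = 5.742   # upper cell boundary in direction Y
allowed = 0.820340     # maximum distance allowed by domain decomposition

out_of_cell = new_y - cell_upper_y   # ~1.137, vs the reported 1.136278
assert out_of_cell > allowed         # hence the fatal DD error below
```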
-------------------------------------------------------
Program mdrun_mpi_d, VERSION 4.6.2
Source code file: /scratch/lustre/visvaldas/gromacs-4.6.2/src/mdlib/domdec.c, line: 4348

Fatal error:
A charge group moved too far between two domain decomposition steps
This usually means that your system is not well equilibrated

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
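As the error text says, this usually means the starting structure is not yet relaxed enough for domain decomposition: minimization starts from large forces (F-max ~8.9e+03 on atom 5420's neighbourhood) and a charge group jumps across a cell in one step. A common workaround for the very first minimization is to avoid DD altogether; a sketch, assuming the same binaries and file names as above:

```shell
# Run the first minimization on a single MPI rank, so no domain
# decomposition is used (slower, but tolerant of large initial moves).
mpirun -np 1 mdrun_mpi_d -deffnm em

# Alternative: keep 12 ranks but switch to particle decomposition
# (-pd, listed in the options above), which has no cell-crossing limit.
mpirun -np 12 mdrun_mpi_d -pd -deffnm em
```

Once the structure is minimized this way, the DD run can usually be restarted from the resulting em.gro without hitting this error.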
Error on node 6, will try to stop all the nodes
Halting parallel program mdrun_mpi_d on CPU 6 out of 12

gcq#248: "There's Nothing We Can't Fix, 'coz We Can Do It in the Mix" (Indeep)
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 6 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 6 with PID 31464 on
node lxibm038 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------