- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357106] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 23: PullTest
- 23/39 Test #23: PullTest .........................***Failed 0.02 sec
- [node3102.skitty.os:357108] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357108] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 24: AwhTest
- 24/39 Test #24: AwhTest ..........................***Failed 0.02 sec
- [node3102.skitty.os:357110] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357110] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 25: SimdUnitTests
- 25/39 Test #25: SimdUnitTests ....................***Failed 0.02 sec
- [node3102.skitty.os:357112] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357112] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 26: CompatibilityHelpersTests
- 26/39 Test #26: CompatibilityHelpersTests ........***Failed 0.02 sec
- [node3102.skitty.os:357114] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357114] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 27: GmxAnaTest
- 27/39 Test #27: GmxAnaTest .......................***Failed 0.02 sec
- [node3102.skitty.os:357116] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357116] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 28: GmxPreprocessTests
- 28/39 Test #28: GmxPreprocessTests ...............***Failed 0.02 sec
- [node3102.skitty.os:357118] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357118] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 29: Pdb2gmxTest
- 29/39 Test #29: Pdb2gmxTest ......................***Failed 0.02 sec
- [node3102.skitty.os:357120] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357120] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 30: CorrelationsTest
- 30/39 Test #30: CorrelationsTest .................***Failed 0.02 sec
- [node3102.skitty.os:357122] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357122] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 31: AnalysisDataUnitTests
- 31/39 Test #31: AnalysisDataUnitTests ............***Failed 0.02 sec
- [node3102.skitty.os:357124] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357124] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 32: SelectionUnitTests
- 32/39 Test #32: SelectionUnitTests ...............***Failed 0.02 sec
- [node3102.skitty.os:357126] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357126] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 33: TrajectoryAnalysisUnitTests
- 33/39 Test #33: TrajectoryAnalysisUnitTests ......***Failed 0.02 sec
- [node3102.skitty.os:357128] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357128] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 34: EnergyAnalysisUnitTests
- 34/39 Test #34: EnergyAnalysisUnitTests ..........***Failed 0.02 sec
- [node3102.skitty.os:357130] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357130] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 35: ToolUnitTests
- 35/39 Test #35: ToolUnitTests ....................***Failed 0.02 sec
- [node3102.skitty.os:357132] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357132] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 36: MdrunTests
- 36/39 Test #36: MdrunTests .......................***Failed 0.03 sec
- [node3102.skitty.os:357134] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357134] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 37: MdrunNonIntegratorTests
- 37/39 Test #37: MdrunNonIntegratorTests ..........***Failed 0.03 sec
- [node3102.skitty.os:357136] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357136] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 38: LegacyGroupSchemeMdrunTests
- 38/39 Test #38: LegacyGroupSchemeMdrunTests ......***Failed 0.02 sec
- [node3102.skitty.os:357138] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357138] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- Start 39: MdrunMpiTests
- 39/39 Test #39: MdrunMpiTests ....................***Failed 0.03 sec
- [node3102.skitty.os:357140] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
- --------------------------------------------------------------------------
- The application appears to have been direct launched using "srun",
- but OMPI was not built with SLURM's PMI support and therefore cannot
- execute. There are several options for building PMI support under
- SLURM, depending upon the SLURM version you are using:
- version 16.05 or later: you can use SLURM's PMIx support. This
- requires that you configure and build SLURM --with-pmix.
- Versions earlier than 16.05: you must use either SLURM's PMI-1 or
- PMI-2 support. SLURM builds PMI-1 by default, or you can manually
- install PMI-2. You must then build Open MPI using --with-pmi pointing
- to the SLURM PMI library location.
- Please configure as appropriate and try again.
- --------------------------------------------------------------------------
- *** An error occurred in MPI_Init_thread
- *** on a NULL communicator
- *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
- *** and potentially your MPI job)
- [node3102.skitty.os:357140] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
- 8% tests passed, 36 tests failed out of 39
- Label Time Summary:
- GTest = 1.87 sec*proc (39 tests)
- IntegrationTest = 0.13 sec*proc (5 tests)
- MpiTest = 1.03 sec*proc (3 tests)
- SlowTest = 0.02 sec*proc (1 test)
- UnitTest = 1.72 sec*proc (33 tests)
- Total Test time (real) = 1.89 sec
- The following tests FAILED:
- 1 - TestUtilsUnitTests (Failed)
- 3 - MdlibUnitTest (Failed)
- 4 - AppliedForcesUnitTest (Failed)
- 5 - ListedForcesTest (Failed)
- 6 - CommandLineUnitTests (Failed)
- 7 - DomDecTests (Failed)
- 8 - EwaldUnitTests (Failed)
- 9 - FFTUnitTests (Failed)
- 10 - HardwareUnitTests (Failed)
- 11 - MathUnitTests (Failed)
- 12 - MdrunUtilityUnitTests (Failed)
- 14 - OnlineHelpUnitTests (Failed)
- 15 - OptionsUnitTests (Failed)
- 16 - RandomUnitTests (Failed)
- 17 - RestraintTests (Failed)
- 18 - TableUnitTests (Failed)
- 19 - TaskAssignmentUnitTests (Failed)
- 20 - UtilityUnitTests (Failed)
- 22 - FileIOTests (Failed)
- 23 - PullTest (Failed)
- 24 - AwhTest (Failed)
- 25 - SimdUnitTests (Failed)
- 26 - CompatibilityHelpersTests (Failed)
- 27 - GmxAnaTest (Failed)
- 28 - GmxPreprocessTests (Failed)
- 29 - Pdb2gmxTest (Failed)
- 30 - CorrelationsTest (Failed)
- 31 - AnalysisDataUnitTests (Failed)
- 32 - SelectionUnitTests (Failed)
- 33 - TrajectoryAnalysisUnitTests (Failed)
- 34 - EnergyAnalysisUnitTests (Failed)
- 35 - ToolUnitTests (Failed)
- 36 - MdrunTests (Failed)
- 37 - MdrunNonIntegratorTests (Failed)
- 38 - LegacyGroupSchemeMdrunTests (Failed)
- 39 - MdrunMpiTests (Failed)
- Errors while running CTest
- make[3]: *** [CMakeFiles/run-ctest-nophys] Error 8
- make[3]: Leaving directory `/tmp/vsc40023/easybuild_build/GROMACS/2019/foss-2018b/easybuild_obj'
- make[2]: *** [CMakeFiles/run-ctest-nophys.dir/all] Error 2
- make[2]: Leaving directory `/tmp/vsc40023/easybuild_build/GROMACS/2019/foss-2018b/easybuild_obj'
- make[1]: *** [CMakeFiles/check.dir/rule] Error 2
- make[1]: Leaving directory `/tmp/vsc40023/easybuild_build/GROMACS/2019/foss-2018b/easybuild_obj'
- make: *** [check] Error 2
- (at easybuild/tools/run.py:501 in parse_cmd_output)
- == 2019-01-19 20:54:30,817 easyblock.py:2870 WARNING build failed (first 300 chars): cmd "make check -j 36 " exited with exit code 2 and output:
- /phanpy/scratch/gent/vo/000/gvo00002/vsc40023/easybuild_REGTEST/CO7/skylake-ib/software/CMake/3.11.4-GCCcore-7.3.0/bin/cmake -H/tmp/vsc40023/easybuild_build/GROMACS/2019/foss-2018b/gromacs-2019 -B/tmp/vsc40023/easybuild_build/GROMACS/2019/f
- == 2019-01-19 20:54:30,817 easyblock.py:288 INFO Closing log for application name GROMACS version 2019
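Note: every test failure above is the same Open MPI error: the CTest binaries were direct-launched with "srun", but this Open MPI build was configured without SLURM PMI/PMIx support. Below is a minimal sketch of the two remediation routes the error message itself names; the install prefixes, the -j value, and the exact pmix plugin name are assumptions and vary per site.

    # Route 1 (SLURM 16.05 or later): build SLURM itself with PMIx support,
    # then launch MPI jobs through the PMIx plugin.
    #   when configuring SLURM:  ./configure --with-pmix=/path/to/pmix
    #   at job launch:           srun --mpi=pmix ./mpi_program
    #   ("srun --mpi=list" shows which pmix plugin names your SLURM provides)

    # Route 2 (SLURM older than 16.05): rebuild Open MPI against SLURM's
    # PMI-1/PMI-2 library. "--with-pmi=/usr" is an assumed prefix; point it
    # at the directory that actually contains SLURM's pmi.h/libpmi.
    ./configure --with-slurm --with-pmi=/usr
    make -j 16 && make install

In the EasyBuild run shown here, that would amount to rebuilding the OpenMPI module of the foss-2018b toolchain with one of these configurations (or building SLURM with PMIx) before retrying the GROMACS "make check" step.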