srun --mpi=pmix_v1 -N 2 -n 2 mpi_program
[headnode:17246] PMIX ERROR: UNPACK-PAST-END in file unpack.c at line 206
[headnode:17246] PMIX ERROR: UNPACK-PAST-END in file unpack.c at line 147
[headnode:17246] PMIX ERROR: UNPACK-PAST-END in file client/pmix_client.c at line 227
[headnode:17246] OPAL ERROR: Error in file pmix3x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[headnode:17246] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
srun: error: headnode: task 0: Exited with exit code 1
[nodeA:12838] PMIX ERROR: UNPACK-PAST-END in file unpack.c at line 206
[nodeA:12838] PMIX ERROR: UNPACK-PAST-END in file unpack.c at line 147
[nodeA:12838] PMIX ERROR: UNPACK-PAST-END in file client/pmix_client.c at line 227
[nodeA:12838] OPAL ERROR: Error in file pmix3x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[nodeA:12838] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
srun: error: nodeA: task 1: Exited with exit code 1
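
For context: the UNPACK-PAST-END messages usually mean the PMI plugin srun is speaking (here pmix_v1) does not match the PMIx library Open MPI was built against (the pmix3x component name suggests PMIx 3.x), or that Open MPI has no SLURM PMI support at all. A minimal sketch of how one might check and rebuild, assuming hypothetical source trees under /opt/src and install prefixes under /opt (adjust paths and plugin names to the actual cluster layout):

# List the PMI plugins this SLURM installation actually provides
srun --mpi=list

# Check whether Open MPI was built with PMIx / SLURM support
ompi_info | grep -i -E 'pmix|slurm'

# Option A (SLURM 16.05 or later): rebuild SLURM with PMIx support,
# then launch with a plugin version that matches Open MPI's PMIx
cd /opt/src/slurm
./configure --prefix=/opt/slurm --with-pmix=/opt/pmix
make -j && make install
srun --mpi=pmix -N 2 -n 2 mpi_program

# Option B (older SLURM): rebuild Open MPI against SLURM's PMI-1/PMI-2 library
cd /opt/src/openmpi
./configure --prefix=/opt/openmpi --with-pmi=/opt/slurm
make -j && make install
srun --mpi=pmi2 -N 2 -n 2 mpi_program

Whichever route is taken, the key point is that the plugin named in srun --mpi=... and the PMI/PMIx support compiled into Open MPI have to agree; mixing pmix_v1 on the SLURM side with a PMIx 3.x build of Open MPI reproduces exactly this failure.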