Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357106] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 23: PullTest
23/39 Test #23: PullTest .........................***Failed 0.02 sec
[node3102.skitty.os:357108] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357108] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
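
The same advisory repeats for every remaining test below; the underlying problem is a single build-configuration issue in the Open MPI used here. As a rough sketch of the two routes the advisory describes (the configure flags are the ones it names; every prefix, path and program name below is a placeholder, not something taken from this log):

# Option 1 (SLURM 16.05 or later): build SLURM against PMIx, then Open MPI against the same PMIx.
#   In the SLURM source tree (placeholder paths):
./configure --prefix=/opt/slurm --with-pmix=/opt/pmix && make && make install
#   In the Open MPI source tree:
./configure --prefix=/opt/openmpi --with-slurm --with-pmix=/opt/pmix && make && make install

# Option 2 (older SLURM): point Open MPI at SLURM's PMI-1/PMI-2 installation,
# i.e. the directory containing include/slurm/pmi2.h and lib*/libpmi2.so:
./configure --prefix=/opt/openmpi --with-slurm --with-pmi=/usr && make && make install

# With either option in place, direct launch should be able to initialize MPI, e.g.:
srun --mpi=pmix -n 2 ./mpi_program    # Option 1 (use --mpi=pmi2 with Option 2)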

Start 24: AwhTest
24/39 Test #24: AwhTest ..........................***Failed 0.02 sec
[node3102.skitty.os:357110] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357110] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 25: SimdUnitTests
25/39 Test #25: SimdUnitTests ....................***Failed 0.02 sec
[node3102.skitty.os:357112] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357112] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 26: CompatibilityHelpersTests
26/39 Test #26: CompatibilityHelpersTests ........***Failed 0.02 sec
[node3102.skitty.os:357114] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357114] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 27: GmxAnaTest
27/39 Test #27: GmxAnaTest .......................***Failed 0.02 sec
[node3102.skitty.os:357116] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357116] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 28: GmxPreprocessTests
28/39 Test #28: GmxPreprocessTests ...............***Failed 0.02 sec
[node3102.skitty.os:357118] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357118] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 29: Pdb2gmxTest
29/39 Test #29: Pdb2gmxTest ......................***Failed 0.02 sec
[node3102.skitty.os:357120] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357120] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 30: CorrelationsTest
30/39 Test #30: CorrelationsTest .................***Failed 0.02 sec
[node3102.skitty.os:357122] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357122] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 31: AnalysisDataUnitTests
31/39 Test #31: AnalysisDataUnitTests ............***Failed 0.02 sec
[node3102.skitty.os:357124] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357124] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 32: SelectionUnitTests
32/39 Test #32: SelectionUnitTests ...............***Failed 0.02 sec
[node3102.skitty.os:357126] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357126] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 33: TrajectoryAnalysisUnitTests
33/39 Test #33: TrajectoryAnalysisUnitTests ......***Failed 0.02 sec
[node3102.skitty.os:357128] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357128] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 34: EnergyAnalysisUnitTests
34/39 Test #34: EnergyAnalysisUnitTests ..........***Failed 0.02 sec
[node3102.skitty.os:357130] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357130] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 35: ToolUnitTests
35/39 Test #35: ToolUnitTests ....................***Failed 0.02 sec
[node3102.skitty.os:357132] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357132] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 36: MdrunTests
36/39 Test #36: MdrunTests .......................***Failed 0.03 sec
[node3102.skitty.os:357134] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357134] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 37: MdrunNonIntegratorTests
37/39 Test #37: MdrunNonIntegratorTests ..........***Failed 0.03 sec
[node3102.skitty.os:357136] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357136] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 38: LegacyGroupSchemeMdrunTests
38/39 Test #38: LegacyGroupSchemeMdrunTests ......***Failed 0.02 sec
[node3102.skitty.os:357138] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357138] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Start 39: MdrunMpiTests
39/39 Test #39: MdrunMpiTests ....................***Failed 0.03 sec
[node3102.skitty.os:357140] OPAL ERROR: Not initialized in file pmix2x_client.c at line 109
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node3102.skitty.os:357140] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!


8% tests passed, 36 tests failed out of 39

Label Time Summary:
GTest = 1.87 sec*proc (39 tests)
IntegrationTest = 0.13 sec*proc (5 tests)
MpiTest = 1.03 sec*proc (3 tests)
SlowTest = 0.02 sec*proc (1 test)
UnitTest = 1.72 sec*proc (33 tests)

Total Test time (real) = 1.89 sec

The following tests FAILED:
1 - TestUtilsUnitTests (Failed)
3 - MdlibUnitTest (Failed)
4 - AppliedForcesUnitTest (Failed)
5 - ListedForcesTest (Failed)
6 - CommandLineUnitTests (Failed)
7 - DomDecTests (Failed)
8 - EwaldUnitTests (Failed)
9 - FFTUnitTests (Failed)
10 - HardwareUnitTests (Failed)
11 - MathUnitTests (Failed)
12 - MdrunUtilityUnitTests (Failed)
14 - OnlineHelpUnitTests (Failed)
15 - OptionsUnitTests (Failed)
16 - RandomUnitTests (Failed)
17 - RestraintTests (Failed)
18 - TableUnitTests (Failed)
19 - TaskAssignmentUnitTests (Failed)
20 - UtilityUnitTests (Failed)
22 - FileIOTests (Failed)
23 - PullTest (Failed)
24 - AwhTest (Failed)
25 - SimdUnitTests (Failed)
26 - CompatibilityHelpersTests (Failed)
27 - GmxAnaTest (Failed)
28 - GmxPreprocessTests (Failed)
29 - Pdb2gmxTest (Failed)
30 - CorrelationsTest (Failed)
31 - AnalysisDataUnitTests (Failed)
32 - SelectionUnitTests (Failed)
33 - TrajectoryAnalysisUnitTests (Failed)
34 - EnergyAnalysisUnitTests (Failed)
35 - ToolUnitTests (Failed)
36 - MdrunTests (Failed)
37 - MdrunNonIntegratorTests (Failed)
38 - LegacyGroupSchemeMdrunTests (Failed)
39 - MdrunMpiTests (Failed)
Errors while running CTest
make[3]: *** [CMakeFiles/run-ctest-nophys] Error 8
make[3]: Leaving directory `/tmp/vsc40023/easybuild_build/GROMACS/2019/foss-2018b/easybuild_obj'
make[2]: *** [CMakeFiles/run-ctest-nophys.dir/all] Error 2
make[2]: Leaving directory `/tmp/vsc40023/easybuild_build/GROMACS/2019/foss-2018b/easybuild_obj'
make[1]: *** [CMakeFiles/check.dir/rule] Error 2
make[1]: Leaving directory `/tmp/vsc40023/easybuild_build/GROMACS/2019/foss-2018b/easybuild_obj'
make: *** [check] Error 2
(at easybuild/tools/run.py:501 in parse_cmd_output)
== 2019-01-19 20:54:30,817 easyblock.py:2870 WARNING build failed (first 300 chars): cmd "make check -j 36 " exited with exit code 2 and output:
/phanpy/scratch/gent/vo/000/gvo00002/vsc40023/easybuild_REGTEST/CO7/skylake-ib/software/CMake/3.11.4-GCCcore-7.3.0/bin/cmake -H/tmp/vsc40023/easybuild_build/GROMACS/2019/foss-2018b/gromacs-2019 -B/tmp/vsc40023/easybuild_build/GROMACS/2019/f
== 2019-01-19 20:54:30,817 easyblock.py:288 INFO Closing log for application name GROMACS version 2019
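
Before rebuilding anything, it is worth confirming what the installed Open MPI and SLURM on this cluster actually provide. A quick sketch using standard Open MPI, SLURM and CTest commands (the grep pattern and the choice of test are only examples; the build directory is the one named in the log above):

ompi_info | grep -i -e slurm -e pmi     # does this Open MPI report SLURM / PMI / PMIx support?
srun --mpi=list                         # which PMI flavours this SLURM installation offers
# Re-run a single failing test by hand with full output:
cd /tmp/vsc40023/easybuild_build/GROMACS/2019/foss-2018b/easybuild_obj
ctest -R MdrunMpiTests --output-on-failure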