- Git version control system version 1.8.4-rc1 loaded.
- VampirTrace open-source instrumentation library (GNU) version 5.14.4 loaded.
- cat: tau_worst.moab: No such file or directory
- + export MPICH_NEMESIS_ASYNC_PROGRESS=SC
- + MPICH_NEMESIS_ASYNC_PROGRESS=SC
- + export MPICH_MAX_THREAD_SAFETY=multiple
- + MPICH_MAX_THREAD_SAFETY=multiple
- + export TAU_PROFILE_FORMAT=merged
- + TAU_PROFILE_FORMAT=merged
- + echo '\nNode 128 scale 29 with priority queue \n'
- \nNode 128 scale 29 with priority queue \n
- + echo '\n--------------------\n'
- \n--------------------\n
- + aprun -b -n 128 -N 1 -d 15 -r 1 ../performance_test --poll-task 4 --threads 15 --scale 29 --degree 16 --num-sources 8 --coalescing-size 43000 --flush 18 --eager-limit 10 --max-weight 100 --receive-depth 8 --priority_coalescing_size 43000 --with-no-reductions --without-per-thread-reductions --run_dc
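For reference, a hedged annotation of the `aprun` launch flags used above (standard Cray ALPS semantics; the long options after the executable belong to `performance_test` itself and are taken from the log):

```shell
# Cray ALPS launch flags (abridged from the command in the log):
#   -b      bypass staging the executable to the compute nodes
#   -n 128  128 PEs (MPI ranks) in total
#   -N 1    one PE per node
#   -d 15   depth of 15 CPUs per PE (matches --threads 15)
#   -r 1    reserve 1 core per node for system services
launch='aprun -b -n 128 -N 1 -d 15 -r 1 ../performance_test --threads 15 --scale 29'
echo "$launch"
```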
- Starting with MPI_THREAD_MULTIPLE.
-   (line repeated by each of the 128 PEs; repeats elided)
- Thread level: 3
-   (line repeated by each PE; repeats elided)
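The "Thread level: 3" lines report the integer thread-support level returned by `MPI_Init_thread`. In MPICH-derived implementations (including Cray MPICH), the thread-level constants are conventionally numbered 0 through 3, so 3 corresponds to `MPI_THREAD_MULTIPLE`, consistent with the "Starting with MPI_THREAD_MULTIPLE." banner. A minimal decoding sketch, assuming MPICH's constant values (the MPI standard leaves them implementation-defined):

```python
# Decode the reported MPI thread-support level; the numeric values
# below assume MPICH-style constants.
MPICH_THREAD_LEVELS = {
    0: "MPI_THREAD_SINGLE",
    1: "MPI_THREAD_FUNNELED",
    2: "MPI_THREAD_SERIALIZED",
    3: "MPI_THREAD_MULTIPLE",
}

reported = 3  # value printed by each rank in the log above
print(MPICH_THREAD_LEVELS[reported])  # MPI_THREAD_MULTIPLE
```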
- Graph generation took 855.81s
- Maximum Degree is 6007462
- Threads: 15 Coalescing: 43000 Poll: 4 Routing: 0 Depth: 8 Priority: 43000 Flush: 18 Eager: 10
- per_thread_reductions - 0 no_reductions : 1
- Reduction: 0
- per_thread_reductions - 0 no_reductions : 1
-   (line repeated by each PE; repeats elided)
- _pmiu_daemon(SIGCHLD): [NID 00099] [c2-0c2s1n3] [Fri Sep 19 22:43:30 2014] PE RANK 92 exit signal Segmentation fault
- [NID 00099] 2014-09-19 22:43:30 Apid 3826614: initiated application termination
- Application 3826614 exit codes: 139
- Application 3826614 exit signals: Killed
- Application 3826614 resources: utime ~854s, stime ~7s, Rss ~5465904, inblocks ~50716, outblocks ~976
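Exit code 139 is consistent with the segmentation fault reported by `_pmiu_daemon`: a process killed by a fatal signal exits with status 128 + the signal number, and SIGSEGV is signal 11. A quick check of that arithmetic:

```shell
# An exit status above 128 means the process died on a signal:
# status = 128 + signal number. SIGSEGV is signal 11.
status=139
sig=$((status - 128))
echo "signal $sig"
```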
- Job Cleanup ran on Fri Sep 19 22:43:31 EDT 2014
- ===============================
- submit_args : submit_args = tau_best.moab
- NIDS : 96-101,103,114,120-127,136-146,148-151,232-247,258-263,270,273,281-285,288-294,298,309,312,314-319,326-333,338-345,420-443,448-451,457,468-469,471,476-479
- NID Placement: 96/16 97/16 98/16 99/16 100/16 101/16 103/16 114/16 120/16 121/16 122/16 123/16 124/16 125/16 126/16 127/16 136/16 137/16 138/16 139/16 140/16 141/16 142/16 143/16 144/16 145/16 146/16 148/16 149/16 150/16 151/16 232/16 233/16 234/16 235/16 236/16 237/16 238/16 239/16 240/16 241/16 242/16 243/16 244/16 245/16 246/16 247/16 258/16 259/16 260/16 261/16 262/16 263/16 270/16 273/16 281/16 282/16 283/16 284/16 285/16 288/16 289/16 290/16 291/16 292/16 293/16 294/16 298/16 309/16 312/16 314/16 315/16 316/16 317/16 318/16 319/16 326/16 327/16 328/16 329/16 330/16 331/16 332/16 333/16 338/16 339/16 340/16 341/16 342/16 343/16 344/16 345/16 420/16 421/16 422/16 423/16 424/16 425/16 426/16 427/16 428/16 429/16 430/16 431/16 432/16 433/16 434/16 435/16 436/16 437/16 438/16 439/16 440/16 441/16 442/16 443/16 448/16 449/16 450/16 451/16 457/16 468/16 469/16 471/16 476/16 477/16 478/16 479/16