[0] DAPL startup(): trying to open default DAPL provider from dat registry: ibnic0v2
[1] DAPL startup(): trying to open default DAPL provider from dat registry: ibnic0v2
[0] MPI startup(): DAPL provider ibnic0v2
[1] MPI startup(): DAPL provider ibnic0v2
[0] MPI startup(): dapl data transfer mode
[1] MPI startup(): dapl data transfer mode
[0] MPI startup(): Internal info: pinning initialization was done
[0] MPI startup(): Rank    Pid      Node name  Pin cpu
[0] MPI startup(): 0       3700     CN01       {0,1,2,3,4,5,6,7}
[0] MPI startup(): 1       2632     CN02       {0,1,2,3,4,5,6,7}
[1] MPI startup(): Internal info: pinning initialization was done
[0] MPI startup(): I_MPI_DEBUG=5
[0] MPI startup(): I_MPI_PIN_MAPPING=1:0 0
#---------------------------------------------------
#    Intel (R) MPI Benchmark Suite V3.2.3, MPI-1 part
#---------------------------------------------------
# Date                  : Thu Jan 17 08:56:05 2013
# Machine               : Intel(R) 64 Family 6 Model 26 Stepping 5, GenuineIntel
# Release               : 6.1.7601
# Version               : Service Pack 1
# MPI Version           : 2.2
# MPI Thread Environment: MPI_THREAD_MULTIPLE

# New default behavior from Version 3.2 on:
# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time

# Calling sequence was:

# C:\Users\sg\Desktop\imb_3.2.3\WINDOWS\IMB-MPI1_VS_2010\x64\Release\IMB-MPI1.exe
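The binary was invoked with no arguments, so all benchmarks run with their defaults. A launch command consistent with this log (two ranks, one each on CN01 and CN02, with I_MPI_DEBUG=5 set) might look like the sketch below; the exact mpiexec options vary by Intel MPI version, so treat the host file name and flags as assumptions rather than the command actually used:

    mpiexec -n 2 -machinefile hosts.txt -genv I_MPI_DEBUG 5 IMB-MPI1.exe

where hosts.txt (hypothetical) lists CN01 and CN02, one per line.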
# Minimum message length in bytes:   0
# Maximum message length in bytes:   4194304
#
# MPI_Datatype                   :   MPI_BYTE
# MPI_Datatype for reductions    :   MPI_FLOAT
# MPI_Op                         :   MPI_SUM
#
#

# List of Benchmarks to run:

# PingPong
# PingPing
# Sendrecv
# Exchange
# Allreduce
# Reduce
# Reduce_scatter
# Allgather
# Allgatherv
# Gather
# Gatherv
# Scatter
# Scatterv
# Alltoall
# Alltoallv
# Bcast
# Barrier

#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
       #bytes #repetitions      t[usec]   Mbytes/sec
            0         1000         3.99         0.00
            1         1000         3.99         0.24
            2         1000         3.76         0.51
            4         1000         3.77         1.01
            8         1000         3.78         2.02
           16         1000         3.81         4.01
           32         1000         3.93         7.77
           64         1000         3.93        15.52
          128         1000         4.05        30.12
          256         1000         4.10        59.57
          512         1000         4.41       110.62
         1024         1000         4.99       195.63
         2048         1000         6.22       314.13
         4096         1000         8.30       470.55
         8192         1000        10.63       735.28
        16384         1000        15.31      1020.76
        32768         1000        21.21      1473.49
        65536          640        31.53      1982.50
       131072          320        52.39      2385.87
       262144          160        94.76      2638.22
       524288           80       185.22      2699.49
      1048576           40       356.92      2801.73
      2097152           20       699.45      2859.38
      4194304           10      1393.73      2870.00
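The Mbytes/sec column follows directly from the two columns to its left: IMB divides the message size by the measured time, counting 1 Mbyte as 2^20 bytes. As a worked check against the last row: 4194304 bytes / 1393.73 usec ≈ 3009.4 bytes/usec ≈ 3.01e9 bytes/sec, and dividing by 2^20 gives the reported 2870.00 Mbytes/sec. Note also how #repetitions is cut from 1000 down to 10 at the largest sizes; that is the dynamic iteration cut-off described in the header above.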
...

#----------------------------------------------------------------
# Benchmarking Bcast
# #processes = 2
#----------------------------------------------------------------
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]
            0         1000         0.05         0.46         0.26
            1         1000         3.41         3.41         3.41
            2         1000         3.42         3.42         3.42
            4         1000         3.42         3.42         3.42
            8         1000         3.42         3.42         3.42
           16         1000         3.43         3.44         3.43
           32         1000         3.50         3.51         3.50
           64         1000         3.55         3.55         3.55
          128         1000         3.57         3.58         3.57
          256         1000         3.79         3.80         3.80
          512         1000         4.08         4.08         4.08
         1024         1000         4.74         4.75         4.74
         2048         1000         5.89         5.90         5.89
         4096         1000         8.12         8.13         8.13
         8192         1000        10.31        10.32        10.31
        16384         1000        14.74        14.75        14.74
        32768         1000        20.05        20.05        20.05

Fatal error in PMPI_Bcast: Other MPI error, error stack:
PMPI_Bcast(2112)........: MPI_Bcast(buf=00000000030C0040, count=65536, MPI_BYTE, root=0, comm=0x84000000) failed
MPIR_Bcast_impl(1670)...:
I_MPIR_Bcast_intra(1887): Failure during collective
MPIR_Bcast_intra(1461)..:
MPIR_Bcast_binomial(156): message sizes do not match across processes in the collective
[0:CN01.****.*****] unexpected disconnect completion event from [1:CN02.****.*****]
Assertion failed in file .\dapl_conn_rc.c at line 1128: 0
internal ABORT - process 0
job aborted:
rank: node: exit code[: error message]
0: CN01.****.*****: 1: process 0 exited without calling finalize
1: CN02: 1: process 1 exited without calling finalize
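The crash happens at the first Bcast step after the table ends: the 32768-byte step completes, then the 65536-byte call (count=65536 in the error stack) fails, the binomial broadcast reports a size mismatch, and the DAPL layer logs an unexpected disconnect from CN02. Since IMB passes identical counts on every rank, the mismatch report here is most likely a symptom of the lost interconnect connection rather than an application bug. For reference, the same check also fires for genuinely invalid programs in which ranks disagree on the broadcast size; a minimal, deliberately wrong C sketch (the mismatched counts are invented purely for illustration):

    #include <mpi.h>
    #include <stdlib.h>

    /* Deliberately erroneous sketch: the MPI standard requires every
     * rank in the communicator to pass a matching count/datatype
     * signature to MPI_Bcast. Implementations that verify this, as
     * Intel MPI's binomial broadcast does, fail with "message sizes
     * do not match across processes in the collective". */
    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Rank 0 broadcasts 65536 bytes; every other rank expects
         * only 32768, which violates the matching-signature rule. */
        int count = (rank == 0) ? 65536 : 32768;
        char *buf = malloc(65536);

        MPI_Bcast(buf, count, MPI_BYTE, 0, MPI_COMM_WORLD);

        free(buf);
        MPI_Finalize();
        return 0;
    }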