root@odroidxu4:/usr/local/src/tinymembench# taskset -c 4-7 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be         ==
==         copied per second (adding together read and written         ==
==         bytes would give numbers twice as high)                     ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the  ==
==         destination (source -> L1 cache, L1 cache -> destination)   ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in   ==
==         brackets                                                    ==
==========================================================================
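As a rough illustration of Note 3, a 2-pass copy stages data through a scratch buffer small enough to stay in L1 before writing it out. A minimal C sketch of the idea (illustrative only; tinymembench's real kernels are hand-optimized, and the 4 KB buffer size here is an assumption):

    #include <stddef.h>
    #include <string.h>

    /* Simplified 2-pass copy: stage data through a buffer that fits
     * in L1 cache (source -> L1 cache, L1 cache -> destination). */
    static void copy_2pass(void *dst, const void *src, size_t size)
    {
        char tmp[4096];              /* small enough to stay in L1 */
        char *d = dst;
        const char *s = src;

        while (size > 0) {
            size_t chunk = size < sizeof(tmp) ? size : sizeof(tmp);
            memcpy(tmp, s, chunk);   /* pass 1: fetch into L1      */
            memcpy(d, tmp, chunk);   /* pass 2: write out          */
            s += chunk; d += chunk; size -= chunk;
        }
    }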
C copy backwards : 1184.6 MB/s
C copy backwards (32 byte blocks) : 1188.5 MB/s (1.0%)
C copy backwards (64 byte blocks) : 2355.6 MB/s (2.4%)
C copy : 1216.8 MB/s
C copy prefetched (32 bytes step) : 1464.4 MB/s (0.9%)
C copy prefetched (64 bytes step) : 1462.3 MB/s
C 2-pass copy : 1135.8 MB/s (1.6%)
C 2-pass copy prefetched (32 bytes step) : 1403.2 MB/s (1.0%)
C 2-pass copy prefetched (64 bytes step) : 1406.0 MB/s (1.2%)
C fill : 4906.0 MB/s (1.5%)
C fill (shuffle within 16 byte blocks) : 1832.9 MB/s (2.4%)
C fill (shuffle within 32 byte blocks) : 1833.2 MB/s
C fill (shuffle within 64 byte blocks) : 1914.4 MB/s (1.0%)
---
standard memcpy : 2314.8 MB/s (5.3%)
standard memset : 4894.1 MB/s (1.7%)
---
NEON read : 3581.8 MB/s (2.9%)
NEON read prefetched (32 bytes step) : 4461.1 MB/s
NEON read prefetched (64 bytes step) : 4482.1 MB/s (2.0%)
NEON read 2 data streams : 3714.8 MB/s (1.6%)
NEON read 2 data streams prefetched (32 bytes step) : 4600.8 MB/s (4.0%)
NEON read 2 data streams prefetched (64 bytes step) : 4609.2 MB/s (1.4%)
NEON copy : 2848.8 MB/s (2.2%)
NEON copy prefetched (32 bytes step) : 3157.8 MB/s (3.4%)
NEON copy prefetched (64 bytes step) : 3148.1 MB/s (3.8%)
NEON unrolled copy : 2359.5 MB/s (2.0%)
NEON unrolled copy prefetched (32 bytes step) : 3421.0 MB/s (3.0%)
NEON unrolled copy prefetched (64 bytes step) : 3450.3 MB/s (3.1%)
NEON copy backwards : 1251.8 MB/s (1.7%)
NEON copy backwards prefetched (32 bytes step) : 1458.1 MB/s (1.1%)
NEON copy backwards prefetched (64 bytes step) : 1458.3 MB/s (0.8%)
NEON 2-pass copy : 2119.7 MB/s (1.9%)
NEON 2-pass copy prefetched (32 bytes step) : 2354.8 MB/s (2.8%)
NEON 2-pass copy prefetched (64 bytes step) : 2356.4 MB/s (1.3%)
NEON unrolled 2-pass copy : 1430.3 MB/s (0.8%)
NEON unrolled 2-pass copy prefetched (32 bytes step) : 1775.1 MB/s (1.2%)
NEON unrolled 2-pass copy prefetched (64 bytes step) : 1793.6 MB/s (3.1%)
NEON fill : 4868.4 MB/s (1.6%)
NEON fill backwards : 1847.2 MB/s
VFP copy : 2503.4 MB/s (2.4%)
VFP 2-pass copy : 1333.5 MB/s (2.6%)
ARM fill (STRD) : 4886.7 MB/s (1.3%)
ARM fill (STM with 8 registers) : 4879.1 MB/s (1.4%)
ARM fill (STM with 4 registers) : 4893.2 MB/s (1.5%)
ARM copy prefetched (incr pld) : 2969.6 MB/s (3.6%)
ARM copy prefetched (wrap pld) : 2809.9 MB/s (2.3%)
ARM 2-pass copy prefetched (incr pld) : 1651.8 MB/s
ARM 2-pass copy prefetched (wrap pld) : 1630.9 MB/s (1.4%)
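The "prefetched (32/64 bytes step)" rows above issue software prefetch hints a fixed distance ahead of the load stream, so DRAM fetches overlap with the copy itself. A hedged C sketch of the pattern, using the portable __builtin_prefetch hint in place of the hand-written ARM PLD / NEON assembly the real kernels use (STEP and PF_AHEAD are illustrative values):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define STEP     64   /* bytes copied per iteration           */
    #define PF_AHEAD 256  /* prefetch distance in bytes (tunable) */

    /* Copy with an explicit read prefetch issued PF_AHEAD bytes
     * ahead of the current position. */
    static void copy_prefetched(uint8_t *dst, const uint8_t *src, size_t n)
    {
        size_t i;
        for (i = 0; i + STEP <= n; i += STEP) {
            __builtin_prefetch(src + i + PF_AHEAD, 0, 0); /* read hint */
            memcpy(dst + i, src + i, STEP);
        }
        memcpy(dst + i, src + i, n - i);                  /* tail */
    }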
==========================================================================
== Framebuffer read tests                                               ==
==                                                                      ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled.      ==
== Writes to such framebuffers are quite fast, but reads are much      ==
== slower and very sensitive to alignment and to the choice of CPU     ==
== instructions used to access memory.                                 ==
==                                                                      ==
== Many x86 systems allocate the framebuffer in the GPU memory,        ==
== accessible to the CPU via a relatively slow PCI-E bus. Moreover,    ==
== PCI-E is asymmetric and handles reads a lot worse than writes.      ==
==                                                                      ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer   ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall   ==
== performance improvement. For example, the xf86-video-fbturbo DDX    ==
== uses this trick.                                                    ==
==========================================================================
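For context, the buffer these tests read from is the Linux fbdev framebuffer, which userspace can map roughly as follows (a minimal sketch under standard fbdev conventions; error handling is trimmed, and whether the mapping is truly uncached depends on the driver):

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_fix_screeninfo fi;
        if (ioctl(fd, FBIOGET_FSCREENINFO, &fi) < 0) { perror("ioctl"); return 1; }

        /* Reads through this mapping hit uncached (write-combined)
         * memory -- the kind the "from framebuffer" rows measure. */
        volatile uint8_t *fb = mmap(NULL, fi.smem_len, PROT_READ,
                                    MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        unsigned long sum = 0;
        for (size_t i = 0; i < fi.smem_len; i++)
            sum += fb[i];                /* touch every byte */
        printf("read %u bytes, checksum %lu\n", (unsigned)fi.smem_len, sum);

        munmap((void *)fb, fi.smem_len);
        close(fd);
        return 0;
    }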
NEON read (from framebuffer) : 12209.5 MB/s
NEON copy (from framebuffer) : 7055.8 MB/s (1.5%)
NEON 2-pass copy (from framebuffer) : 4603.5 MB/s (1.0%)
NEON unrolled copy (from framebuffer) : 5726.6 MB/s (0.6%)
NEON 2-pass unrolled copy (from framebuffer) : 3791.0 MB/s (0.6%)
VFP copy (from framebuffer) : 5738.3 MB/s
VFP 2-pass copy (from framebuffer) : 3520.9 MB/s (0.4%)
ARM copy (from framebuffer) : 7588.4 MB/s (1.3%)
ARM 2-pass copy (from framebuffer) : 3783.9 MB/s
==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in buffers of   ==
== different sizes. The larger the buffer, the more significant the    ==
== relative contributions of TLB, L1/L2 cache misses and SDRAM         ==
== accesses become. For extremely large buffer sizes we expect to see  ==
== a page table walk with several SDRAM requests for almost every      ==
== memory access (though 64MiB is not nearly large enough to           ==
== experience this effect in full).                                    ==
==                                                                      ==
== Note 1: All numbers represent extra time that must be added to the  ==
==         L1 cache latency. The cycle timings for L1 cache latency    ==
==         can usually be found in the processor documentation.        ==
== Note 2: Dual random read means that two independent memory          ==
==         accesses are performed simultaneously. If the memory        ==
==         subsystem can't handle multiple outstanding requests, dual  ==
==         random read has the same timing as two single reads         ==
==         performed one after another.                                ==
==========================================================================
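The measurement behind this table is a classic pointer chase: walk a randomly permuted chain so that each load's address depends on the previous load and nothing can be pipelined. A minimal C sketch of the access pattern (illustrative, not tinymembench's actual harness; divide the elapsed walk time by the step count to get per-access latency):

    #include <stddef.h>
    #include <stdlib.h>

    /* Build one random cycle over n slots (Sattolo's algorithm), so a
     * walk starting anywhere visits every slot before repeating. */
    static size_t *make_chain(size_t n)
    {
        size_t *c = malloc(n * sizeof *c);
        for (size_t i = 0; i < n; i++)
            c[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;           /* j in [0, i) */
            size_t t = c[i]; c[i] = c[j]; c[j] = t;
        }
        return c;
    }

    /* Single random read: one serially dependent chain of loads. */
    static size_t chase1(const size_t *c, size_t steps)
    {
        size_t p = 0;
        while (steps--)
            p = c[p];
        return p;
    }

    /* Dual random read: two independent chains walked in lockstep.
     * If the memory subsystem can overlap outstanding misses, this
     * takes much less than twice the single-chain time. */
    static size_t chase2(const size_t *a, const size_t *b, size_t steps)
    {
        size_t p = 0, q = 0;
        while (steps--) { p = a[p]; q = b[q]; }
        return p ^ q;
    }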
block size (bytes) : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.1 ns
4096 : 0.0 ns / 0.1 ns
8192 : 0.0 ns / 0.1 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.1 ns
65536 : 4.4 ns / 6.8 ns
131072 : 6.7 ns / 9.1 ns
262144 : 9.6 ns / 12.0 ns
524288 : 11.1 ns / 13.6 ns
1048576 : 11.9 ns / 14.6 ns
2097152 : 19.8 ns / 29.9 ns
4194304 : 95.7 ns / 143.9 ns
8388608 : 134.3 ns / 182.5 ns
16777216 : 153.9 ns / 197.5 ns
33554432 : 169.3 ns / 218.2 ns
67108864 : 179.0 ns / 235.1 ns
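The two runs differ only in CPU affinity: on the stock ODROID-XU4 (Exynos 5422) kernel, cores 4-7 are the Cortex-A15 "big" cluster and cores 0-3 are the Cortex-A7 "LITTLE" cluster, which is why every figure in the second run below is so much lower. Besides taskset, the same pinning can be done in-process; a minimal sketch using glibc's sched_setaffinity:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int cpu = 4; cpu <= 7; cpu++)  /* A15 cluster, like taskset -c 4-7 */
            CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        /* ... run the benchmark kernels here on the pinned cores ... */
        return 0;
    }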
root@odroidxu4:/usr/local/src/tinymembench# taskset -c 0-3 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be         ==
==         copied per second (adding together read and written         ==
==         bytes would give numbers twice as high)                     ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the  ==
==         destination (source -> L1 cache, L1 cache -> destination)   ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in   ==
==         brackets                                                    ==
==========================================================================
C copy backwards : 218.1 MB/s
C copy backwards (32 byte blocks) : 278.1 MB/s (2.3%)
C copy backwards (64 byte blocks) : 299.9 MB/s (4.7%)
C copy : 288.9 MB/s (3.8%)
C copy prefetched (32 bytes step) : 536.9 MB/s (4.0%)
C copy prefetched (64 bytes step) : 688.1 MB/s (10.6%)
C 2-pass copy : 282.2 MB/s (5.0%)
C 2-pass copy prefetched (32 bytes step) : 405.6 MB/s (7.2%)
C 2-pass copy prefetched (64 bytes step) : 423.7 MB/s (7.8%)
C fill : 801.9 MB/s (9.9%)
C fill (shuffle within 16 byte blocks) : 803.3 MB/s (10.9%)
C fill (shuffle within 32 byte blocks) : 485.8 MB/s (0.1%)
C fill (shuffle within 64 byte blocks) : 486.7 MB/s
---
standard memcpy : 363.4 MB/s (4.0%)
standard memset : 590.0 MB/s
---
NEON read : 491.7 MB/s (0.2%)
NEON read prefetched (32 bytes step) : 966.1 MB/s (0.9%)
NEON read prefetched (64 bytes step) : 1018.3 MB/s (0.4%)
NEON read 2 data streams : 470.8 MB/s
NEON read 2 data streams prefetched (32 bytes step) : 965.1 MB/s (1.2%)
NEON read 2 data streams prefetched (64 bytes step) : 1010.6 MB/s (1.1%)
NEON copy : 298.8 MB/s (5.0%)
NEON copy prefetched (32 bytes step) : 704.3 MB/s (8.8%)
NEON copy prefetched (64 bytes step) : 728.3 MB/s (9.9%)
NEON unrolled copy : 263.9 MB/s
NEON unrolled copy prefetched (32 bytes step) : 421.9 MB/s (6.8%)
NEON unrolled copy prefetched (64 bytes step) : 655.9 MB/s (7.9%)
NEON copy backwards : 296.8 MB/s (4.5%)
NEON copy backwards prefetched (32 bytes step) : 699.8 MB/s (9.9%)
NEON copy backwards prefetched (64 bytes step) : 724.9 MB/s (9.6%)
NEON 2-pass copy : 291.1 MB/s (4.9%)
NEON 2-pass copy prefetched (32 bytes step) : 352.5 MB/s
NEON 2-pass copy prefetched (64 bytes step) : 441.0 MB/s (7.3%)
NEON unrolled 2-pass copy : 272.1 MB/s (3.9%)
NEON unrolled 2-pass copy prefetched (32 bytes step) : 364.0 MB/s (5.8%)
NEON unrolled 2-pass copy prefetched (64 bytes step) : 409.6 MB/s (6.4%)
NEON fill : 803.4 MB/s (10.7%)
NEON fill backwards : 803.1 MB/s (9.9%)
VFP copy : 291.8 MB/s (4.2%)
VFP 2-pass copy : 265.1 MB/s (3.5%)
ARM fill (STRD) : 797.1 MB/s (12.0%)
ARM fill (STM with 8 registers) : 803.4 MB/s (10.9%)
ARM fill (STM with 4 registers) : 803.1 MB/s (11.0%)
ARM copy prefetched (incr pld) : 677.5 MB/s (9.3%)
ARM copy prefetched (wrap pld) : 656.5 MB/s (8.7%)
ARM 2-pass copy prefetched (incr pld) : 411.3 MB/s (6.0%)
ARM 2-pass copy prefetched (wrap pld) : 409.7 MB/s (7.5%)
==========================================================================
== Framebuffer read tests                                               ==
==                                                                      ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled.      ==
== Writes to such framebuffers are quite fast, but reads are much      ==
== slower and very sensitive to alignment and to the choice of CPU     ==
== instructions used to access memory.                                 ==
==                                                                      ==
== Many x86 systems allocate the framebuffer in the GPU memory,        ==
== accessible to the CPU via a relatively slow PCI-E bus. Moreover,    ==
== PCI-E is asymmetric and handles reads a lot worse than writes.      ==
==                                                                      ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer   ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall   ==
== performance improvement. For example, the xf86-video-fbturbo DDX    ==
== uses this trick.                                                    ==
==========================================================================
NEON read (from framebuffer) : 3360.0 MB/s
NEON copy (from framebuffer) : 2239.3 MB/s (6.1%)
NEON 2-pass copy (from framebuffer) : 1385.1 MB/s (3.9%)
NEON unrolled copy (from framebuffer) : 1778.4 MB/s (1.4%)
NEON 2-pass unrolled copy (from framebuffer) : 1037.3 MB/s (1.2%)
VFP copy (from framebuffer) : 1925.6 MB/s (1.4%)
VFP 2-pass copy (from framebuffer) : 1108.7 MB/s (1.2%)
ARM copy (from framebuffer) : 2970.2 MB/s (1.0%)
ARM 2-pass copy (from framebuffer) : 1438.9 MB/s (1.2%)
==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in buffers of   ==
== different sizes. The larger the buffer, the more significant the    ==
== relative contributions of TLB, L1/L2 cache misses and SDRAM         ==
== accesses become. For extremely large buffer sizes we expect to see  ==
== a page table walk with several SDRAM requests for almost every      ==
== memory access (though 64MiB is not nearly large enough to           ==
== experience this effect in full).                                    ==
==                                                                      ==
== Note 1: All numbers represent extra time that must be added to the  ==
==         L1 cache latency. The cycle timings for L1 cache latency    ==
==         can usually be found in the processor documentation.        ==
== Note 2: Dual random read means that two independent memory          ==
==         accesses are performed simultaneously. If the memory        ==
==         subsystem can't handle multiple outstanding requests, dual  ==
==         random read has the same timing as two single reads         ==
==         performed one after another.                                ==
==========================================================================
block size (bytes) : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 4.1 ns / 7.5 ns
131072 : 6.4 ns / 10.8 ns
262144 : 7.6 ns / 12.3 ns
524288 : 10.1 ns / 15.8 ns
1048576 : 76.4 ns / 118.3 ns
2097152 : 115.2 ns / 155.2 ns
4194304 : 135.4 ns / 168.1 ns
8388608 : 147.6 ns / 175.9 ns
16777216 : 155.2 ns / 182.6 ns
33554432 : 163.6 ns / 195.3 ns
67108864 : 176.1 ns / 217.5 ns