Debian Jessie, kernel 4.9.37, 825 MHz ddr_freq
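For reference: on Hardkernel XU4 images the DRAM clock is set at boot through the ddr_freq variable in boot.ini on the FAT boot partition, so the two Debian runs below differ only in that setting. A sketch of the relevant line (assuming the stock boot.ini layout, which varies between image releases):

# /media/boot/boot.ini -- U-Boot script, layout is an assumption
setenv ddr_freq 825   # first run below; 933 was used for the second run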
root@odroidxu4:/usr/local/src/tinymembench# taskset -c 4-7 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and written ==
== bytes would give numbers twice as high) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
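As a reading aid for Note 3, here is a minimal C sketch of the 2-pass idea (my illustration, not tinymembench's actual code; the 4 KiB scratch size is an assumption chosen to fit comfortably in L1):

#include <string.h>

#define SCRATCH 4096  /* assumed scratch size, small enough to stay in L1 */

/* Pass 1 pulls a block into an L1-resident scratch buffer, pass 2
 * writes it out: source -> L1 cache, L1 cache -> destination. */
static void two_pass_copy(char *dst, const char *src, size_t n)
{
    char tmp[SCRATCH];
    while (n > 0) {
        size_t chunk = n < SCRATCH ? n : SCRATCH;
        memcpy(tmp, src, chunk);  /* pass 1: fetch into scratch */
        memcpy(dst, tmp, chunk);  /* pass 2: store to destination */
        src += chunk; dst += chunk; n -= chunk;
    }
}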
C copy backwards : 1199.7 MB/s (0.2%)
C copy backwards (32 byte blocks) : 1208.0 MB/s (0.3%)
C copy backwards (64 byte blocks) : 2416.7 MB/s (3.2%)
C copy : 1234.1 MB/s
C copy prefetched (32 bytes step) : 1471.8 MB/s (0.5%)
C copy prefetched (64 bytes step) : 1470.2 MB/s (0.6%)
C 2-pass copy : 1159.8 MB/s
C 2-pass copy prefetched (32 bytes step) : 1425.0 MB/s (2.0%)
C 2-pass copy prefetched (64 bytes step) : 1428.3 MB/s (0.2%)
C fill : 4865.4 MB/s (0.4%)
C fill (shuffle within 16 byte blocks) : 1830.4 MB/s (0.2%)
C fill (shuffle within 32 byte blocks) : 1829.4 MB/s (2.7%)
C fill (shuffle within 64 byte blocks) : 1912.0 MB/s
---
standard memcpy : 2299.6 MB/s (0.6%)
standard memset : 4891.3 MB/s (1.2%)
---
NEON read : 3387.1 MB/s (0.1%)
NEON read prefetched (32 bytes step) : 4285.0 MB/s (4.3%)
NEON read prefetched (64 bytes step) : 4297.3 MB/s
NEON read 2 data streams : 3497.1 MB/s (0.3%)
NEON read 2 data streams prefetched (32 bytes step) : 4482.2 MB/s (0.4%)
NEON read 2 data streams prefetched (64 bytes step) : 4488.3 MB/s
NEON copy : 2599.5 MB/s (3.1%)
NEON copy prefetched (32 bytes step) : 2926.4 MB/s
NEON copy prefetched (64 bytes step) : 2920.4 MB/s (0.6%)
NEON unrolled copy : 2263.8 MB/s (0.9%)
NEON unrolled copy prefetched (32 bytes step) : 3242.2 MB/s (2.6%)
NEON unrolled copy prefetched (64 bytes step) : 3263.9 MB/s (1.1%)
NEON copy backwards : 1224.7 MB/s (0.3%)
NEON copy backwards prefetched (32 bytes step) : 1436.9 MB/s (0.3%)
NEON copy backwards prefetched (64 bytes step) : 1435.4 MB/s
NEON 2-pass copy : 2074.9 MB/s (3.5%)
NEON 2-pass copy prefetched (32 bytes step) : 2250.0 MB/s
NEON 2-pass copy prefetched (64 bytes step) : 2249.9 MB/s (0.4%)
NEON unrolled 2-pass copy : 1390.9 MB/s (0.7%)
NEON unrolled 2-pass copy prefetched (32 bytes step) : 1720.8 MB/s (0.4%)
NEON unrolled 2-pass copy prefetched (64 bytes step) : 1733.9 MB/s
NEON fill : 4894.5 MB/s (1.1%)
NEON fill backwards : 1839.1 MB/s
VFP copy : 2481.4 MB/s (0.6%)
VFP 2-pass copy : 1324.4 MB/s (0.3%)
ARM fill (STRD) : 4892.5 MB/s (1.2%)
ARM fill (STM with 8 registers) : 4870.3 MB/s (0.7%)
ARM fill (STM with 4 registers) : 4897.5 MB/s (0.4%)
ARM copy prefetched (incr pld) : 2945.4 MB/s (0.2%)
ARM copy prefetched (wrap pld) : 2776.4 MB/s (3.0%)
ARM 2-pass copy prefetched (incr pld) : 1638.2 MB/s (0.4%)
ARM 2-pass copy prefetched (wrap pld) : 1616.3 MB/s
==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions which are used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================
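The framebuffer numbers below can be reproduced in spirit by mmap()ing the framebuffer device and timing a plain read pass over it; a rough self-contained sketch (assumes /dev/fb0 exists with a 1920x1080 32bpp mode, error handling omitted):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1920 * 1080 * 4;   /* assumed mode: 1080p at 32bpp */
    int fd = open("/dev/fb0", O_RDONLY);
    volatile uint32_t *fb = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);

    struct timespec t0, t1;
    uint32_t sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < len / 4; i++)
        sum += fb[i];               /* plain 32-bit reads, no NEON */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("read: %.1f MB/s (checksum %u)\n", len / s / 1e6, sum);
    close(fd);
    return 0;
}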
NEON read (from framebuffer) : 12259.8 MB/s (0.2%)
NEON copy (from framebuffer) : 7046.1 MB/s (0.8%)
NEON 2-pass copy (from framebuffer) : 4664.6 MB/s (0.3%)
NEON unrolled copy (from framebuffer) : 5722.7 MB/s (0.3%)
NEON 2-pass unrolled copy (from framebuffer) : 3815.3 MB/s
VFP copy (from framebuffer) : 5724.9 MB/s
VFP 2-pass copy (from framebuffer) : 3521.2 MB/s
ARM copy (from framebuffer) : 7590.9 MB/s (0.1%)
ARM 2-pass copy (from framebuffer) : 3813.6 MB/s (0.6%)
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in buffers of ==
== different sizes. The larger the buffer, the more significant the ==
== relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses become. For extremely large buffer sizes we expect to see ==
== a page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers represent extra time, which needs to be ==
== added to the L1 cache latency. The cycle timings for L1 cache ==
== latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. If the memory ==
== subsystem can't handle multiple outstanding requests, dual ==
== random read has the same timings as two single reads ==
== performed one after another. ==
==========================================================================
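The standard way to measure this kind of latency is a dependent pointer chase through a randomly permuted buffer; the sketch below illustrates the idea, including the dual-read variant from Note 2 (my illustration with arbitrary sizes, not tinymembench's code):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    size_t n = 1 << 22;                   /* 4M entries = 32 MiB buffer */
    size_t *chain = malloc(n * sizeof *chain);
    for (size_t i = 0; i < n; i++) chain[i] = i;
    /* Sattolo's algorithm: produces a single full cycle, so every load
     * depends on the previous one and the walk covers the whole buffer. */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }
    size_t a = 0, b = n / 2, steps = 10000000;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < steps; s++) {
        a = chain[a];  /* each load depends on the previous one */
        b = chain[b];  /* independent second chain -> dual random read */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per iteration (a=%zu b=%zu)\n", ns / steps, a, b);
    free(chain);
    return 0;
}

If the memory subsystem supports multiple outstanding misses, the dual chase costs barely more per iteration than a single one; printing a and b keeps the compiler from discarding the loop.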
block size : single random read / dual random read
1024 : 0.0 ns / 0.1 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.1 ns
8192 : 0.0 ns / 0.1 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.1 ns
65536 : 4.4 ns / 6.8 ns
131072 : 6.7 ns / 9.0 ns
262144 : 9.6 ns / 11.9 ns
524288 : 11.0 ns / 13.6 ns
1048576 : 12.0 ns / 14.6 ns
2097152 : 20.9 ns / 29.0 ns
4194304 : 96.1 ns / 145.2 ns
8388608 : 134.7 ns / 184.3 ns
16777216 : 154.8 ns / 201.4 ns
33554432 : 171.2 ns / 220.1 ns
67108864 : 181.0 ns / 232.4 ns
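Before the second run, one aside on the 'prefetched' copy variants above: they issue a software preload a fixed distance ahead of the copy loop. A minimal C sketch of the pattern using GCC's __builtin_prefetch (my illustration; the 256-byte prefetch distance is an assumption, not tinymembench's tuning):

#include <string.h>

#define PF_DIST 256  /* assumed prefetch distance, in bytes */

/* Copy in 64-byte (cache line) steps, prefetching PF_DIST bytes ahead
 * of the current read position so the data is already in flight when
 * the loop reaches it. */
static void copy_prefetched(char *dst, const char *src, size_t n)
{
    size_t i;
    for (i = 0; i + 64 <= n; i += 64) {
        __builtin_prefetch(src + i + PF_DIST, 0, 0);  /* read, no reuse */
        memcpy(dst + i, src + i, 64);
    }
    memcpy(dst + i, src + i, n - i);  /* remainder */
}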
Debian Jessie, kernel 4.9.37, 933 MHz ddr_freq
root@odroidxu4:/usr/local/src/tinymembench# taskset -c 4-7 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and written ==
== bytes would give numbers twice as high) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
C copy backwards : 1239.7 MB/s
C copy backwards (32 byte blocks) : 1248.5 MB/s
C copy backwards (64 byte blocks) : 2470.7 MB/s (2.4%)
C copy : 1272.5 MB/s
C copy prefetched (32 bytes step) : 1532.0 MB/s (0.3%)
C copy prefetched (64 bytes step) : 1529.5 MB/s
C 2-pass copy : 1185.6 MB/s (0.2%)
C 2-pass copy prefetched (32 bytes step) : 1467.7 MB/s (2.6%)
C 2-pass copy prefetched (64 bytes step) : 1469.1 MB/s (0.3%)
C fill : 5469.8 MB/s (1.3%)
C fill (shuffle within 16 byte blocks) : 1933.3 MB/s (0.2%)
C fill (shuffle within 32 byte blocks) : 1933.5 MB/s (0.4%)
C fill (shuffle within 64 byte blocks) : 2036.7 MB/s (2.8%)
---
standard memcpy : 2469.9 MB/s (0.6%)
standard memset : 5463.3 MB/s (0.8%)
---
NEON read : 3600.0 MB/s (0.4%)
NEON read prefetched (32 bytes step) : 4624.0 MB/s (2.5%)
NEON read prefetched (64 bytes step) : 4635.5 MB/s (1.1%)
NEON read 2 data streams : 3726.9 MB/s (0.3%)
NEON read 2 data streams prefetched (32 bytes step) : 4813.4 MB/s (0.4%)
NEON read 2 data streams prefetched (64 bytes step) : 4822.3 MB/s (2.8%)
NEON copy : 2903.0 MB/s
NEON copy prefetched (32 bytes step) : 3281.4 MB/s (0.6%)
NEON copy prefetched (64 bytes step) : 3277.3 MB/s (0.6%)
NEON unrolled copy : 2526.1 MB/s (2.3%)
NEON unrolled copy prefetched (32 bytes step) : 3613.4 MB/s (0.2%)
NEON unrolled copy prefetched (64 bytes step) : 3636.0 MB/s (2.7%)
NEON copy backwards : 1328.7 MB/s (0.2%)
NEON copy backwards prefetched (32 bytes step) : 1550.8 MB/s (0.2%)
NEON copy backwards prefetched (64 bytes step) : 1549.1 MB/s (2.0%)
NEON 2-pass copy : 2131.3 MB/s
NEON 2-pass copy prefetched (32 bytes step) : 2426.6 MB/s
NEON 2-pass copy prefetched (64 bytes step) : 2445.5 MB/s (0.3%)
NEON unrolled 2-pass copy : 1509.4 MB/s (0.2%)
NEON unrolled 2-pass copy prefetched (32 bytes step) : 1865.9 MB/s (2.6%)
NEON unrolled 2-pass copy prefetched (64 bytes step) : 1882.5 MB/s (1.1%)
NEON fill : 5462.2 MB/s (1.1%)
NEON fill backwards : 1962.6 MB/s
VFP copy : 2887.6 MB/s (0.6%)
VFP 2-pass copy : 1467.9 MB/s (1.9%)
ARM fill (STRD) : 5425.2 MB/s (0.4%)
ARM fill (STM with 8 registers) : 5448.5 MB/s (0.5%)
ARM fill (STM with 4 registers) : 5462.4 MB/s (0.4%)
ARM copy prefetched (incr pld) : 3327.0 MB/s (3.5%)
ARM copy prefetched (wrap pld) : 3167.3 MB/s (0.6%)
ARM 2-pass copy prefetched (incr pld) : 1796.8 MB/s (0.9%)
ARM 2-pass copy prefetched (wrap pld) : 1766.6 MB/s
==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions which are used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================
NEON read (from framebuffer) : 12205.4 MB/s
NEON copy (from framebuffer) : 7045.1 MB/s (0.4%)
NEON 2-pass copy (from framebuffer) : 4654.7 MB/s (1.7%)
NEON unrolled copy (from framebuffer) : 5721.0 MB/s
NEON 2-pass unrolled copy (from framebuffer) : 3818.8 MB/s
VFP copy (from framebuffer) : 5733.0 MB/s
VFP 2-pass copy (from framebuffer) : 3524.9 MB/s (0.2%)
ARM copy (from framebuffer) : 7588.9 MB/s (0.1%)
ARM 2-pass copy (from framebuffer) : 3787.2 MB/s
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in buffers of ==
== different sizes. The larger the buffer, the more significant the ==
== relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses become. For extremely large buffer sizes we expect to see ==
== a page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers represent extra time, which needs to be ==
== added to the L1 cache latency. The cycle timings for L1 cache ==
== latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. If the memory ==
== subsystem can't handle multiple outstanding requests, dual ==
== random read has the same timings as two single reads ==
== performed one after another. ==
==========================================================================
block size : single random read / dual random read
1024 : 0.0 ns / 0.1 ns
2048 : 0.0 ns / 0.1 ns
4096 : 0.0 ns / 0.1 ns
8192 : 0.0 ns / 0.1 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.1 ns
65536 : 4.4 ns / 6.9 ns
131072 : 6.7 ns / 9.0 ns
262144 : 9.6 ns / 12.0 ns
524288 : 11.0 ns / 13.7 ns
1048576 : 12.0 ns / 14.7 ns
2097152 : 20.2 ns / 28.6 ns
4194304 : 89.5 ns / 135.1 ns
8388608 : 125.3 ns / 172.1 ns
16777216 : 143.6 ns / 187.8 ns
33554432 : 158.0 ns / 210.4 ns
67108864 : 166.9 ns / 222.6 ns
Official Ubuntu Xenial Hardkernel image, kernel 4.9.28, no ddr_freq changes:
root@odroid:/usr/local/src/tinymembench# taskset -c 4-7 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and written ==
== bytes would give numbers twice as high) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
C copy backwards : 1178.8 MB/s
C copy backwards (32 byte blocks) : 1165.0 MB/s
C copy backwards (64 byte blocks) : 2397.9 MB/s
C copy : 2574.3 MB/s
C copy prefetched (32 bytes step) : 2847.8 MB/s
C copy prefetched (64 bytes step) : 2916.8 MB/s
C 2-pass copy : 1356.8 MB/s
C 2-pass copy prefetched (32 bytes step) : 1619.3 MB/s
C 2-pass copy prefetched (64 bytes step) : 1633.6 MB/s
C fill : 4935.2 MB/s (1.0%)
C fill (shuffle within 16 byte blocks) : 1845.1 MB/s
C fill (shuffle within 32 byte blocks) : 1844.7 MB/s
C fill (shuffle within 64 byte blocks) : 1927.9 MB/s
---
standard memcpy : 2316.3 MB/s
standard memset : 4950.1 MB/s (1.1%)
---
NEON read : 3370.8 MB/s
NEON read prefetched (32 bytes step) : 4294.9 MB/s
NEON read prefetched (64 bytes step) : 4304.6 MB/s
NEON read 2 data streams : 3485.0 MB/s
NEON read 2 data streams prefetched (32 bytes step) : 4478.6 MB/s
NEON read 2 data streams prefetched (64 bytes step) : 4487.0 MB/s
NEON copy : 2652.3 MB/s
NEON copy prefetched (32 bytes step) : 2953.8 MB/s (0.2%)
NEON copy prefetched (64 bytes step) : 2942.8 MB/s
NEON unrolled copy : 2250.6 MB/s
NEON unrolled copy prefetched (32 bytes step) : 3280.6 MB/s
NEON unrolled copy prefetched (64 bytes step) : 3302.6 MB/s
NEON copy backwards : 1226.8 MB/s
NEON copy backwards prefetched (32 bytes step) : 1441.2 MB/s
NEON copy backwards prefetched (64 bytes step) : 1440.6 MB/s
NEON 2-pass copy : 2111.7 MB/s
NEON 2-pass copy prefetched (32 bytes step) : 2251.2 MB/s
NEON 2-pass copy prefetched (64 bytes step) : 2252.0 MB/s
NEON unrolled 2-pass copy : 1392.8 MB/s
NEON unrolled 2-pass copy prefetched (32 bytes step) : 1737.1 MB/s
NEON unrolled 2-pass copy prefetched (64 bytes step) : 1752.6 MB/s
NEON fill : 4927.3 MB/s (0.8%)
NEON fill backwards : 1857.4 MB/s
VFP copy : 2485.4 MB/s
VFP 2-pass copy : 1328.5 MB/s
ARM fill (STRD) : 4930.7 MB/s (1.0%)
ARM fill (STM with 8 registers) : 4914.0 MB/s
ARM fill (STM with 4 registers) : 4941.0 MB/s (0.2%)
ARM copy prefetched (incr pld) : 3177.5 MB/s
ARM copy prefetched (wrap pld) : 2989.0 MB/s
ARM 2-pass copy prefetched (incr pld) : 1700.1 MB/s
ARM 2-pass copy prefetched (wrap pld) : 1670.9 MB/s
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in buffers of ==
== different sizes. The larger the buffer, the more significant the ==
== relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses become. For extremely large buffer sizes we expect to see ==
== a page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers represent extra time, which needs to be ==
== added to the L1 cache latency. The cycle timings for L1 cache ==
== latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. If the memory ==
== subsystem can't handle multiple outstanding requests, dual ==
== random read has the same timings as two single reads ==
== performed one after another. ==
==========================================================================
block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 4.4 ns / 6.8 ns
131072 : 6.7 ns / 9.0 ns
262144 : 9.6 ns / 11.9 ns
524288 : 11.0 ns / 13.6 ns
1048576 : 11.9 ns / 14.5 ns
2097152 : 19.3 ns / 27.6 ns
4194304 : 95.0 ns / 143.0 ns
8388608 : 133.6 ns / 181.7 ns
16777216 : 153.2 ns / 196.7 ns
33554432 : 168.5 ns / 216.5 ns
67108864 : 177.9 ns / 231.5 ns
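One note on methodology: all three runs were pinned with taskset -c 4-7, which on the XU4's Exynos 5422 selects the Cortex-A15 'big' cluster (CPUs 0-3 are typically the Cortex-A7 cores on these kernels), so the figures above reflect the big cores only. For a LITTLE-cluster comparison one could rerun with:

taskset -c 0-3 ./tinymembench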