tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and written bytes    ==
==         would give numbers twice as high)                            ==
== Note 3: 2-pass copy means that data is first fetched into a small    ==
==         temporary buffer and only then written to the destination    ==
==         (source -> L1 cache, L1 cache -> destination)                ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================
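The 2-pass copy described in Note 3 can be pictured with a short C sketch. This is only an illustration, not the actual tinymembench code: the 2 KiB staging size and the assumption that the copy length is a multiple of it are made up for brevity. The point is that data is staged through a buffer small enough to stay resident in the L1 cache, so the read pass and the write pass do not compete for the same cache lines.

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical 2-pass copy: stage data through a small buffer that fits
     * in the L1 data cache, then write it out to the destination.
     * Assumes size is a multiple of STAGE_SIZE. */
    #define STAGE_SIZE 2048   /* assumed; must be well below the L1 data cache size */

    static void copy_2pass(char *dst, const char *src, size_t size)
    {
        char stage[STAGE_SIZE];
        for (size_t off = 0; off < size; off += STAGE_SIZE) {
            memcpy(stage, src + off, STAGE_SIZE);  /* pass 1: source -> L1 cache */
            memcpy(dst + off, stage, STAGE_SIZE);  /* pass 2: L1 cache -> dest   */
        }
    }
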
C copy backwards : 1732.0 MB/s (1.1%)
C copy backwards (32 byte blocks) : 1715.0 MB/s (0.5%)
C copy backwards (64 byte blocks) : 1711.0 MB/s (0.5%)
C copy : 1730.6 MB/s (1.7%)
C copy prefetched (32 bytes step) : 1507.7 MB/s (0.2%)
C copy prefetched (64 bytes step) : 1823.1 MB/s (0.3%)
C 2-pass copy : 1977.7 MB/s (0.2%)
C 2-pass copy prefetched (32 bytes step) : 1482.3 MB/s (0.2%)
C 2-pass copy prefetched (64 bytes step) : 1438.5 MB/s (0.2%)
C fill : 7718.3 MB/s (0.4%)
C fill (shuffle within 16 byte blocks) : 7717.3 MB/s (0.4%)
C fill (shuffle within 32 byte blocks) : 7707.1 MB/s (0.4%)
C fill (shuffle within 64 byte blocks) : 7706.5 MB/s (0.4%)
---
standard memcpy : 1700.1 MB/s (0.2%)
standard memset : 7726.6 MB/s (0.4%)
---
NEON LDP/STP copy : 1949.6 MB/s (0.3%)
NEON LDP/STP copy pldl2strm (32 bytes step) : 1581.1 MB/s (0.4%)
NEON LDP/STP copy pldl2strm (64 bytes step) : 1819.7 MB/s (0.2%)
NEON LDP/STP copy pldl1keep (32 bytes step) : 2036.0 MB/s (0.2%)
NEON LDP/STP copy pldl1keep (64 bytes step) : 2049.7 MB/s (0.3%)
NEON LD1/ST1 copy : 1911.5 MB/s (0.3%)
NEON STP fill : 7732.4 MB/s (0.4%)
NEON STNP fill : 2621.4 MB/s (0.6%)
ARM LDP/STP copy : 1959.8 MB/s (0.2%)
ARM STP fill : 7729.9 MB/s (0.4%)
ARM STNP fill : 2626.1 MB/s (0.7%)
==========================================================================
== Framebuffer read tests.                                              ==
==                                                                      ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled.       ==
== Writes to such framebuffers are quite fast, but reads are much       ==
== slower and very sensitive to the alignment and the selection of      ==
== CPU instructions which are used for accessing memory.                ==
==                                                                      ==
== Many x86 systems allocate the framebuffer in the GPU memory,         ==
== accessible to the CPU via a relatively slow PCI-E bus. Moreover,     ==
== PCI-E is asymmetric and handles reads a lot worse than writes.       ==
==                                                                      ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer    ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
== performance improvement. For example, the xf86-video-fbturbo DDX     ==
== uses this trick.                                                     ==
==========================================================================
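As a rough way to check whether a platform clears the 100-300 MB/s bar mentioned above, one can map the framebuffer and time a plain copy out of it. The sketch below is only an illustration and is not how tinymembench implements the tests whose results follow (those use hand-written NEON/ARM copy loops); the /dev/fb0 path, the assumption that at least 8 MiB of framebuffer is mapped, and the use of memcpy are all assumptions.

    /* Rough sketch (not tinymembench's code): mmap the framebuffer and time
     * a plain memcpy out of it to estimate uncached read bandwidth. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t len = 8 << 20;            /* assumes >= 8 MiB of framebuffer */
        int fd = open("/dev/fb0", O_RDONLY);   /* typical Linux framebuffer device */
        if (fd < 0) { perror("open"); return 1; }

        void *fb = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        char *dst = malloc(len);
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        memcpy(dst, fb, len);                  /* read from write-combined memory */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("framebuffer read: %.1f MB/s\n", len / s / 1e6);

        free(dst);
        munmap(fb, len);
        close(fd);
        return 0;
    }
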
NEON LDP/STP copy (from framebuffer) : 306.6 MB/s (0.1%)
NEON LDP/STP 2-pass copy (from framebuffer) : 360.6 MB/s (0.1%)
NEON LD1/ST1 copy (from framebuffer) : 96.5 MB/s
NEON LD1/ST1 2-pass copy (from framebuffer) : 101.5 MB/s
ARM LDP/STP copy (from framebuffer) : 176.7 MB/s (0.1%)
ARM LDP/STP 2-pass copy (from framebuffer) : 195.0 MB/s (0.2%)
==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger the buffer, the more significant the  ==
== relative contributions of TLB, L1/L2 cache misses and SDRAM          ==
== accesses. For extremely large buffer sizes we expect to see a page   ==
== table walk with several requests to SDRAM for almost every memory    ==
== access (though 64MiB is not nearly large enough to experience this   ==
== effect to its fullest).                                              ==
==                                                                      ==
== Note 1: All the numbers represent extra time that needs to be added  ==
==         to the L1 cache latency. The cycle timings for the L1 cache  ==
==         latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that two independent memory accesses  ==
==         are performed simultaneously. If the memory subsystem can't  ==
==         handle multiple outstanding requests, dual random read shows ==
==         the same timings as two single reads performed one after     ==
==         another.                                                     ==
==========================================================================
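The latency numbers below come from this kind of pointer-chasing measurement. Here is a simplified sketch of the idea: it is my own illustration, not the tinymembench source, and the buffer size, iteration count and helper names are arbitrary; unlike the real test it also reports total latency rather than the extra time over L1. A random cyclic chain makes every load depend on the previous one, and the "dual" variant walks a second, independent chain so that two cache misses can be outstanding at once.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Build a random single-cycle permutation (Sattolo's algorithm) so that
     * repeatedly doing i = chain[i] visits every slot in random order. */
    static void build_chain(size_t *chain, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            chain[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand() % i;             /* j < i keeps it one big cycle */
            size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
        }
    }

    /* Walk one chain ("single random read") or two independent chains at once
     * ("dual random read"); returns nanoseconds per loop iteration. */
    static double chase(const size_t *c1, const size_t *c2, long iters)
    {
        size_t a = 0, b = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++) {
            a = c1[a];                         /* each load depends on the last  */
            if (c2)
                b = c2[b];                     /* independent chain, may overlap */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (a + b == (size_t)-1)               /* keep results live */
            puts("");
        return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / iters;
    }

    int main(void)
    {
        size_t n = (1 << 20) / sizeof(size_t); /* e.g. a 1 MiB buffer */
        size_t *c1 = malloc(n * sizeof *c1), *c2 = malloc(n * sizeof *c2);
        build_chain(c1, n);
        build_chain(c2, n);
        printf("single: %.1f ns/iter, dual: %.1f ns/iter\n",
               chase(c1, NULL, 10000000L), chase(c1, c2, 10000000L));
        free(c1);
        free(c2);
        return 0;
    }
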
block size : single random read / dual random read
      1024 :   0.0 ns /   0.0 ns
      2048 :   0.0 ns /   0.0 ns
      4096 :   0.0 ns /   0.0 ns
      8192 :   0.0 ns /   0.0 ns
     16384 :   0.0 ns /   0.0 ns
     32768 :   0.1 ns /   0.1 ns
     65536 :   4.9 ns /   8.4 ns
    131072 :   7.6 ns /  11.7 ns
    262144 :  10.3 ns /  15.0 ns
    524288 :  55.8 ns /  87.9 ns
   1048576 :  83.5 ns / 118.6 ns
   2097152 :  98.8 ns / 131.6 ns
   4194304 : 111.1 ns / 142.5 ns
   8388608 : 118.5 ns / 148.9 ns
  16777216 : 123.2 ns / 153.1 ns
  33554432 : 126.8 ns / 156.4 ns
  67108864 : 138.8 ns / 177.5 ns