RK3188, no FB
tinymembench v0.3.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
==         copied per second (adding together read and written ==
==         bytes would give numbers twice as high) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the ==
==         destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
==         brackets ==
==========================================================================
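
To make Note 3 above concrete, here is a minimal C sketch of a 2-pass copy, assuming a scratch buffer small enough to stay resident in L1 cache; the CHUNK size and function name are illustrative assumptions, not tinymembench's actual implementation.

/* 2-pass copy sketch (see Note 3): stream source -> small scratch buffer
 * that stays in L1 cache, then scratch buffer -> destination.
 * CHUNK is an assumed size, not the value tinymembench uses. */
#include <stddef.h>
#include <string.h>

#define CHUNK 4096

static void copy_2pass(void *dst, const void *src, size_t n)
{
    static unsigned char scratch[CHUNK];
    unsigned char *d = dst;
    const unsigned char *s = src;

    while (n > 0) {
        size_t len = n < CHUNK ? n : CHUNK;
        memcpy(scratch, s, len);  /* pass 1: source -> L1 cache      */
        memcpy(d, scratch, len);  /* pass 2: L1 cache -> destination */
        s += len;
        d += len;
        n -= len;
    }
}
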
C copy backwards : 382.2 MB/s (1.2%)
C copy : 533.1 MB/s (1.9%)
C copy prefetched (32 bytes step) : 401.3 MB/s (1.5%)
C copy prefetched (64 bytes step) : 401.3 MB/s (1.4%)
C 2-pass copy : 443.4 MB/s (1.4%)
C 2-pass copy prefetched (32 bytes step) : 417.7 MB/s (1.4%)
C 2-pass copy prefetched (64 bytes step) : 417.5 MB/s (1.2%)
C fill : 1373.1 MB/s (2.3%)
---
standard memcpy : 563.0 MB/s
standard memset : 1371.7 MB/s (1.4%)
---
NEON read : 1120.1 MB/s (2.2%)
NEON read prefetched (32 bytes step) : 1282.7 MB/s (1.8%)
NEON read prefetched (64 bytes step) : 1264.9 MB/s
NEON read 2 data streams : 1198.8 MB/s (1.4%)
NEON read 2 data streams prefetched (32 bytes step) : 1187.2 MB/s (1.7%)
NEON read 2 data streams prefetched (64 bytes step) : 1261.2 MB/s (1.8%)
NEON copy : 522.1 MB/s (6.5%)
NEON copy prefetched (32 bytes step) : 576.8 MB/s (1.8%)
NEON copy prefetched (64 bytes step) : 585.8 MB/s (1.8%)
NEON unrolled copy : 543.0 MB/s (1.3%)
NEON unrolled copy prefetched (32 bytes step) : 573.8 MB/s (1.7%)
NEON unrolled copy prefetched (64 bytes step) : 589.2 MB/s (1.8%)
NEON copy backwards : 267.6 MB/s (0.6%)
NEON copy backwards prefetched (32 bytes step) : 450.2 MB/s (1.0%)
NEON copy backwards prefetched (64 bytes step) : 560.6 MB/s (2.9%)
NEON 2-pass copy : 564.8 MB/s (1.4%)
NEON 2-pass copy prefetched (32 bytes step) : 612.8 MB/s (1.3%)
NEON 2-pass copy prefetched (64 bytes step) : 633.9 MB/s (1.8%)
NEON unrolled 2-pass copy : 551.0 MB/s (1.3%)
NEON unrolled 2-pass copy prefetched (32 bytes step) : 609.9 MB/s (1.5%)
NEON unrolled 2-pass copy prefetched (64 bytes step) : 638.5 MB/s (1.7%)
NEON fill : 1370.6 MB/s (1.0%)
NEON fill backwards : 1370.4 MB/s (1.6%)
VFP copy : 541.8 MB/s (1.5%)
VFP 2-pass copy : 567.6 MB/s (1.3%)
ARM fill (STRD) : 1372.1 MB/s (2.0%)
ARM fill (STM with 8 registers) : 1370.8 MB/s (0.8%)
ARM fill (STM with 4 registers) : 1371.3 MB/s (1.5%)
ARM copy prefetched (incr pld) : 570.0 MB/s (1.9%)
ARM copy prefetched (wrap pld) : 569.3 MB/s (1.8%)
ARM 2-pass copy prefetched (incr pld) : 620.8 MB/s (1.9%)
ARM 2-pass copy prefetched (wrap pld) : 614.8 MB/s (1.4%)
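
For reference, the "prefetched (N bytes step)" and pld variants above issue software prefetch hints some distance ahead of the loads. A rough C sketch using GCC's __builtin_prefetch (which compiles to ARM pld) follows; the 64-byte step and 256-byte prefetch distance are illustrative assumptions, not the benchmark's tuned hand-written loops.

/* Prefetched copy sketch: hint upcoming cache lines into the cache ahead
 * of the loads.  STEP and AHEAD are illustrative values; pld is only a
 * hint, so prefetching slightly past the end of the buffer is harmless. */
#include <stddef.h>
#include <stdint.h>

#define STEP  64    /* assumed prefetch stride (one cache line)     */
#define AHEAD 256   /* assumed distance ahead of the read pointer   */

static void copy_prefetched(uint8_t *dst, const uint8_t *src, size_t n)
{
    size_t i = 0;
    for (; i + STEP <= n; i += STEP) {
        __builtin_prefetch(src + i + AHEAD, 0, 0);  /* read, streaming */
        for (size_t j = 0; j < STEP; j++)
            dst[i + j] = src[i + j];
    }
    for (; i < n; i++)  /* copy the remaining tail bytes */
        dst[i] = src[i];
}
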
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in buffers of ==
== different sizes. The larger the buffer, the more significant the ==
== relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses become. For extremely large buffer sizes we expect to see ==
== a page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to ==
== experience this effect to its fullest). ==
== ==
== Note 1: All the numbers represent extra time, which needs to be ==
==         added to the L1 cache latency. The cycle timings for L1 ==
==         cache latency can usually be found in the processor ==
==         documentation. ==
== Note 2: Dual random read means that two independent memory accesses ==
==         are performed at a time. If the memory subsystem can't ==
==         handle multiple outstanding requests, dual random read has ==
==         the same timings as two single reads performed one after ==
==         another. ==
==========================================================================
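
Before the results, a minimal C sketch of the measurement idea described in the notes above: the buffer holds a single-cycle random permutation so every load depends on the previous one (single random read), and the dual variant walks two independent chains so the memory system can overlap the misses. The function names and the Sattolo-shuffle setup are assumptions for illustration, not tinymembench's exact code.

/* Build a single-cycle random permutation (Sattolo's algorithm) so that
 * each load depends on the previous one and the walk covers the buffer. */
#include <stddef.h>
#include <stdlib.h>

static void build_chain(size_t *chain, size_t n)
{
    for (size_t i = 0; i < n; i++)
        chain[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;            /* j < i: one big cycle */
        size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }
}

/* Single random read: one dependent chain.  Dual random read: two
 * independent chains per iteration; if the memory subsystem supports
 * multiple outstanding misses the two loads overlap, otherwise the time
 * is roughly the sum of two single reads. */
static size_t chase_dual(const size_t *chain_a, const size_t *chain_b,
                         size_t iters)
{
    size_t a = 0, b = 0;
    while (iters--) {
        a = chain_a[a];   /* cache/TLB miss #1                        */
        b = chain_b[b];   /* independent miss #2, can overlap with #1 */
    }
    return a ^ b;         /* return a result so the loop isn't elided */
}
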
block size : single random read / dual random read
      1024 :   0.0 ns /   0.0 ns
      2048 :   0.0 ns /   0.0 ns
      4096 :   0.0 ns /   0.0 ns
      8192 :   0.0 ns /   0.0 ns
     16384 :   0.0 ns /   0.0 ns
     32768 :   0.0 ns /   0.0 ns
     65536 :   8.3 ns /  12.9 ns
    131072 :  12.2 ns /  16.6 ns
    262144 :  16.9 ns /  20.8 ns
    524288 :  24.1 ns /  29.8 ns
   1048576 :  95.6 ns / 158.9 ns
   2097152 : 131.3 ns / 219.8 ns
   4194304 : 149.9 ns / 250.0 ns
   8388608 : 309.9 ns / 490.1 ns
  16777216 : 331.3 ns / 515.0 ns
  33554432 : 356.0 ns / 559.7 ns
  67108864 : 398.5 ns / 631.7 ns