Orange Pi One Plus (Allwinner H6)

tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and written          ==
==         bytes would have given numbers twice as high)                ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

 C copy backwards                                     :   1597.0 MB/s (16.0%)
 C copy backwards (32 byte blocks)                    :   1635.7 MB/s (1.3%)
 C copy backwards (64 byte blocks)                    :   1628.8 MB/s (1.1%)
 C copy                                               :   1616.4 MB/s (0.6%)
 C copy prefetched (32 bytes step)                    :   1220.3 MB/s
 C copy prefetched (64 bytes step)                    :   1215.9 MB/s
 C 2-pass copy                                        :   1471.7 MB/s
 C 2-pass copy prefetched (32 bytes step)             :   1060.6 MB/s
 C 2-pass copy prefetched (64 bytes step)             :    954.4 MB/s
 C fill                                               :   5679.0 MB/s
 C fill (shuffle within 16 byte blocks)               :   5681.3 MB/s
 C fill (shuffle within 32 byte blocks)               :   5683.7 MB/s
 C fill (shuffle within 64 byte blocks)               :   5683.3 MB/s
 ---
 standard memcpy                                      :   1652.0 MB/s
 standard memset                                      :   5685.0 MB/s
 ---
 NEON LDP/STP copy                                    :   1646.3 MB/s (0.3%)
 NEON LDP/STP copy pldl2strm (32 bytes step)          :   1114.1 MB/s (1.1%)
 NEON LDP/STP copy pldl2strm (64 bytes step)          :   1366.6 MB/s (0.2%)
 NEON LDP/STP copy pldl1keep (32 bytes step)          :   1756.2 MB/s
 NEON LDP/STP copy pldl1keep (64 bytes step)          :   1746.1 MB/s
 NEON LD1/ST1 copy                                    :   1640.8 MB/s
 NEON STP fill                                        :   5685.8 MB/s
 NEON STNP fill                                       :   2988.4 MB/s (1.1%)
 ARM LDP/STP copy                                     :   1645.1 MB/s (0.3%)
 ARM STP fill                                         :   5683.5 MB/s
 ARM STNP fill                                        :   2988.5 MB/s (0.8%)

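Note 3 above describes the 2-pass copy variants measured in this table: data is staged in a small temporary buffer that stays resident in the L1 cache and is only then written to the destination. The following is a minimal C sketch of that idea; the staging-buffer size and the function name are illustrative assumptions, not code taken from tinymembench.

#include <stddef.h>
#include <string.h>

#define TMPBUF_SIZE 4096  /* assumed staging-buffer size, small enough to stay in L1 */

static void copy_2pass(void *dst, const void *src, size_t len)
{
    unsigned char tmp[TMPBUF_SIZE];
    unsigned char *d = dst;
    const unsigned char *s = src;

    while (len > 0) {
        size_t chunk = len < TMPBUF_SIZE ? len : TMPBUF_SIZE;
        memcpy(tmp, s, chunk);   /* pass 1: source -> L1-resident temporary buffer */
        memcpy(d, tmp, chunk);   /* pass 2: temporary buffer -> destination */
        s += chunk;
        d += chunk;
        len -= chunk;
    }
}

In the results above the 2-pass variants come out somewhat slower than the plain copies (e.g. 1471.7 MB/s vs 1616.4 MB/s), since the data makes an extra trip through the cache.
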
==========================================================================
== Framebuffer read tests.                                              ==
==                                                                      ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled.       ==
== Writes to such framebuffers are quite fast, but reads are much       ==
== slower and very sensitive to the alignment and the selection of      ==
== CPU instructions which are used for accessing memory.                ==
==                                                                      ==
== Many x86 systems allocate the framebuffer in the GPU memory,         ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover,    ==
== PCI-E is asymmetric and handles reads a lot worse than writes.       ==
==                                                                      ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer    ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
== performance improvement. For example, the xf86-video-fbturbo DDX     ==
== uses this trick.                                                     ==
==========================================================================

  68.  NEON LDP/STP copy (from framebuffer)                 :    217.9 MB/s
  69.  NEON LDP/STP 2-pass copy (from framebuffer)          :    209.7 MB/s
  70.  NEON LD1/ST1 copy (from framebuffer)                 :     56.6 MB/s
  71.  NEON LD1/ST1 2-pass copy (from framebuffer)          :     56.1 MB/s
  72.  ARM LDP/STP copy (from framebuffer)                  :    110.6 MB/s
  73.  ARM LDP/STP 2-pass copy (from framebuffer)           :    108.2 MB/s
  74.  
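The banner above mentions the shadow framebuffer layer that Xorg DDX drivers fall back to when uncached reads are too slow. A rough, hypothetical C sketch of that idea follows; the struct and function names are not taken from xf86-video-fbturbo. All read-modify-write drawing happens in a cached copy kept in ordinary system RAM, and only the touched rows are copied out to the real, write-combined framebuffer, so the slow uncached reads are avoided.

#include <stdint.h>
#include <string.h>

struct shadow_fb {
    uint8_t *hw;      /* mmap()ed hardware framebuffer: fast writes, slow reads */
    uint8_t *shadow;  /* cached copy of the same pixels in system RAM           */
    size_t   pitch;   /* bytes per row                                          */
};

/* Draw into the cached shadow copy, then push only the modified rows
 * to the hardware framebuffer with a plain forward copy.              */
static void flush_rows(struct shadow_fb *fb, size_t first_row, size_t n_rows)
{
    size_t offset = first_row * fb->pitch;
    /* ... drawing operations modify fb->shadow + offset before this point ... */
    memcpy(fb->hw + offset, fb->shadow + offset, n_rows * fb->pitch);
}

The NEON LDP/STP framebuffer read result above (217.9 MB/s) is above the 100 MB/s threshold named in the banner, though below the preferred 300 MB/s.
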
==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger the buffer, the more significant      ==
== the relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we expect to see          ==
== a page table walk with several requests to SDRAM for almost every    ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers represent extra time, which needs to         ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. If                ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================

block size : single random read / dual random read
      1024 :    0.0 ns          /     0.0 ns
      2048 :    0.0 ns          /     0.0 ns
      4096 :    0.0 ns          /     0.0 ns
      8192 :    0.0 ns          /     0.0 ns
     16384 :    0.0 ns          /     0.0 ns
     32768 :    0.0 ns          /     0.0 ns
     65536 :    3.8 ns          /     6.4 ns
    131072 :    5.8 ns          /     8.9 ns
    262144 :    6.9 ns          /    10.2 ns
    524288 :    8.0 ns          /    11.1 ns
   1048576 :   74.7 ns          /   115.1 ns
   2097152 :  110.0 ns          /   148.5 ns
   4194304 :  132.3 ns          /   164.9 ns
   8388608 :  144.7 ns          /   174.3 ns
  16777216 :  152.0 ns          /   179.4 ns
  33554432 :  156.3 ns          /   182.9 ns
  67108864 :  158.5 ns          /   184.9 ns
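
For reference, Note 2 of the latency banner describes the dual random read as two independent access streams issued together. A minimal C sketch of the idea, using dependent pointer chains; this is an illustration of the concept, not the tinymembench implementation.

#include <stddef.h>

/* buf[i] holds the index of the next element to visit, forming a random
 * permutation cycle; building that permutation is omitted for brevity.  */
static size_t walk_single(const size_t *buf, size_t start, size_t steps)
{
    size_t i = start;
    while (steps--)
        i = buf[i];          /* each load depends on the previous one */
    return i;
}

static size_t walk_dual(const size_t *buf, size_t a, size_t b, size_t steps)
{
    while (steps--) {
        a = buf[a];          /* the two chains are independent, so a memory */
        b = buf[b];          /* subsystem can keep two misses in flight     */
    }
    return a ^ b;
}

In the table above the dual read times are well under twice the single read times (e.g. 115.1 ns vs 2 x 74.7 ns = 149.4 ns for the 1048576-byte buffer), which suggests the H6's memory subsystem can overlap at least two outstanding misses.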