RK3328-CC tinymembench (kernel 4.4.114, 1.4 GHz)

tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests
==
== Note 1: 1MB = 1000000 bytes
== Note 2: Results for 'copy' tests show how many bytes can be
==         copied per second (adding together read and written
==         bytes would have given numbers twice as high)
== Note 3: 2-pass copy means that we are using a small temporary buffer
==         to first fetch data into it, and only then write it to the
==         destination (source -> L1 cache, L1 cache -> destination)
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in
==         brackets
==========================================================================

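To make Note 3 above concrete, here is a minimal C sketch of a copy staged through a small, L1-sized temporary buffer, together with the MB/s accounting from Notes 1 and 2. It only illustrates the idea and is not tinymembench's actual code; the 8 KB staging buffer, the 64 MB working set and the single timed pass are arbitrary choices made for the example.

    /* twopass_sketch.c - illustration only; build with: cc -O2 twopass_sketch.c */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    static double now_s(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* 2-pass copy (Note 3): stage data through a buffer small enough to
     * stay in L1 cache, i.e. source -> L1 cache, then L1 cache -> dest. */
    static void copy_2pass(uint8_t *dst, const uint8_t *src, size_t n)
    {
        uint8_t tmp[8192];                     /* assumed to fit in the L1 d-cache */
        for (size_t off = 0; off < n; off += sizeof(tmp)) {
            size_t chunk = n - off < sizeof(tmp) ? n - off : sizeof(tmp);
            memcpy(tmp, src + off, chunk);     /* pass 1: fetch into L1 */
            memcpy(dst + off, tmp, chunk);     /* pass 2: write to destination */
        }
    }

    int main(void)
    {
        size_t n = 64 * 1000000;               /* 64 MB working set */
        uint8_t *src = malloc(n), *dst = malloc(n);
        if (!src || !dst) return 1;
        memset(src, 1, n);                     /* touch all pages before timing */
        memset(dst, 0, n);

        double t0 = now_s();
        memcpy(dst, src, n);                   /* direct, 1-pass copy */
        double t1 = now_s();
        copy_2pass(dst, src, n);
        double t2 = now_s();

        /* Notes 1 and 2: 1 MB = 1000000 bytes, and only the copied payload
         * is counted once (read and written bytes are not added together). */
        printf("1-pass: %.1f MB/s\n", n / (t1 - t0) / 1e6);
        printf("2-pass: %.1f MB/s\n", n / (t2 - t1) / 1e6);
        free(src); free(dst);
        return 0;
    }
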
C copy backwards : 1732.0 MB/s (1.1%)
C copy backwards (32 byte blocks) : 1715.0 MB/s (0.5%)
C copy backwards (64 byte blocks) : 1711.0 MB/s (0.5%)
C copy : 1730.6 MB/s (1.7%)
C copy prefetched (32 bytes step) : 1507.7 MB/s (0.2%)
C copy prefetched (64 bytes step) : 1823.1 MB/s (0.3%)
C 2-pass copy : 1977.7 MB/s (0.2%)
C 2-pass copy prefetched (32 bytes step) : 1482.3 MB/s (0.2%)
C 2-pass copy prefetched (64 bytes step) : 1438.5 MB/s (0.2%)
C fill : 7718.3 MB/s (0.4%)
C fill (shuffle within 16 byte blocks) : 7717.3 MB/s (0.4%)
C fill (shuffle within 32 byte blocks) : 7707.1 MB/s (0.4%)
C fill (shuffle within 64 byte blocks) : 7706.5 MB/s (0.4%)
---
standard memcpy : 1700.1 MB/s (0.2%)
standard memset : 7726.6 MB/s (0.4%)
---
NEON LDP/STP copy : 1949.6 MB/s (0.3%)
NEON LDP/STP copy pldl2strm (32 bytes step) : 1581.1 MB/s (0.4%)
NEON LDP/STP copy pldl2strm (64 bytes step) : 1819.7 MB/s (0.2%)
NEON LDP/STP copy pldl1keep (32 bytes step) : 2036.0 MB/s (0.2%)
NEON LDP/STP copy pldl1keep (64 bytes step) : 2049.7 MB/s (0.3%)
NEON LD1/ST1 copy : 1911.5 MB/s (0.3%)
NEON STP fill : 7732.4 MB/s (0.4%)
NEON STNP fill : 2621.4 MB/s (0.6%)
ARM LDP/STP copy : 1959.8 MB/s (0.2%)
ARM STP fill : 7729.9 MB/s (0.4%)
ARM STNP fill : 2626.1 MB/s (0.7%)

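The "ARM LDP/STP copy" and "ARM STP fill" rows above refer to copies and fills built from paired 64-bit load/store instructions (the NEON variants use the SIMD register file instead). A rough sketch of such an inner loop, written with GCC/Clang inline assembly for AArch64, is shown below; it is an illustration of the access pattern only and is not taken from tinymembench.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Copy n bytes (n a non-zero multiple of 32) using paired 64-bit
     * loads/stores, roughly the pattern behind the "ARM LDP/STP copy"
     * row above.  AArch64 only; illustration, not benchmark code. */
    static void ldp_stp_copy(void *dst, const void *src, size_t n)
    {
        uint64_t *d = dst;
        const uint64_t *s = src;
        __asm__ volatile(
            "1:\n"
            "ldp x4, x5, [%[s]], #16\n"   /* load two pairs of 64-bit words */
            "ldp x6, x7, [%[s]], #16\n"
            "stp x4, x5, [%[d]], #16\n"   /* store them to the destination  */
            "stp x6, x7, [%[d]], #16\n"
            "subs %[n], %[n], #32\n"
            "b.gt 1b\n"
            : [d] "+r"(d), [s] "+r"(s), [n] "+r"(n)
            :
            : "x4", "x5", "x6", "x7", "cc", "memory");
    }

    int main(void)
    {
        size_t n = 32 * 1000000;          /* 32 MB, a multiple of 32 */
        uint8_t *src = malloc(n), *dst = malloc(n);
        if (!src || !dst) return 1;
        memset(src, 0x5a, n);
        ldp_stp_copy(dst, src, n);
        printf("copied %zu bytes, dst[0]=0x%x\n", n, dst[0]);
        free(src); free(dst);
        return 0;
    }
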
==========================================================================
== Framebuffer read tests.
==
== Many ARM devices use a part of the system memory as the framebuffer,
== typically mapped as uncached but with write-combining enabled.
== Writes to such framebuffers are quite fast, but reads are much
== slower and very sensitive to the alignment and to the choice of
== CPU instructions used for accessing memory.
==
== Many x86 systems allocate the framebuffer in GPU memory, which is
== accessible to the CPU via a relatively slow PCI-E bus. Moreover,
== PCI-E is asymmetric and handles reads a lot worse than writes.
==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s
== or preferably >300 MB/s), then using the shadow framebuffer layer
== is not necessary in Xorg DDX drivers, resulting in a nice overall
== performance improvement. For example, the xf86-video-fbturbo DDX
== uses this trick.
==========================================================================

NEON LDP/STP copy (from framebuffer) : 306.6 MB/s (0.1%)
NEON LDP/STP 2-pass copy (from framebuffer) : 360.6 MB/s (0.1%)
NEON LD1/ST1 copy (from framebuffer) : 96.5 MB/s
NEON LD1/ST1 2-pass copy (from framebuffer) : 101.5 MB/s
ARM LDP/STP copy (from framebuffer) : 176.7 MB/s (0.1%)
ARM LDP/STP 2-pass copy (from framebuffer) : 195.0 MB/s (0.2%)

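A rough way to check the kind of figure this section cares about (whether uncached framebuffer reads reach the 100-300 MB/s range mentioned above) is to map the Linux fbdev device and time a plain read pass over it. The sketch below assumes a /dev/fb0 device and uses ordinary sequential 64-bit loads; it is not how tinymembench implements the tests above, which run its NEON/ARM copy routines against the framebuffer mapping.

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDONLY);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_fix_screeninfo fix;
        if (ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) { perror("ioctl"); return 1; }

        /* Map the framebuffer read-only; on typical ARM fbdev drivers this
         * memory is uncached/write-combined, so reads are the slow path. */
        void *map = mmap(NULL, fix.smem_len, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }
        volatile uint64_t *fb = map;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        uint64_t sum = 0;
        size_t words = fix.smem_len / sizeof(uint64_t);
        for (size_t i = 0; i < words; i++)
            sum += fb[i];                  /* plain sequential 64-bit loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("read %u bytes in %.3f s -> %.1f MB/s (sum %llu)\n",
               fix.smem_len, s, fix.smem_len / s / 1e6, (unsigned long long)sum);
        munmap(map, fix.smem_len);
        close(fd);
        return 0;
    }
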
==========================================================================
== Memory latency test
==
== Average time is measured for random memory accesses in buffers of
== different sizes. The larger the buffer, the more significant the
== relative contributions of TLB, L1/L2 cache misses and SDRAM accesses
== become. For extremely large buffer sizes we expect to see a page
== table walk with several requests to SDRAM for almost every memory
== access (though 64MiB is not nearly large enough to experience this
== effect to its fullest).
==
== Note 1: All the numbers represent extra time which needs to be added
==         to the L1 cache latency. The cycle timings for L1 cache
==         latency can usually be found in the processor documentation.
== Note 2: Dual random read means that we are simultaneously performing
==         two independent memory accesses at a time. If the memory
==         subsystem can't handle multiple outstanding requests, dual
==         random read has the same timings as two single reads
==         performed one after another.
==========================================================================

block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.1 ns / 0.1 ns
65536 : 4.9 ns / 8.4 ns
131072 : 7.6 ns / 11.7 ns
262144 : 10.3 ns / 15.0 ns
524288 : 55.8 ns / 87.9 ns
1048576 : 83.5 ns / 118.6 ns
2097152 : 98.8 ns / 131.6 ns
4194304 : 111.1 ns / 142.5 ns
8388608 : 118.5 ns / 148.9 ns
16777216 : 123.2 ns / 153.1 ns
33554432 : 126.8 ns / 156.4 ns
67108864 : 138.8 ns / 177.5 ns
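
To make the "single random read / dual random read" distinction concrete, here is a pointer-chasing sketch in C: one dependent chain for the single case, and two independent chains issued together for the dual case. It only illustrates the access patterns; unlike the table above it does not subtract the L1 cache baseline, and its per-access/per-pair reporting is not necessarily the same accounting tinymembench uses.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double now_s(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        size_t n = 64 * 1024 * 1024 / sizeof(uint32_t);   /* 64 MiB buffer */
        uint32_t *next = malloc(n * sizeof(uint32_t));
        if (!next) return 1;

        /* Build a single random cycle over the buffer (Sattolo's algorithm),
         * so each load address depends on the result of the previous load. */
        for (size_t i = 0; i < n; i++) next[i] = i;
        srand(1);
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            uint32_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        size_t iters = 10 * 1000 * 1000;

        /* Single random read: one dependent chain, so every access has to
         * wait for the previous one to finish. */
        uint32_t p = 0;
        double t0 = now_s();
        for (size_t i = 0; i < iters; i++) p = next[p];
        double single_ns = (now_s() - t0) / iters * 1e9;

        /* Dual random read: two independent chains in flight at once; if the
         * memory subsystem can handle multiple outstanding misses, this takes
         * less than twice the single-chain time. */
        uint32_t a = 0, b = next[0];
        t0 = now_s();
        for (size_t i = 0; i < iters; i++) { a = next[a]; b = next[b]; }
        double dual_ns = (now_s() - t0) / iters * 1e9;    /* ns per pair */

        printf("single: %.1f ns/access, dual: %.1f ns per pair (p=%u a=%u b=%u)\n",
               single_ns, dual_ns, p, a, b);
        free(next);
        return 0;
    }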