NanoPC-T4 (Rockchip RK3399) with the Rockchip 4.4.132 kernel: tinymembench results. The first run below is pinned to CPU 5 (a Cortex-A72 "big" core), the second to CPU 3 (a Cortex-A53 "little" core).

root@nanopct4:~/tinymembench# taskset -c 5 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and written ==
== bytes would have given numbers twice as high) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
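As a rough illustration of the "2-pass copy" described in Note 3, the idea can be sketched in C like this (this is not the tinymembench code itself; the buffer size and function names are illustrative):

#include <stddef.h>
#include <string.h>

#define TMP_BUF_SIZE 2048   /* small enough to stay resident in L1 cache */

/* 2-pass copy: pull a chunk of the source into a small temporary buffer
 * (source -> L1 cache), then write that buffer out (L1 cache -> destination),
 * instead of streaming directly from source to destination in one pass. */
static void copy_2pass(char *dst, const char *src, size_t size)
{
    char tmp[TMP_BUF_SIZE];

    while (size > 0) {
        size_t chunk = size < TMP_BUF_SIZE ? size : TMP_BUF_SIZE;
        memcpy(tmp, src, chunk);   /* pass 1: fetch into the L1-resident buffer */
        memcpy(dst, tmp, chunk);   /* pass 2: write from the buffer to dst */
        src  += chunk;
        dst  += chunk;
        size -= chunk;
    }
}

In the A72 run below this pattern comes out a bit slower than the direct copy (2532.2 vs 2714.3 MB/s), since the data makes an extra trip through L1.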

C copy backwards : 2840.3 MB/s
C copy backwards (32 byte blocks) : 2837.5 MB/s
C copy backwards (64 byte blocks) : 2676.6 MB/s
C copy : 2714.3 MB/s
C copy prefetched (32 bytes step) : 2667.2 MB/s
C copy prefetched (64 bytes step) : 2678.9 MB/s
C 2-pass copy : 2532.2 MB/s
C 2-pass copy prefetched (32 bytes step) : 2440.8 MB/s
C 2-pass copy prefetched (64 bytes step) : 2449.1 MB/s
C fill : 4899.4 MB/s (0.4%)
C fill (shuffle within 16 byte blocks) : 4899.5 MB/s
C fill (shuffle within 32 byte blocks) : 4899.1 MB/s
C fill (shuffle within 64 byte blocks) : 4900.8 MB/s
---
standard memcpy : 2837.4 MB/s
standard memset : 4899.5 MB/s (0.4%)
---
NEON LDP/STP copy : 2833.5 MB/s
NEON LDP/STP copy pldl2strm (32 bytes step) : 2850.7 MB/s
NEON LDP/STP copy pldl2strm (64 bytes step) : 2851.3 MB/s
NEON LDP/STP copy pldl1keep (32 bytes step) : 2781.1 MB/s
NEON LDP/STP copy pldl1keep (64 bytes step) : 2780.1 MB/s
NEON LD1/ST1 copy : 2832.3 MB/s
NEON STP fill : 4898.3 MB/s (0.4%)
NEON STNP fill : 4867.5 MB/s (0.1%)
ARM LDP/STP copy : 2834.4 MB/s
ARM STP fill : 4898.9 MB/s (0.4%)
ARM STNP fill : 4868.8 MB/s (0.1%)

==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions which are used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================
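For context, a framebuffer read of the kind measured below can be reproduced approximately by mmap'ing the fbdev device and timing a copy out of it. This is only a sketch under assumptions: /dev/fb0 and the 8 MiB mapping size are made up here, and a real program should query the actual size via the fbdev ioctls instead.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 8 * 1024 * 1024;   /* assumed mapping size, not queried */
    int fd = open("/dev/fb0", O_RDONLY);   /* assumed fbdev node */
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    /* The framebuffer is typically uncached/write-combined, so reads here
     * go straight to memory and are much slower than cached reads. */
    void *fb = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    char *buf = malloc(size);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(buf, fb, size);                 /* timed read from the framebuffer */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("framebuffer read: %.1f MB/s\n", size / sec / 1e6);

    free(buf);
    munmap(fb, size);
    close(fd);
    return 0;
}

A plain memcpy() like this only gives a rough number; as the note above says, the exact instructions used for the reads matter a lot on such mappings, which is why the NEON and ARM variants below differ.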

NEON LDP/STP copy (from framebuffer) : 650.0 MB/s
NEON LDP/STP 2-pass copy (from framebuffer) : 583.0 MB/s
NEON LD1/ST1 copy (from framebuffer) : 684.2 MB/s
NEON LD1/ST1 2-pass copy (from framebuffer) : 627.3 MB/s
ARM LDP/STP copy (from framebuffer) : 468.9 MB/s
ARM LDP/STP 2-pass copy (from framebuffer) : 456.5 MB/s

==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in buffers ==
== of different sizes. The larger the buffer, the more significant ==
== the relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffer sizes we expect to see a ==
== page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers represent extra time, which needs to ==
== be added to the L1 cache latency. The cycle timings for L1 cache ==
== latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. If the memory ==
== subsystem can't handle multiple outstanding requests, dual ==
== random read has the same timings as two single reads performed ==
== one after another. ==
==========================================================================
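The measurement below is essentially random pointer chasing. A minimal sketch of the idea, assuming an illustrative 8 MiB buffer and iteration count (not the tinymembench implementation): the buffer is turned into one big random cycle and walked, and for the "dual" case two independent chains are walked in the same loop so the two misses can overlap if the memory subsystem allows it.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)   /* number of 8-byte slots, i.e. an 8 MiB buffer */

int main(void)
{
    uint64_t *buf = malloc(N * sizeof(uint64_t));
    if (!buf) return 1;

    /* Turn the buffer into a single random cycle (Sattolo's algorithm),
     * so every load depends on the previous one and defeats prefetching. */
    for (size_t i = 0; i < N; i++)
        buf[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = rand() % i;            /* j < i keeps it one big cycle */
        uint64_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }

    struct timespec t0, t1;
    long iters = 10L * N;

    /* Single random read: one dependent chain. */
    volatile uint64_t a = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long k = 0; k < iters; k++)
        a = buf[a];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / iters;
    printf("single random read: %.1f ns\n", ns);

    /* Dual random read: two independent chains in the same loop, so the
     * two cache misses can be outstanding at the same time. */
    volatile uint64_t x = 0, y = N / 2;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long k = 0; k < iters; k++) {
        x = buf[x];
        y = buf[y];
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    ns = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / iters;
    printf("dual random read:   %.1f ns per pair\n", ns);

    free(buf);
    return 0;
}

Unlike the numbers below, which report extra time on top of the L1 latency, this naive version includes the L1 hit time and loop overhead.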

block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 4.5 ns / 7.2 ns
131072 : 6.8 ns / 9.7 ns
262144 : 9.9 ns / 12.8 ns
524288 : 11.4 ns / 14.7 ns
1048576 : 21.2 ns / 32.7 ns
2097152 : 112.1 ns / 171.0 ns
4194304 : 155.9 ns / 211.2 ns
8388608 : 184.0 ns / 231.5 ns
16777216 : 198.8 ns / 239.6 ns
33554432 : 206.5 ns / 245.6 ns
67108864 : 214.1 ns / 260.5 ns
root@nanopct4:~/tinymembench# taskset -c 3 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and written ==
== bytes would have given numbers twice as high) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================

C copy backwards : 1375.6 MB/s (0.5%)
C copy backwards (32 byte blocks) : 1365.0 MB/s (0.6%)
C copy backwards (64 byte blocks) : 1373.1 MB/s (0.3%)
C copy : 1441.4 MB/s (0.6%)
C copy prefetched (32 bytes step) : 1041.8 MB/s
C copy prefetched (64 bytes step) : 1182.8 MB/s
C 2-pass copy : 1241.1 MB/s
C 2-pass copy prefetched (32 bytes step) : 882.9 MB/s
C 2-pass copy prefetched (64 bytes step) : 779.0 MB/s
C fill : 4787.2 MB/s
C fill (shuffle within 16 byte blocks) : 4786.6 MB/s
C fill (shuffle within 32 byte blocks) : 4786.2 MB/s
C fill (shuffle within 64 byte blocks) : 4786.0 MB/s
---
standard memcpy : 1459.5 MB/s
standard memset : 4790.2 MB/s
---
NEON LDP/STP copy : 1489.1 MB/s
NEON LDP/STP copy pldl2strm (32 bytes step) : 988.7 MB/s (0.4%)
NEON LDP/STP copy pldl2strm (64 bytes step) : 1232.9 MB/s
NEON LDP/STP copy pldl1keep (32 bytes step) : 1603.6 MB/s
NEON LDP/STP copy pldl1keep (64 bytes step) : 1602.6 MB/s
NEON LD1/ST1 copy : 1469.8 MB/s
NEON STP fill : 4790.4 MB/s
NEON STNP fill : 2682.5 MB/s (0.3%)
ARM LDP/STP copy : 1490.0 MB/s
ARM STP fill : 4790.5 MB/s
ARM STNP fill : 2701.0 MB/s (0.6%)

==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions which are used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================

NEON LDP/STP copy (from framebuffer) : 199.0 MB/s
NEON LDP/STP 2-pass copy (from framebuffer) : 184.2 MB/s
NEON LD1/ST1 copy (from framebuffer) : 49.0 MB/s
NEON LD1/ST1 2-pass copy (from framebuffer) : 47.6 MB/s
ARM LDP/STP copy (from framebuffer) : 99.5 MB/s
ARM LDP/STP 2-pass copy (from framebuffer) : 94.0 MB/s

==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in buffers ==
== of different sizes. The larger the buffer, the more significant ==
== the relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffer sizes we expect to see a ==
== page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers represent extra time, which needs to ==
== be added to the L1 cache latency. The cycle timings for L1 cache ==
== latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. If the memory ==
== subsystem can't handle multiple outstanding requests, dual ==
== random read has the same timings as two single reads performed ==
== one after another. ==
==========================================================================

block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.1 ns / 0.1 ns
65536 : 4.9 ns / 8.2 ns
131072 : 7.5 ns / 11.3 ns
262144 : 8.8 ns / 12.6 ns
524288 : 14.7 ns / 21.3 ns
1048576 : 102.4 ns / 156.3 ns
2097152 : 149.0 ns / 195.6 ns
4194304 : 175.1 ns / 214.8 ns
8388608 : 188.2 ns / 222.4 ns
16777216 : 195.5 ns / 229.7 ns
33554432 : 200.2 ns / 234.4 ns
67108864 : 204.1 ns / 237.5 ns