A33 Q8 tablet
2c789849709d837b4bd114c11ed2d9bdc65afbc6
CFLAGS='-O3 -mcpu=cortex-a7'

tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and written ==
== bytes would give numbers twice as high) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================

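Note 3 above describes the "2-pass copy" variants. As a rough illustration only (this is not tinymembench's actual implementation; the 4 KB buffer size and all names are assumptions), the difference between a plain copy and a 2-pass copy in C looks roughly like this:

/* Sketch: plain copy vs. "2-pass copy" through a small temporary buffer
 * that is assumed to stay resident in the L1 cache. */
#include <stdlib.h>
#include <string.h>

#define TMPBUF_SIZE 4096               /* assumption: small enough for L1 */

/* single-pass copy: source -> destination in one sweep */
static void copy_1pass(char *dst, const char *src, size_t size)
{
    memcpy(dst, src, size);
}

/* 2-pass copy: source -> L1-resident buffer, then buffer -> destination */
static void copy_2pass(char *dst, const char *src, size_t size, char *tmp)
{
    while (size > 0) {
        size_t chunk = size < TMPBUF_SIZE ? size : TMPBUF_SIZE;
        memcpy(tmp, src, chunk);       /* pass 1: fetch into the buffer    */
        memcpy(dst, tmp, chunk);       /* pass 2: write to the destination */
        src += chunk;
        dst += chunk;
        size -= chunk;
    }
}

int main(void)
{
    size_t size = 16 * 1024 * 1024;    /* 16 MiB test buffers (assumed)    */
    char *src = malloc(size), *dst = malloc(size), *tmp = malloc(TMPBUF_SIZE);
    if (!src || !dst || !tmp)
        return 1;
    memset(src, 0x55, size);
    copy_1pass(dst, src, size);
    copy_2pass(dst, src, size, tmp);
    free(src); free(dst); free(tmp);
    return 0;
}

Either way each byte crosses the memory bus twice (read once, written once); the 2-pass variant just does it in two separate sweeps, with the intermediate data held in L1.
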
C copy backwards : 201.7 MB/s (0.9%)
C copy backwards (32 byte blocks) : 601.4 MB/s (5.8%)
C copy backwards (64 byte blocks) : 627.4 MB/s (0.1%)
C copy : 626.7 MB/s
C copy prefetched (32 bytes step) : 629.6 MB/s
C copy prefetched (64 bytes step) : 655.2 MB/s
C 2-pass copy : 592.2 MB/s
C 2-pass copy prefetched (32 bytes step) : 592.0 MB/s
C 2-pass copy prefetched (64 bytes step) : 605.1 MB/s
C fill : 1762.0 MB/s
C fill (shuffle within 16 byte blocks) : 1762.3 MB/s
C fill (shuffle within 32 byte blocks) : 272.7 MB/s
C fill (shuffle within 64 byte blocks) : 276.6 MB/s
---
standard memcpy : 646.3 MB/s
standard memset : 1762.4 MB/s
---
NEON read : 1023.3 MB/s
NEON read prefetched (32 bytes step) : 1144.7 MB/s
NEON read prefetched (64 bytes step) : 1164.9 MB/s
NEON read 2 data streams : 304.5 MB/s
NEON read 2 data streams prefetched (32 bytes step) : 572.2 MB/s
NEON read 2 data streams prefetched (64 bytes step) : 597.8 MB/s
NEON copy : 628.0 MB/s
NEON copy prefetched (32 bytes step) : 630.3 MB/s
NEON copy prefetched (64 bytes step) : 679.2 MB/s
NEON unrolled copy : 620.7 MB/s
NEON unrolled copy prefetched (32 bytes step) : 641.0 MB/s (0.1%)
NEON unrolled copy prefetched (64 bytes step) : 668.6 MB/s
NEON copy backwards : 601.0 MB/s
NEON copy backwards prefetched (32 bytes step) : 622.5 MB/s
NEON copy backwards prefetched (64 bytes step) : 670.4 MB/s
NEON 2-pass copy : 597.5 MB/s
NEON 2-pass copy prefetched (32 bytes step) : 643.7 MB/s
NEON 2-pass copy prefetched (64 bytes step) : 658.8 MB/s
NEON unrolled 2-pass copy : 584.7 MB/s
NEON unrolled 2-pass copy prefetched (32 bytes step) : 578.1 MB/s
NEON unrolled 2-pass copy prefetched (64 bytes step) : 598.7 MB/s
NEON fill : 1762.7 MB/s
NEON fill backwards : 1761.9 MB/s
VFP copy : 622.5 MB/s (0.8%)
VFP 2-pass copy : 586.6 MB/s
ARM fill (STRD) : 1762.5 MB/s
ARM fill (STM with 8 registers) : 1762.4 MB/s
ARM fill (STM with 4 registers) : 1762.2 MB/s
ARM copy prefetched (incr pld) : 670.5 MB/s
ARM copy prefetched (wrap pld) : 619.5 MB/s
ARM 2-pass copy prefetched (incr pld) : 634.3 MB/s
ARM 2-pass copy prefetched (wrap pld) : 605.6 MB/s

==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible to the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================

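The shadow framebuffer trick mentioned above amounts to keeping a second copy of the screen in ordinary cached memory, so that read-modify-write operations never have to read the uncached framebuffer mapping back. A rough sketch only (not code from tinymembench or xf86-video-fbturbo; the /dev/fb0 path, the 8 MB size and all names are assumptions):

/* Sketch: shadow framebuffer vs. reading back an uncached fb mapping. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t fb_size = 8 * 1024 * 1024;          /* assumed framebuffer size */
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    /* uncached, write-combined mapping of the real framebuffer */
    char *fb = mmap(NULL, fb_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED)
        return 1;

    /* shadow framebuffer in ordinary cached system memory */
    char *shadow = malloc(fb_size);
    if (!shadow)
        return 1;

    /* writes are cheap even on the uncached mapping */
    memset(shadow, 0, fb_size);
    memcpy(fb, shadow, fb_size);

    /* read-modify-write: read from the cached shadow copy... */
    for (size_t i = 0; i < fb_size; i++)
        shadow[i] ^= 0xff;
    memcpy(fb, shadow, fb_size);
    /* ...instead of reading fb[] back, which is the slow path measured
     * by the "(from framebuffer)" results below */

    munmap(fb, fb_size);
    free(shadow);
    close(fd);
    return 0;
}

The numbers below are what decide whether that extra copy is worth keeping on a given device.
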
NEON read (from framebuffer) : 44.2 MB/s
NEON copy (from framebuffer) : 42.3 MB/s (0.1%)
NEON 2-pass copy (from framebuffer) : 42.8 MB/s
NEON unrolled copy (from framebuffer) : 42.9 MB/s
NEON 2-pass unrolled copy (from framebuffer) : 42.4 MB/s
VFP copy (from framebuffer) : 247.7 MB/s
VFP 2-pass copy (from framebuffer) : 233.5 MB/s
ARM copy (from framebuffer) : 151.0 MB/s
ARM 2-pass copy (from framebuffer) : 144.1 MB/s

==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in buffers ==
== of different sizes. The larger the buffer, the more significant ==
== the relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses become. For extremely large buffer sizes we expect to see ==
== a page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to show ==
== this effect in full). ==
== ==
== Note 1: All the numbers represent extra time, which needs to be ==
== added to the L1 cache latency. The cycle timings for L1 cache ==
== latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are performing two ==
== independent memory accesses at a time. If the memory ==
== subsystem can't handle multiple outstanding requests, dual ==
== random read has the same timings as two single reads ==
== performed one after another. ==
==========================================================================

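For illustration, a minimal sketch (not tinymembench's actual code; the buffer size and all names are assumptions) of the difference between single and dual random read: a single dependent pointer chase allows only one cache miss to be outstanding at a time, while two interleaved chases let the memory subsystem overlap two misses if it is able to.

/* Sketch: single vs. dual random read via pointer chasing. */
#include <stdint.h>
#include <stdlib.h>

/* Build a random single-cycle permutation (Sattolo's algorithm) so that
 * every load depends on the result of the previous one. */
static void build_chain(uint32_t *buf, uint32_t n)
{
    for (uint32_t i = 0; i < n; i++)
        buf[i] = i;
    for (uint32_t i = n - 1; i > 0; i--) {
        uint32_t j = (uint32_t)rand() % i;
        uint32_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }
}

/* Single random read: one dependent chain, one outstanding miss. */
static uint32_t chase_single(const uint32_t *buf, uint32_t steps)
{
    uint32_t p = 0;
    while (steps--)
        p = buf[p];
    return p;
}

/* Dual random read: two chases with no dependency between them, so two
 * misses can be in flight at the same time. */
static uint32_t chase_dual(const uint32_t *buf, uint32_t steps)
{
    uint32_t p = 0, q = 1;
    while (steps--) {
        p = buf[p];
        q = buf[q];
    }
    return p ^ q;
}

int main(void)
{
    uint32_t n = 64 * 1024 * 1024 / sizeof(uint32_t);   /* 64 MiB buffer */
    uint32_t *buf = malloc((size_t)n * sizeof(uint32_t));
    if (!buf)
        return 1;
    build_chain(buf, n);
    volatile uint32_t sink = chase_single(buf, n) ^ chase_dual(buf, n);
    (void)sink;
    free(buf);
    return 0;
}

If the dual column below were exactly twice the single column, there would be no overlap at all; anything less than 2x means the memory subsystem handles the two accesses at least partly in parallel.
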
block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 6.2 ns / 10.8 ns
131072 : 9.6 ns / 15.1 ns
262144 : 11.4 ns / 16.8 ns
524288 : 14.3 ns / 20.3 ns
1048576 : 123.0 ns / 193.8 ns
2097152 : 186.6 ns / 258.9 ns
4194304 : 219.4 ns / 283.5 ns
8388608 : 239.2 ns / 297.6 ns
16777216 : 255.7 ns / 316.0 ns
33554432 : 274.1 ns / 345.3 ns
67108864 : 302.8 ns / 400.2 ns