ODROID-HC1 tinymembench, Ubuntu Xenial, 4.9.38

Aug 16th, 2017
root@odroid:/usr/local/src/tinymembench# taskset -c 4-7 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

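For context: on the ODROID-HC1's Exynos 5422 big.LITTLE SoC, CPUs 0-3 are the Cortex-A7 (little) cluster and CPUs 4-7 are the Cortex-A15 (big) cluster, so taskset -c 4-7 pins the run to the A15 cores. The same pinning can also be done from inside a program; a minimal Linux-specific sketch (not part of tinymembench):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 4; cpu <= 7; cpu++)   /* the Cortex-A15 cluster */
        CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPUs 4-7\n");
    return 0;
}
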
==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding read and written bytes             ==
==         together would give twice these numbers)                     ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

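For reference, the 2-pass copy of Note 3 stages every chunk through a buffer small enough to stay resident in L1 cache. A minimal C sketch (tinymembench's real routines are hand-tuned; the 4 KB buffer size here is an arbitrary illustrative choice):

#include <stddef.h>
#include <string.h>

#define TMP_SIZE 4096  /* small enough to remain resident in L1 cache */

/* 2-pass copy: source -> L1-resident buffer, then buffer -> destination */
static void copy_2pass(char *dst, const char *src, size_t n)
{
    static char tmp[TMP_SIZE];
    while (n > 0) {
        size_t chunk = n < TMP_SIZE ? n : TMP_SIZE;
        memcpy(tmp, src, chunk);   /* pass 1: fetch into the L1 buffer */
        memcpy(dst, tmp, chunk);   /* pass 2: write out to destination */
        src += chunk;
        dst += chunk;
        n   -= chunk;
    }
}
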
 C copy backwards                                     :   1172.9 MB/s (0.7%)
 C copy backwards (32 byte blocks)                    :   1155.9 MB/s
 C copy backwards (64 byte blocks)                    :   2331.3 MB/s (2.3%)
 C copy                                               :   2502.8 MB/s (2.6%)
 C copy prefetched (32 bytes step)                    :   2800.5 MB/s
 C copy prefetched (64 bytes step)                    :   2876.1 MB/s (2.9%)
 C 2-pass copy                                        :   1347.3 MB/s
 C 2-pass copy prefetched (32 bytes step)             :   1614.7 MB/s (1.3%)
 C 2-pass copy prefetched (64 bytes step)             :   1631.4 MB/s
 C fill                                               :   4930.6 MB/s (1.4%)
 C fill (shuffle within 16 byte blocks)               :   1832.8 MB/s
 C fill (shuffle within 32 byte blocks)               :   1832.9 MB/s (0.8%)
 C fill (shuffle within 64 byte blocks)               :   1936.2 MB/s (0.9%)
 ---
 standard memcpy                                      :   2303.7 MB/s (3.7%)
 standard memset                                      :   4929.3 MB/s (1.7%)
 ---
 NEON read                                            :   3379.4 MB/s
 NEON read prefetched (32 bytes step)                 :   4283.2 MB/s (1.1%)
 NEON read prefetched (64 bytes step)                 :   4295.5 MB/s (1.0%)
 NEON read 2 data streams                             :   3440.3 MB/s
 NEON read 2 data streams prefetched (32 bytes step)  :   4419.4 MB/s (1.6%)
 NEON read 2 data streams prefetched (64 bytes step)  :   4426.1 MB/s (0.9%)
 NEON copy                                            :   2635.6 MB/s (2.1%)
 NEON copy prefetched (32 bytes step)                 :   2925.9 MB/s
 NEON copy prefetched (64 bytes step)                 :   2917.7 MB/s (2.5%)
 NEON unrolled copy                                   :   2266.2 MB/s
 NEON unrolled copy prefetched (32 bytes step)        :   3245.2 MB/s (2.2%)
 NEON unrolled copy prefetched (64 bytes step)        :   3267.7 MB/s (3.0%)
 NEON copy backwards                                  :   1222.9 MB/s
 NEON copy backwards prefetched (32 bytes step)       :   1431.0 MB/s (0.9%)
 NEON copy backwards prefetched (64 bytes step)       :   1430.2 MB/s
 NEON 2-pass copy                                     :   2094.1 MB/s (1.5%)
 NEON 2-pass copy prefetched (32 bytes step)          :   2277.8 MB/s (1.0%)
 NEON 2-pass copy prefetched (64 bytes step)          :   2279.7 MB/s (1.0%)
 NEON unrolled 2-pass copy                            :   1397.4 MB/s
 NEON unrolled 2-pass copy prefetched (32 bytes step) :   1734.3 MB/s (1.2%)
 NEON unrolled 2-pass copy prefetched (64 bytes step) :   1749.9 MB/s
 NEON fill                                            :   4914.0 MB/s (1.2%)
 NEON fill backwards                                  :   1842.1 MB/s
 VFP copy                                             :   2462.7 MB/s (1.8%)
 VFP 2-pass copy                                      :   1340.5 MB/s
 ARM fill (STRD)                                      :   4927.1 MB/s (1.7%)
 ARM fill (STM with 8 registers)                      :   4913.2 MB/s (1.0%)
 ARM fill (STM with 4 registers)                      :   4921.0 MB/s (1.1%)
 ARM copy prefetched (incr pld)                       :   2949.9 MB/s (2.3%)
 ARM copy prefetched (wrap pld)                       :   2780.9 MB/s
 ARM 2-pass copy prefetched (incr pld)                :   1681.3 MB/s (1.0%)
 ARM 2-pass copy prefetched (wrap pld)                :   1637.1 MB/s
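
Worth noting in the numbers above: the "prefetched (N bytes step)" variants issue a software prefetch N bytes ahead of the current position on every iteration, which is what lifts NEON read from ~3.4 GB/s to ~4.3 GB/s here. In portable C the pattern looks roughly like this (a sketch using GCC/Clang's __builtin_prefetch, not the hand-written ARM assembly tinymembench actually uses):

#include <stddef.h>
#include <stdint.h>

/* Copy that prefetches 64 bytes ahead of each read, mirroring the
 * "(64 bytes step)" variants. Prefetching past the end of src is
 * harmless: __builtin_prefetch is a hint and never faults. */
static void copy_prefetch64(uint32_t *dst, const uint32_t *src, size_t n)
{
    for (size_t i = 0; i < n; i += 16) {          /* 16 words = 64 bytes */
        __builtin_prefetch(&src[i + 16], 0, 0);   /* read hint, low locality */
        for (size_t j = 0; j < 16 && i + j < n; j++)
            dst[i + j] = src[i + j];
    }
}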

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in buffers of    ==
== different sizes. The larger the buffer, the more significant the     ==
== relative contributions of TLB, L1/L2 cache misses, and SDRAM         ==
== accesses become. For extremely large buffers we expect to see a      ==
== page table walk with several SDRAM requests for almost every         ==
== memory access (though 64MiB is not nearly large enough to            ==
== experience this effect to its fullest).                              ==
==                                                                      ==
== Note 1: All numbers represent extra time that must be added to the   ==
==         L1 cache latency. The cycle timings for L1 cache latency     ==
==         can usually be found in the processor documentation.         ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. If the memory     ==
==         subsystem cannot handle multiple outstanding requests, dual  ==
==         random read has the same timings as two single reads         ==
==         performed one after another.                                 ==
==========================================================================
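
Note 2's dual random read exercises memory-level parallelism: two independent pointer chases are interleaved so two cache misses can be in flight at once. Schematically (a simplified sketch of the access pattern, not tinymembench's code; buf is assumed to hold a random permutation of its own indices so each load misses the caches at large sizes):

#include <stddef.h>
#include <stdint.h>

/* Walk two independent dependency chains in lockstep. If the memory
 * subsystem can overlap the two outstanding misses, this runs in much
 * less than twice the time of a single chain. */
static uint32_t dual_random_read(const uint32_t *buf, size_t iters)
{
    uint32_t a = 0, b = 1;   /* two different starting indices */
    for (size_t i = 0; i < iters; i++) {
        a = buf[a];          /* chain 1: each load depends on the last */
        b = buf[b];          /* chain 2: independent of chain 1 */
    }
    return a + b;            /* keep results live so the loop isn't elided */
}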

block size : single random read / dual random read
      1024 :    0.0 ns /    0.0 ns
      2048 :    0.0 ns /    0.0 ns
      4096 :    0.0 ns /    0.0 ns
      8192 :    0.0 ns /    0.0 ns
     16384 :    0.0 ns /    0.0 ns
     32768 :    0.0 ns /    0.0 ns
     65536 :    4.4 ns /    6.5 ns
    131072 :    6.7 ns /    8.7 ns
    262144 :    9.6 ns /   11.6 ns
    524288 :   11.1 ns /   13.3 ns
   1048576 :   12.0 ns /   14.3 ns
   2097152 :   23.0 ns /   30.7 ns
   4194304 :   95.6 ns /  143.8 ns
   8388608 :  134.6 ns /  182.5 ns
  16777216 :  154.2 ns /  197.8 ns
  33554432 :  169.9 ns /  217.8 ns
  67108864 :  179.8 ns /  230.8 ns