tinymembench on ODROID-XU4: DDR frequency comparison (guest paste, Aug 11th, 2017)

Debian Jessie, kernel 4.9.37, 825 MHz ddr_freq

root@odroidxu4:/usr/local/src/tinymembench# taskset -c 4-7 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

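The `taskset -c 4-7` invocation pins the benchmark to CPUs 4-7, which on the ODROID-XU4's Exynos 5422 are the Cortex-A15 (big) cores, so the slower Cortex-A7 cluster does not skew the numbers. The same pinning can be done from inside a program; below is a minimal C sketch (not part of tinymembench) using sched_setaffinity, assuming the CPU 4-7 = A15 mapping implied by the command above.

/* Minimal sketch (not part of tinymembench): pin the current process to
 * CPUs 4-7 before running a memory-bound workload, mirroring the
 * `taskset -c 4-7` invocation above. On the ODROID-XU4's Exynos 5422,
 * CPUs 4-7 are the Cortex-A15 (big) cores, so this keeps the benchmark
 * off the slower Cortex-A7 cluster. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    for (int cpu = 4; cpu <= 7; cpu++)
        CPU_SET(cpu, &set);

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pinned to CPUs 4-7; run the benchmark workload here\n");
    return 0;
}
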
==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and writen           ==
==         bytes would have provided twice higher numbers)              ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

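Note 2 and Note 3 above are worth unpacking: a 'copy' result counts each byte once even though it is both read and written, and a '2-pass copy' stages the data in a small, L1-resident buffer before writing it out. A rough C sketch of the two copy flavours follows; the function names and staging size are illustrative only, and tinymembench's own routines are hand-tuned C/assembly rather than this code.

/* Sketch of the difference between an ordinary copy and the "2-pass copy"
 * described in Note 3: the 2-pass variant stages data in a buffer small
 * enough to stay in L1 cache before writing it to the destination.
 * Per Note 2, the reported MB/s counts each copied byte once, not the
 * combined read + write traffic. */
#include <stddef.h>
#include <string.h>

#define STAGE_SIZE 8192                 /* small enough to stay L1-resident */

void copy_1pass(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);              /* source -> destination directly */
}

void copy_2pass(void *dst, const void *src, size_t len)
{
    static unsigned char stage[STAGE_SIZE];
    unsigned char *d = dst;
    const unsigned char *s = src;

    while (len > 0) {
        size_t chunk = len < STAGE_SIZE ? len : STAGE_SIZE;
        memcpy(stage, s, chunk);        /* pass 1: source -> L1-resident buffer */
        memcpy(d, stage, chunk);        /* pass 2: buffer -> destination */
        s += chunk;
        d += chunk;
        len -= chunk;
    }
}

In the results below this is why the 2-pass variants trail their 1-pass counterparts: every byte travels through the CPU twice.
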
 C copy backwards                                     :   1199.7 MB/s (0.2%)
 C copy backwards (32 byte blocks)                    :   1208.0 MB/s (0.3%)
 C copy backwards (64 byte blocks)                    :   2416.7 MB/s (3.2%)
 C copy                                               :   1234.1 MB/s
 C copy prefetched (32 bytes step)                    :   1471.8 MB/s (0.5%)
 C copy prefetched (64 bytes step)                    :   1470.2 MB/s (0.6%)
 C 2-pass copy                                        :   1159.8 MB/s
 C 2-pass copy prefetched (32 bytes step)             :   1425.0 MB/s (2.0%)
 C 2-pass copy prefetched (64 bytes step)             :   1428.3 MB/s (0.2%)
 C fill                                               :   4865.4 MB/s (0.4%)
 C fill (shuffle within 16 byte blocks)               :   1830.4 MB/s (0.2%)
 C fill (shuffle within 32 byte blocks)               :   1829.4 MB/s (2.7%)
 C fill (shuffle within 64 byte blocks)               :   1912.0 MB/s
 ---
 standard memcpy                                      :   2299.6 MB/s (0.6%)
 standard memset                                      :   4891.3 MB/s (1.2%)
 ---
 NEON read                                            :   3387.1 MB/s (0.1%)
 NEON read prefetched (32 bytes step)                 :   4285.0 MB/s (4.3%)
 NEON read prefetched (64 bytes step)                 :   4297.3 MB/s
 NEON read 2 data streams                             :   3497.1 MB/s (0.3%)
 NEON read 2 data streams prefetched (32 bytes step)  :   4482.2 MB/s (0.4%)
 NEON read 2 data streams prefetched (64 bytes step)  :   4488.3 MB/s
 NEON copy                                            :   2599.5 MB/s (3.1%)
 NEON copy prefetched (32 bytes step)                 :   2926.4 MB/s
 NEON copy prefetched (64 bytes step)                 :   2920.4 MB/s (0.6%)
 NEON unrolled copy                                   :   2263.8 MB/s (0.9%)
 NEON unrolled copy prefetched (32 bytes step)        :   3242.2 MB/s (2.6%)
 NEON unrolled copy prefetched (64 bytes step)        :   3263.9 MB/s (1.1%)
 NEON copy backwards                                  :   1224.7 MB/s (0.3%)
 NEON copy backwards prefetched (32 bytes step)       :   1436.9 MB/s (0.3%)
 NEON copy backwards prefetched (64 bytes step)       :   1435.4 MB/s
 NEON 2-pass copy                                     :   2074.9 MB/s (3.5%)
 NEON 2-pass copy prefetched (32 bytes step)          :   2250.0 MB/s
 NEON 2-pass copy prefetched (64 bytes step)          :   2249.9 MB/s (0.4%)
 NEON unrolled 2-pass copy                            :   1390.9 MB/s (0.7%)
 NEON unrolled 2-pass copy prefetched (32 bytes step) :   1720.8 MB/s (0.4%)
 NEON unrolled 2-pass copy prefetched (64 bytes step) :   1733.9 MB/s
 NEON fill                                            :   4894.5 MB/s (1.1%)
 NEON fill backwards                                  :   1839.1 MB/s
 VFP copy                                             :   2481.4 MB/s (0.6%)
 VFP 2-pass copy                                      :   1324.4 MB/s (0.3%)
 ARM fill (STRD)                                      :   4892.5 MB/s (1.2%)
 ARM fill (STM with 8 registers)                      :   4870.3 MB/s (0.7%)
 ARM fill (STM with 4 registers)                      :   4897.5 MB/s (0.4%)
 ARM copy prefetched (incr pld)                       :   2945.4 MB/s (0.2%)
 ARM copy prefetched (wrap pld)                       :   2776.4 MB/s (3.0%)
 ARM 2-pass copy prefetched (incr pld)                :   1638.2 MB/s (0.4%)
 ARM 2-pass copy prefetched (wrap pld)                :   1616.3 MB/s

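The 'prefetched (32/64 bytes step)' variants above consistently beat their plain counterparts because they issue software prefetches (the ARM PLD hint) a fixed distance ahead of the copy cursor, so SDRAM reads overlap with the stores. A portable C sketch of the idea follows; the prefetch distance is an illustrative guess, not a tuned value, and the real benchmark uses hand-written ARM/NEON assembly rather than this code.

/* Rough sketch of what the "copy prefetched (N bytes step)" variants do:
 * issue a software prefetch a fixed distance ahead of the copy cursor so
 * SDRAM reads overlap with the stores. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define STEP            64        /* bytes copied per iteration */
#define PREFETCH_AHEAD  256       /* how far ahead of the cursor to prefetch */

void copy_with_prefetch(uint8_t *dst, const uint8_t *src, size_t len)
{
    size_t i = 0;

    for (; i + STEP <= len; i += STEP) {
        /* hint: src[i + PREFETCH_AHEAD] will be read soon, low temporal locality */
        __builtin_prefetch(src + i + PREFETCH_AHEAD, 0, 0);
        memcpy(dst + i, src + i, STEP);
    }
    memcpy(dst + i, src + i, len - i);           /* tail */
}

The 32-byte and 64-byte step results above come out nearly identical, which is consistent with the 64-byte cache lines of this CPU.
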
==========================================================================
== Framebuffer read tests.                                              ==
==                                                                      ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled.       ==
== Writes to such framebuffers are quite fast, but reads are much       ==
== slower and very sensitive to the alignment and the selection of      ==
== CPU instructions which are used for accessing memory.                ==
==                                                                      ==
== Many x86 systems allocate the framebuffer in the GPU memory,         ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover,    ==
== PCI-E is asymmetric and handles reads a lot worse than writes.       ==
==                                                                      ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer    ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
== performance improvement. For example, the xf86-video-fbturbo DDX     ==
== uses this trick.                                                     ==
==========================================================================

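In essence these tests map the framebuffer (uncached, write-combined memory) and read it back with ordinary CPU loads. A minimal sketch of that kind of access is below; the /dev/fb0 path and the 8 MiB size are assumptions for illustration, whereas the real test queries the actual framebuffer geometry and exercises several instruction mixes (NEON, VFP, plain ARM).

/* Minimal sketch of a framebuffer read: map the framebuffer and sum it
 * with ordinary CPU loads. Path and size are assumed for the sketch. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t fb_size = 8 * 1024 * 1024;          /* assumed size, for the sketch */
    int fd = open("/dev/fb0", O_RDONLY);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    const uint32_t *fb = mmap(NULL, fb_size, PROT_READ, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    uint32_t sum = 0;                          /* keeps the reads from being optimized out */
    for (size_t i = 0; i < fb_size / sizeof(uint32_t); i++)
        sum += fb[i];

    printf("read %zu bytes, checksum %u\n", fb_size, sum);
    munmap((void *)fb, fb_size);
    close(fd);
    return 0;
}
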
 NEON read (from framebuffer)                         :  12259.8 MB/s (0.2%)
 NEON copy (from framebuffer)                         :   7046.1 MB/s (0.8%)
 NEON 2-pass copy (from framebuffer)                  :   4664.6 MB/s (0.3%)
 NEON unrolled copy (from framebuffer)                :   5722.7 MB/s (0.3%)
 NEON 2-pass unrolled copy (from framebuffer)         :   3815.3 MB/s
 VFP copy (from framebuffer)                          :   5724.9 MB/s
 VFP 2-pass copy (from framebuffer)                   :   3521.2 MB/s
 ARM copy (from framebuffer)                          :   7590.9 MB/s (0.1%)
 ARM 2-pass copy (from framebuffer)                   :   3813.6 MB/s (0.6%)

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger is the buffer, the more significant   ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we are expecting to see   ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers are representing extra time, which needs to  ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. In the case if    ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================

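The latency numbers come from pointer chasing: the buffer is turned into a randomly permuted chain so every load depends on the previous one, and the 'dual' column walks two independent chains at once to expose memory-level parallelism. A simplified C sketch of the idea follows; helper names are made up, and the timing loop, L1-latency subtraction and buffer-size sweep of the real tool are omitted.

/* Simplified sketch of the latency measurement idea: link the buffer's
 * cells into a randomly permuted chain so each load depends on the
 * previous one ("single random read"), and walk two independent chains
 * together ("dual random read") to see whether the memory subsystem can
 * overlap outstanding misses. */
#include <stddef.h>
#include <stdlib.h>

void **build_chain(size_t count)
{
    void **buf = malloc(count * sizeof(void *));
    size_t *order = malloc(count * sizeof(size_t));

    for (size_t i = 0; i < count; i++)
        order[i] = i;
    for (size_t i = count - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }
    for (size_t i = 0; i < count; i++)              /* cell -> next cell in shuffled order */
        buf[order[i]] = &buf[order[(i + 1) % count]];

    free(order);
    return buf;
}

void *chase_one(void **p, long steps)               /* single random read */
{
    while (steps--)
        p = *p;
    return p;
}

void chase_two(void **a, void **b, long steps,      /* dual random read */
               void **out_a, void **out_b)
{
    while (steps--) {
        a = *a;
        b = *b;
    }
    *out_a = a;
    *out_b = b;
}

In the tables below, dual random read stays well under twice the single-read time, so the memory subsystem does overlap independent misses to some extent.
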
block size : single random read / dual random read
      1024 :    0.0 ns          /     0.1 ns
      2048 :    0.0 ns          /     0.0 ns
      4096 :    0.0 ns          /     0.1 ns
      8192 :    0.0 ns          /     0.1 ns
     16384 :    0.0 ns          /     0.0 ns
     32768 :    0.0 ns          /     0.1 ns
     65536 :    4.4 ns          /     6.8 ns
    131072 :    6.7 ns          /     9.0 ns
    262144 :    9.6 ns          /    11.9 ns
    524288 :   11.0 ns          /    13.6 ns
   1048576 :   12.0 ns          /    14.6 ns
   2097152 :   20.9 ns          /    29.0 ns
   4194304 :   96.1 ns          /   145.2 ns
   8388608 :  134.7 ns          /   184.3 ns
  16777216 :  154.8 ns          /   201.4 ns
  33554432 :  171.2 ns          /   220.1 ns
  67108864 :  181.0 ns          /   232.4 ns

Debian Jessie, kernel 4.9.37, 933 MHz ddr_freq

root@odroidxu4:/usr/local/src/tinymembench# taskset -c 4-7 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and writen           ==
==         bytes would have provided twice higher numbers)              ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

 C copy backwards                                     :   1239.7 MB/s
 C copy backwards (32 byte blocks)                    :   1248.5 MB/s
 C copy backwards (64 byte blocks)                    :   2470.7 MB/s (2.4%)
 C copy                                               :   1272.5 MB/s
 C copy prefetched (32 bytes step)                    :   1532.0 MB/s (0.3%)
 C copy prefetched (64 bytes step)                    :   1529.5 MB/s
 C 2-pass copy                                        :   1185.6 MB/s (0.2%)
 C 2-pass copy prefetched (32 bytes step)             :   1467.7 MB/s (2.6%)
 C 2-pass copy prefetched (64 bytes step)             :   1469.1 MB/s (0.3%)
 C fill                                               :   5469.8 MB/s (1.3%)
 C fill (shuffle within 16 byte blocks)               :   1933.3 MB/s (0.2%)
 C fill (shuffle within 32 byte blocks)               :   1933.5 MB/s (0.4%)
 C fill (shuffle within 64 byte blocks)               :   2036.7 MB/s (2.8%)
 ---
 standard memcpy                                      :   2469.9 MB/s (0.6%)
 standard memset                                      :   5463.3 MB/s (0.8%)
 ---
 NEON read                                            :   3600.0 MB/s (0.4%)
 NEON read prefetched (32 bytes step)                 :   4624.0 MB/s (2.5%)
 NEON read prefetched (64 bytes step)                 :   4635.5 MB/s (1.1%)
 NEON read 2 data streams                             :   3726.9 MB/s (0.3%)
 NEON read 2 data streams prefetched (32 bytes step)  :   4813.4 MB/s (0.4%)
 NEON read 2 data streams prefetched (64 bytes step)  :   4822.3 MB/s (2.8%)
 NEON copy                                            :   2903.0 MB/s
 NEON copy prefetched (32 bytes step)                 :   3281.4 MB/s (0.6%)
 NEON copy prefetched (64 bytes step)                 :   3277.3 MB/s (0.6%)
 NEON unrolled copy                                   :   2526.1 MB/s (2.3%)
 NEON unrolled copy prefetched (32 bytes step)        :   3613.4 MB/s (0.2%)
 NEON unrolled copy prefetched (64 bytes step)        :   3636.0 MB/s (2.7%)
 NEON copy backwards                                  :   1328.7 MB/s (0.2%)
 NEON copy backwards prefetched (32 bytes step)       :   1550.8 MB/s (0.2%)
 NEON copy backwards prefetched (64 bytes step)       :   1549.1 MB/s (2.0%)
 NEON 2-pass copy                                     :   2131.3 MB/s
 NEON 2-pass copy prefetched (32 bytes step)          :   2426.6 MB/s
 NEON 2-pass copy prefetched (64 bytes step)          :   2445.5 MB/s (0.3%)
 NEON unrolled 2-pass copy                            :   1509.4 MB/s (0.2%)
 NEON unrolled 2-pass copy prefetched (32 bytes step) :   1865.9 MB/s (2.6%)
 NEON unrolled 2-pass copy prefetched (64 bytes step) :   1882.5 MB/s (1.1%)
 NEON fill                                            :   5462.2 MB/s (1.1%)
 NEON fill backwards                                  :   1962.6 MB/s
 VFP copy                                             :   2887.6 MB/s (0.6%)
 VFP 2-pass copy                                      :   1467.9 MB/s (1.9%)
 ARM fill (STRD)                                      :   5425.2 MB/s (0.4%)
 ARM fill (STM with 8 registers)                      :   5448.5 MB/s (0.5%)
 ARM fill (STM with 4 registers)                      :   5462.4 MB/s (0.4%)
 ARM copy prefetched (incr pld)                       :   3327.0 MB/s (3.5%)
 ARM copy prefetched (wrap pld)                       :   3167.3 MB/s (0.6%)
 ARM 2-pass copy prefetched (incr pld)                :   1796.8 MB/s (0.9%)
 ARM 2-pass copy prefetched (wrap pld)                :   1766.6 MB/s

==========================================================================
== Framebuffer read tests.                                              ==
==                                                                      ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled.       ==
== Writes to such framebuffers are quite fast, but reads are much       ==
== slower and very sensitive to the alignment and the selection of      ==
== CPU instructions which are used for accessing memory.                ==
==                                                                      ==
== Many x86 systems allocate the framebuffer in the GPU memory,         ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover,    ==
== PCI-E is asymmetric and handles reads a lot worse than writes.       ==
==                                                                      ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer    ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
== performance improvement. For example, the xf86-video-fbturbo DDX     ==
== uses this trick.                                                     ==
==========================================================================

 NEON read (from framebuffer)                         :  12205.4 MB/s
 NEON copy (from framebuffer)                         :   7045.1 MB/s (0.4%)
 NEON 2-pass copy (from framebuffer)                  :   4654.7 MB/s (1.7%)
 NEON unrolled copy (from framebuffer)                :   5721.0 MB/s
 NEON 2-pass unrolled copy (from framebuffer)         :   3818.8 MB/s
 VFP copy (from framebuffer)                          :   5733.0 MB/s
 VFP 2-pass copy (from framebuffer)                   :   3524.9 MB/s (0.2%)
 ARM copy (from framebuffer)                          :   7588.9 MB/s (0.1%)
 ARM 2-pass copy (from framebuffer)                   :   3787.2 MB/s

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger is the buffer, the more significant   ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we are expecting to see   ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers are representing extra time, which needs to  ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. In the case if    ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================

block size : single random read / dual random read
      1024 :    0.0 ns          /     0.1 ns
      2048 :    0.0 ns          /     0.1 ns
      4096 :    0.0 ns          /     0.1 ns
      8192 :    0.0 ns          /     0.1 ns
     16384 :    0.0 ns          /     0.0 ns
     32768 :    0.0 ns          /     0.1 ns
     65536 :    4.4 ns          /     6.9 ns
    131072 :    6.7 ns          /     9.0 ns
    262144 :    9.6 ns          /    12.0 ns
    524288 :   11.0 ns          /    13.7 ns
   1048576 :   12.0 ns          /    14.7 ns
   2097152 :   20.2 ns          /    28.6 ns
   4194304 :   89.5 ns          /   135.1 ns
   8388608 :  125.3 ns          /   172.1 ns
  16777216 :  143.6 ns          /   187.8 ns
  33554432 :  158.0 ns          /   210.4 ns
  67108864 :  166.9 ns          /   222.6 ns

Official Ubuntu Xenial Hardkernel image, 4.9.28, no ddr_freq changes:

root@odroid:/usr/local/src/tinymembench# taskset -c 4-7 ./tinymembench
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and writen           ==
==         bytes would have provided twice higher numbers)              ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

 C copy backwards                                     :   1178.8 MB/s
 C copy backwards (32 byte blocks)                    :   1165.0 MB/s
 C copy backwards (64 byte blocks)                    :   2397.9 MB/s
 C copy                                               :   2574.3 MB/s
 C copy prefetched (32 bytes step)                    :   2847.8 MB/s
 C copy prefetched (64 bytes step)                    :   2916.8 MB/s
 C 2-pass copy                                        :   1356.8 MB/s
 C 2-pass copy prefetched (32 bytes step)             :   1619.3 MB/s
 C 2-pass copy prefetched (64 bytes step)             :   1633.6 MB/s
 C fill                                               :   4935.2 MB/s (1.0%)
 C fill (shuffle within 16 byte blocks)               :   1845.1 MB/s
 C fill (shuffle within 32 byte blocks)               :   1844.7 MB/s
 C fill (shuffle within 64 byte blocks)               :   1927.9 MB/s
 ---
 standard memcpy                                      :   2316.3 MB/s
 standard memset                                      :   4950.1 MB/s (1.1%)
 ---
 NEON read                                            :   3370.8 MB/s
 NEON read prefetched (32 bytes step)                 :   4294.9 MB/s
 NEON read prefetched (64 bytes step)                 :   4304.6 MB/s
 NEON read 2 data streams                             :   3485.0 MB/s
 NEON read 2 data streams prefetched (32 bytes step)  :   4478.6 MB/s
 NEON read 2 data streams prefetched (64 bytes step)  :   4487.0 MB/s
 NEON copy                                            :   2652.3 MB/s
 NEON copy prefetched (32 bytes step)                 :   2953.8 MB/s (0.2%)
 NEON copy prefetched (64 bytes step)                 :   2942.8 MB/s
 NEON unrolled copy                                   :   2250.6 MB/s
 NEON unrolled copy prefetched (32 bytes step)        :   3280.6 MB/s
 NEON unrolled copy prefetched (64 bytes step)        :   3302.6 MB/s
 NEON copy backwards                                  :   1226.8 MB/s
 NEON copy backwards prefetched (32 bytes step)       :   1441.2 MB/s
 NEON copy backwards prefetched (64 bytes step)       :   1440.6 MB/s
 NEON 2-pass copy                                     :   2111.7 MB/s
 NEON 2-pass copy prefetched (32 bytes step)          :   2251.2 MB/s
 NEON 2-pass copy prefetched (64 bytes step)          :   2252.0 MB/s
 NEON unrolled 2-pass copy                            :   1392.8 MB/s
 NEON unrolled 2-pass copy prefetched (32 bytes step) :   1737.1 MB/s
 NEON unrolled 2-pass copy prefetched (64 bytes step) :   1752.6 MB/s
 NEON fill                                            :   4927.3 MB/s (0.8%)
 NEON fill backwards                                  :   1857.4 MB/s
 VFP copy                                             :   2485.4 MB/s
 VFP 2-pass copy                                      :   1328.5 MB/s
 ARM fill (STRD)                                      :   4930.7 MB/s (1.0%)
 ARM fill (STM with 8 registers)                      :   4914.0 MB/s
 ARM fill (STM with 4 registers)                      :   4941.0 MB/s (0.2%)
 ARM copy prefetched (incr pld)                       :   3177.5 MB/s
 ARM copy prefetched (wrap pld)                       :   2989.0 MB/s
 ARM 2-pass copy prefetched (incr pld)                :   1700.1 MB/s
 ARM 2-pass copy prefetched (wrap pld)                :   1670.9 MB/s

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger is the buffer, the more significant   ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we are expecting to see   ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers are representing extra time, which needs to  ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. In the case if    ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================

block size : single random read / dual random read
      1024 :    0.0 ns          /     0.0 ns
      2048 :    0.0 ns          /     0.0 ns
      4096 :    0.0 ns          /     0.0 ns
      8192 :    0.0 ns          /     0.0 ns
     16384 :    0.0 ns          /     0.0 ns
     32768 :    0.0 ns          /     0.0 ns
     65536 :    4.4 ns          /     6.8 ns
    131072 :    6.7 ns          /     9.0 ns
    262144 :    9.6 ns          /    11.9 ns
    524288 :   11.0 ns          /    13.6 ns
   1048576 :   11.9 ns          /    14.5 ns
   2097152 :   19.3 ns          /    27.6 ns
   4194304 :   95.0 ns          /   143.0 ns
   8388608 :  133.6 ns          /   181.7 ns
  16777216 :  153.2 ns          /   196.7 ns
  33554432 :  168.5 ns          /   216.5 ns
  67108864 :  177.9 ns          /   231.5 ns