robertbira

Italian Translation Report: Node.js [Part 29 - 1456 words]

Aug 19th, 2018
  1. How to Write and Run Benchmarks in Node.js Core
  2. Table of Contents
  3. Prerequisites
  4. HTTP Benchmark Requirements
  5. Benchmark Analysis Requirements
  6. Running benchmarks
  7. Running individual benchmarks
  8. Running all benchmarks
  9. Comparing Node.js versions
  10. Comparing parameters
  11. Running Benchmarks on the CI
  12. Creating a benchmark
  13. Basics of a benchmark
  14. Creating an HTTP benchmark
  15. Basic Unix tools are required for some benchmarks. Git for Windows includes Git Bash and the necessary tools, which need to be included in the global Windows PATH.
  16. Most of the HTTP benchmarks require a benchmarker to be installed.
  17. This can be either wrk or autocannon.
  18. autocannon is a Node.js script that can be installed using npm install -g autocannon.
  19. It will use the Node.js executable that is in the path.
  20. In order to compare two HTTP benchmark runs, make sure that the Node.js version in the path is not altered.
  21. wrk may be available through one of the available package managers.
  22. If not, it can be easily built from source via make.
  23. By default, wrk will be used as the benchmarker.
  24. If it is not available, autocannon will be used in its place.
  25. When creating an HTTP benchmark, the benchmarker to be used should be specified by providing it as an argument:
  26. To run the http2 benchmarks, the h2load benchmarker must be used.
  27. The h2load tool is a component of the nghttp2 project and may be installed from nghttp2.org or built from source.
  28. To analyze the results, R should be installed.
  29. Use one of the available package managers or download it from https://www.r-project.org/.
  30. The R packages ggplot2 and plyr are also used and can be installed using the R REPL.
  31. In the event that a message is reported stating that a CRAN mirror must be selected first, specify a mirror by adding the repo parameter.
  32. If we used the "http://cran.us.r-project.org" mirror, it could look something like this: install.packages("ggplot2", repo="http://cran.us.r-project.org").
  33. Of course, use an appropriate mirror based on location.
  34. A list of mirrors is available at https://cran.r-project.org/mirrors.html.
  35. This can be useful for debugging a benchmark or doing a quick performance measure.
  36. But it does not provide the statistical information needed to draw any conclusions about the performance.
  37. Individual benchmarks can be executed by simply executing the benchmark script with node.
  38. Each line represents a single benchmark with parameters specified as variable=value.
  39. Each configuration combination is executed in a separate process.
  40. This ensures that benchmark results aren't affected by the execution order due to V8 optimizations.
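The expansion of configuration values into per-process combinations can be sketched as follows. This is an illustrative JavaScript sketch, not the actual common.js implementation, and the configs shape shown is hypothetical:

```javascript
'use strict';
// Illustrative sketch (not the real common.js code): how configuration
// arrays expand into the individual benchmark combinations, each of which
// the runner then executes in a separate process.
function combinations(configs) {
  return Object.entries(configs).reduce(
    (acc, [key, values]) =>
      acc.flatMap((combo) => values.map((value) => ({ ...combo, [key]: value }))),
    [{}]
  );
}

// Hypothetical configuration, similar in shape to a real benchmark's configs.
const configs = { n: [1000], type: ['fast', 'slow'], size: [16, 128, 1024] };
const combos = combinations(configs);
console.log(combos.length); // 1 * 2 * 3 = 6 combinations -> 6 processes
```

Running each combination in its own process is what isolates the results from V8 optimization carry-over between runs.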
  41. The last number is the rate of operations measured in ops/sec (higher is better).
  42. Furthermore, a subset of the configurations can be specified by setting them in the process arguments:
  43. Similar to running individual benchmarks, a group of benchmarks can be executed by using the run.js tool.
  44. To see how to use this script, run node benchmark/run.js.
  45. Again, this does not provide the statistical information needed to draw any conclusions.
  46. It is possible to execute more groups by adding extra process arguments.
  47. To compare the effect of a new Node.js version, use the compare.js tool.
  48. This will run each benchmark multiple times, making it possible to calculate statistics on the performance measures.
  49. To see how to use this script, run node benchmark/compare.js.
  50. To show how to check for a possible performance improvement, a pull request will be used as an example.
  51. This pull request claims to improve the performance of the module.
  52. First build two versions of Node.js, one from the master branch and another with the pull request applied.
  53. To run multiple compiled versions in parallel, you need to copy the output of the build:
  54. Check out the following example:
  55. The compare.js tool will then produce a csv file with the benchmark results.
  56. Tips: there are some useful options of compare.js.
  57. For example, if you want to compare the benchmark of a single script instead of a whole module, you can use the --filter option:
  58. For analysing the benchmark results, use the compare.R tool.
  59. In the output, improvement is the relative improvement of the new version; hopefully this is positive.
  60. confidence tells if there is enough statistical evidence to validate the improvement.
  61. If there is enough evidence, there will be at least one star; more stars mean stronger evidence.
  62. However, if there are no stars, don't draw any conclusions based on the improvement. Sometimes this is fine; for example, if no improvements are expected, there shouldn't be any stars.
  63. A word of caution: statistics is not a foolproof tool.
  64. If a benchmark shows a statistically significant difference, there is a 5% risk that this difference doesn't actually exist.
  65. For a single benchmark this is not an issue.
  66. But when considering 20 benchmarks, it is likely that at least one of them will show significance when it shouldn't.
  67. A possible solution is to instead consider at least two stars as the threshold; in that case the risk is 1%.
  68. If three stars is considered, the risk is 0.1%.
  69. However, this may require more runs to obtain (the number of runs can be set with --runs).
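The arithmetic behind this caution can be made concrete. With a per-benchmark false-positive risk p, the probability that at least one of k independent benchmarks shows a spurious "significant" result is 1 - (1 - p)^k (a rough model that assumes the tests are independent):

```javascript
'use strict';
// Probability that at least one of k independent tests at significance
// level p produces a false positive. Illustrative arithmetic only.
function familyWiseRisk(p, k) {
  return 1 - Math.pow(1 - p, k);
}

console.log(familyWiseRisk(0.05, 20).toFixed(2));  // 0.64: one-star threshold, a false star is likely
console.log(familyWiseRisk(0.01, 20).toFixed(2));  // 0.18: two-star threshold
console.log(familyWiseRisk(0.001, 20).toFixed(2)); // 0.02: three-star threshold
```

This is why raising the star threshold makes conclusions across many benchmarks much safer.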
  70. For the statistically minded, the R script performs an independent/unpaired 2-group t-test, with the null hypothesis that the performance is the same for both versions.
  72. The confidence field will show a star if the p-value is less than 0.05.
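As a rough illustration of the underlying test (sketched here as an unpaired two-sample t statistic in Welch's form; the R script is the authoritative implementation):

```javascript
'use strict';
// Sketch of the two-sample t statistic behind the analysis: the difference
// of sample means divided by the standard error of that difference.
function mean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }
function variance(xs) {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1);
}
function tStatistic(a, b) {
  const se = Math.sqrt(variance(a) / a.length + variance(b) / b.length);
  return (mean(a) - mean(b)) / se;
}

// Hypothetical ops/sec samples for old and new versions of Node.js.
const oldRuns = [100, 101, 99, 100, 102];
const newRuns = [110, 111, 109, 110, 112];
console.log(tStatistic(newRuns, oldRuns)); // large positive value: strong evidence of improvement
```

A |t| far from zero (relative to the t distribution for the given degrees of freedom) yields a small p-value and hence a star.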
  73. The compare.R tool can also produce a box plot by using the --plot option.
  74. In this case there are 48 different benchmark combinations, and there may be a need to filter the csv file.
  75. This can be done while benchmarking using the --set parameter or by filtering results afterwards using tools such as sed or grep.
  76. In the sed case, be sure to keep the first line since that contains the header information.
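The "keep the header" filtering step can also be sketched in JavaScript (the helper and the column names are hypothetical; sed or grep operate on the raw file in the same spirit):

```javascript
'use strict';
// Sketch: filter benchmark csv rows while always keeping the header line,
// analogous in spirit to a sed command that prints line 1 plus matches.
function filterCsv(csvText, predicate) {
  const [header, ...rows] = csvText.trim().split('\n');
  return [header, ...rows.filter(predicate)].join('\n');
}

// Hypothetical csv shaped like a benchmark results file.
const csv = [
  'filename,configuration,rate',
  'url.js,n=1000 type=a,5000',
  'url.js,n=1000 type=b,4000',
].join('\n');

console.log(filterCsv(csv, (row) => row.includes('type=a')));
```

The header row must survive the filter because the R scripts read column names from it.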
  77. It can be useful to compare the performance for different parameters, for example to analyze the time complexity.
  78. To do this, use the scatter.js tool; this will run a benchmark multiple times and generate a csv with the results.
  79. To see how to use this script, run node benchmark/scatter.js.
  80. After generating the csv, a comparison table can be created using the scatter.R tool.
  81. Even more useful, it creates an actual scatter plot when using the --plot option.
  82. Because the scatter plot can only show two variables, the rest are aggregated.
  83. Sometimes aggregating is a problem; this can be solved by filtering.
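The aggregation can be pictured as grouping by the two plotted variables and averaging the rate over everything else (an illustrative sketch with hypothetical field names):

```javascript
'use strict';
// Sketch: average the rate over every combination that shares the same
// values for the plotted key fields. Illustrative only.
function aggregate(rows, keyFields) {
  const groups = new Map();
  for (const row of rows) {
    const key = keyFields.map((f) => row[f]).join('|');
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(row.rate);
  }
  return [...groups].map(([key, rates]) => ({
    key,
    rate: rates.reduce((a, b) => a + b, 0) / rates.length,
  }));
}

// Hypothetical result rows: encoding is not plotted, so it gets averaged out.
const rows = [
  { size: 16, type: 'fast', encoding: 'ascii', rate: 100 },
  { size: 16, type: 'fast', encoding: 'utf8', rate: 200 },
  { size: 16, type: 'slow', encoding: 'ascii', rate: 50 },
];
console.log(aggregate(rows, ['size', 'type']));
```

Averaging across very different hidden configurations is exactly why filtering first can give a more honest plot.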
  84. This can be done while benchmarking using the --set parameter or by filtering results afterwards using tools such as sed or grep.
  85. In the sed case, be sure to keep the first line since that contains the header information.
  86. To see the performance impact of a Pull Request by running benchmarks on the CI, check out How to: Running core benchmarks on Node.js CI.
  87. All benchmarks use the require('../common.js') module.
  88. This contains the createBenchmark method, which will set up the benchmark.
  89. The arguments of createBenchmark are:
  90. The benchmark function, where the code running operations and controlling timers should go
  91. The benchmark parameters.
  92. createBenchmark will run all possible combinations of these parameters, unless specified otherwise.
  93. Each configuration is a property with an array of possible values.
  94. Note that the configuration values can only be strings or numbers.
  95. The benchmark options.
  96. At the moment, only the flags option for specifying command line flags is supported.
  97. createBenchmark returns a bench object, which is used for timing the runtime of the benchmark.
  98. Run bench.start() after the initialization and bench.end(n) when the benchmark is done.
  99. n is the number of operations performed in the benchmark.
  100. The benchmark script will be run twice:
  101. The first pass will configure the benchmark with the combination of parameters specified in configs, and WILL NOT run the main function.
  102. In this pass, no flags except the ones directly passed via the command line when running the benchmarks will be used.
  103. In the second pass, the main function will be run, and the process will be launched with:
  104. The flags passed into createBenchmark (the third argument)
  105. The flags in the command passed when the benchmark was run
  106. Beware that any code outside the main function will be run twice in different processes.
  107. This could be troublesome if the code outside the main function has side effects.
  108. In general, prefer putting the code inside the main function if it's more than just declaration.
  109. Number of operations, specified here so they show up in the report.
  110. Most benchmarks just use one value for all runs.
  111. Custom configurations
  112. Add --expose-internals in order to require internal modules in main.
  113. main and configs are required, options is optional.
  114. Note that any code outside main will be run twice, in different processes, with different command line arguments.
  115. Only flags that have been passed to createBenchmark earlier will be in effect when main is run.
  116. In order to benchmark the internal modules, require them here.
  117. Start the timer. Do operations here.
  118. End the timer, pass in the number of operations
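The start/end timing contract described by these comments can be illustrated with a self-contained sketch. This mimics the bench object's behavior; it is NOT the real common.js implementation:

```javascript
'use strict';
// Minimal stand-in for the bench object's timing contract, for illustration.
function createBench() {
  let startTime;
  return {
    start() { startTime = process.hrtime.bigint(); }, // Start the timer
    end(n) {                                          // End the timer, pass in the ops count
      const elapsedSec = Number(process.hrtime.bigint() - startTime) / 1e9;
      const rate = n / elapsedSec;                    // ops/sec, higher is better
      console.log(`${n} operations -> ${rate.toFixed(0)} ops/sec`);
      return rate;
    },
  };
}

const bench = createBench();
const n = 1e5;
bench.start();                        // Start the timer
let acc = 0;
for (let i = 0; i < n; i++) acc += i; // Do operations here
const rate = bench.end(n);            // End the timer, pass in the number of operations
```

The real bench object additionally reports the configuration so the runner can print one line per combination.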
  119. The bench object returned by createBenchmark implements an http method.
  120. It can be used to run an external tool to benchmark HTTP servers.
  121. Supported options keys are:
  122. connections: number of concurrent connections to use, defaults to 100
  123. duration: duration of the benchmark in seconds, defaults to 10
  124. benchmarker: benchmarker to use, defaults to common.default_http_benchmarker
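A sketch of how such options might be normalized against the documented defaults. The key names follow the text above, but the helper and the default benchmarker value ('wrk') are assumptions, not the real implementation:

```javascript
'use strict';
// Hypothetical normalization of the http options described above.
// The 'wrk' fallback is an assumption for illustration only.
function normalizeHttpOptions(options = {}) {
  return {
    connections: options.connections ?? 100, // concurrent connections, default 100
    duration: options.duration ?? 10,        // benchmark duration in seconds, default 10
    benchmarker: options.benchmarker ?? 'wrk', // assumed default benchmarker
  };
}

console.log(normalizeHttpOptions({ duration: 5 }));
```

Passing only the keys you care about and letting the rest default keeps HTTP benchmark scripts short.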