diff libc/utils/benchmarks/README.md @ 173:0572611fdcc8 llvm10 llvm12

reorganization done
author Shinji KONO <kono@ie.u-ryukyu.ac.jp>
date Mon, 25 May 2020 11:55:54 +0900
--- a/libc/utils/benchmarks/README.md	Mon May 25 11:50:15 2020 +0900
+++ b/libc/utils/benchmarks/README.md	Mon May 25 11:55:54 2020 +0900
@@ -18,6 +18,7 @@
 apt-get install python3-pip
 pip3 install matplotlib scipy numpy
 ```
+You may need `python3-gtk` or a similar package to display benchmark results.
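+
+To check which backend `matplotlib` will use for display (a quick sanity
+check, not part of the official setup), you can run:
+```
+python3 -c 'import matplotlib; print(matplotlib.get_backend())'
+```
+If this prints a non-interactive backend such as `agg`, results cannot be
+displayed in a window.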
 
 To get good reproducibility it is important to make sure that the system runs in
 `performance` mode. This is achieved by running:
@@ -38,6 +39,26 @@
 make -C /tmp/build -j display-libc-memcpy-benchmark-small
 ```
 
+The `display` target will attempt to open a window on the machine where you're
+running the benchmark. If this does not work in your environment, you may want
+`render` or `run` instead, as detailed below.
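+
+For example, on a headless machine the same benchmark can be rendered to a
+`png` file on disk instead (a sketch following the target naming detailed in
+the next section):
+```
+make -C /tmp/build -j render-libc-memcpy-benchmark-small
+```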
+
+## Benchmarking targets
+
+The benchmarking process occurs in two steps:
+
+1. Benchmark the functions and produce a `json` file
+2. Display (or render) the `json` file
+
+Targets are of the form `<action>-libc-<function>-benchmark-<configuration>`,
+as shown in the example after this list:
+
+ - `action` is one of:
+    - `run`, runs the benchmark and writes the `json` file
+    - `display`, displays the graph on screen
+    - `render`, renders the graph on disk as a `png` file
+ - `function` is one of: `memcpy`, `memcmp`, `memset`
+ - `configuration` is one of: `small`, `big`
+
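+For instance, to benchmark `memcmp` on big sizes and produce the `json` file
+(a sketch; substitute your own build directory for `/tmp/build`):
+```
+make -C /tmp/build -j run-libc-memcmp-benchmark-big
+```
+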
 ## Benchmarking regimes
 
 Using a profiler to observe size distributions for calls into libc functions, it
@@ -62,22 +83,6 @@
 _<sup>1</sup> - The size refers to the size of the buffers to compare and not
 the number of bytes until the first difference._
 
-## Benchmarking targets
-
-The benchmarking process occurs in two steps:
-
-1. Benchmark the functions and produce a `json` file
-2. Display (or renders) the `json` file
-
-Targets are of the form `<action>-libc-<function>-benchmark-<configuration>`
-
- - `action` is one of :
-    - `run`, runs the benchmark and writes the `json` file
-    - `display`, displays the graph on screen
-    - `render`, renders the graph on disk as a `png` file
- - `function` is one of : `memcpy`, `memcmp`, `memset`
- - `configuration` is one of : `small`, `big`
-
 ## Superposing curves
 
 It is possible to **merge** several `json` files into a single graph. This is