operf-macro


A macro-benchmarking suite for OCaml


Online Resources

operf-macro on GitHub: latest sources in the official Git repository
OPAM repo: an OPAM repository with some macro-benchmarks
ocaml-benchs on GitHub: the sources of our set of macro-benchmarks

1. Introduction

operf-macro is a macro-benchmarking (i.e. whole-program) suite for OCaml. It provides a framework to define, run, and measure metrics from such programs, including elapsed time, elapsed cycles, and OCaml GC statistics. The aim of a macro-benchmark is to measure the performance of the particular compiler that generated it. It can also be used to compare different versions of a particular program, or to compare the performance of several programs whose functionality is equivalent.

Contrary to micro-benchmarks, which are OCaml functions of some parameter(s) representing the typical size of the problem (size of an array to iterate on, number of iterations of a loop, etc.), macro-benchmarks generally do not take parameters. The other difference is that, as said above, they are whole OCaml programs as opposed to functions.

Eventually, the operf-macro framework will serve as a unification layer for presenting results from micro-benchmarks as well. Some tools are already available; for instance, the injector program can import inline micro-benchmark results from the Jane Street Core library into the operf-macro framework.

For now, however, it is safer to stick with the micro-benchmarking tools already available, such as core_bench or operf-micro. An interesting read about the core_bench library can be found on the Jane Street OCaml blog.

The other important thing to keep in mind from the start is that operf-macro is highly integrated into OPAM: benchmarks are distributed as OPAM packages, and results are organized per OPAM compiler switch.

Although there are means to bypass this design rule, it is probably easier to stick to it. Some pointers will be given in the Usage section regarding running independent benchmarks. A method will also be given to transform any OCaml installation into an OPAM switch.

2. Installation

You need OPAM version 1.2.

$ opam repo add operf-macro git:
$ opam install core async async_smtp core_bench    # optional
$ opam install operf-macro all-bench

You can install all, some, or none of the packages listed on the second line; they are optional dependencies of some benchmarks.

The last line installs all-bench, a meta-package that will always depend on all the available benchmarks.

3. Basic usage

The operf-macro package installs an executable named operf-macro. This is the single entry point to the framework, and all functionality derives from it. It is a CLI program built with cmdliner, so you can easily obtain help on the available commands directly from it. We give here only some tips to get started.

3.1. Listing available benchmarks

$ operf-macro list "4.01*"

Any number of arguments may be given; each is treated as a glob (shell) pattern matching a compiler version. In this case, all installed benchmarks for available compiler switches whose name starts with "4.01" will be printed on screen.
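
For example, several patterns can be combined in a single invocation (the switch versions below are only illustrative):

$ operf-macro list "4.01*" "4.02*"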

3.2. Running benchmarks

$ operf-macro run

This will run all benchmarks installed in the OPAM switch you are currently in, and gather the results in ~/.cache/operf/macro/<benchmark>/. You can interrupt the program at any time during execution: the results of successfully executed benchmarks will be saved. Alternatively, you can use either

$ operf-macro run [bench_names_glob]*
$ operf-macro run --skip [bench_names_glob]*

to run only a selection of benchmarks. The first form includes (and the second form excludes) the selected benchmarks, and only those.
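
For instance, assuming a benchmark named sequence is installed (the name is only illustrative), a glob can select it together with its variants, or leave it out:

$ operf-macro run "sequence*"
$ operf-macro run --skip "sequence*"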

3.3. Obtaining results

Raw data

operf-macro stores its results in ~/.cache/operf/macro. Here you will find one directory per benchmark and, inside each, one .result file per compiler. Each file contains an s-expression that is the serialized version of operf-macro's Result value. This mostly consists of the individual measurements per execution, such as real time, cycles, and so on.
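
As a sketch, the layout looks like the following (the benchmark names are taken from the table in section 5; naming the .result files after the compiler switch is an assumption based on the "one .result file per compiler" rule above):

  ~/.cache/operf/macro/
    kb/
      4.01.0.result
      4.02.1.result
    sequence/
      4.01.0.result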

Summaries

Use:

$ operf-macro summarize

This will print a dump of the database of all operf-macro results, as an s-expression, on your screen. Before doing so, it will create a .summary file alongside each .result file (see the previous section) found in ~/.cache/operf/macro.

Results as .csv files to feed your favourite plotting program

$ operf-macro summarize -b csv -t <topic> [-s compiler1,...,compilerN] [benchmark1 ... benchmarkN]

This will print a CSV array of the requested benchmarks (or of all benchmarks if none are specified) for the specified switches (or all switches if none are specified). If you don't specify a topic with the -t option, the output will contain one array per topic, each separated by a newline.
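
For example, the following invocation would produce a CSV array of real-time measurements for two of the benchmarks listed in section 5 (the switch names and the time_real topic name are assumptions; the exact topic names can be obtained from operf-macro's help):

$ operf-macro summarize -b csv -t time_real -s 4.01.0,4.02.1 kb sequence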

Visualizing the results

If you have a recent version of gnuplot compiled with its Qt backend, you can replace csv with qt in the example above (in that case you must specify a topic). This will launch gnuplot in a window and display the CSV array as a bar chart. You can use the -o argument to export the gnuplot .gnu file.
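
Continuing the example above (same hypothetical switch and topic names; passing a file name to -o is an assumption about its syntax), a bar chart could be displayed and the generating script exported with:

$ operf-macro summarize -b qt -t time_real -s 4.01.0,4.02.1 -o kb.gnu kb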

4. Advanced Usage

4.1. Writing benchmarks

.bench file format

Benchmark descriptions must be stored in files with the extension .bench. The format used is an S-expression matching the internal Benchmark.t type:

  type speed = [ `Fast | `Slow | `Slower ] with sexp

  type t = {
    name: string;
    descr: string with default("");
    cmd: string list;
    cmd_check: string list with default([]);
    env: string list option with default(None);
    speed: speed with default(`Fast);
    timeout: int with default(600);
    weight: float with default(1.);
    discard: [`Stdout | `Stderr] list with default([]);
  } with sexp
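
As an illustration, a .bench description could look like the following (the benchmark name, description, and command are hypothetical; fields carrying a default value can be omitted):

  ((name fib)
   (descr "Naive Fibonacci, native code")
   (cmd (fib.native 40))
   (speed Slow)
   (timeout 1200))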

4.2. Running Your New Benchmark

operf-macro

Use:

$ operf-macro perf /path/to/exe arg1 .. argn --batch

This will benchmark the program specified on the command line and print the result as an s-expression on stdout. This s-expression includes an inner s-expression describing the benchmark source: this is your benchmark description. Write it to a .bench file.
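
For example, with a hypothetical native executable fib.native, the output can be captured to a file and the inner benchmark description copied from it into a .bench file:

$ operf-macro perf ./fib.native 40 --batch > fib-perf.sexp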

opam

Another way is to create an OPAM package and let operf-macro know about it. A good starting point is to look at the packages in our benchmarks repo, which will make packaging your own benchmark easier.
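
As a minimal sketch, such a package builds your program and installs both the executable and its .bench description; the exact layout operf-macro expects is best copied from an existing package in the benchmarks repo. A hypothetical fib.install file, assuming the .bench file goes into the package's share directory, might contain:

  bin: ["fib.native" {"fib"}]
  share: ["fib.bench"]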

5. Time information

Name Occurrences Expected time (seconds)
bdd 16 9
kb 6 6
sequence 414 <1
sequence-cps 168 <1
async_rpc 34 <1
coq 2 760
coq-pwith 13 160
g2pp 6 255
jsontrip-actionLabel 250 <1
jsontrip-sample 7 <1
gdump-sample 44 <1
gdump-actionLabel 217 <1
async-echo 15 10
core-seq 304 <1
core-cps 235 6
patdiff 40 32
alt-ergo 5 702
cohttp-async 54 <1
cohttp-lwt 307 <1
js-of-ocaml 8 149
sauvola 5 72
valet-lwt 5 130
valet-async 6 159

6. FAQ

I want operf-macro to measure GC stats for my program!

Please add at the end of your benchmark:

  (* Write GC statistics to the file named by the OCAML_GC_STATS
     environment variable; do nothing if the variable is not set. *)
  try
    let fn = Sys.getenv "OCAML_GC_STATS" in
    let oc = open_out fn in
    Gc.print_stat oc;
    close_out oc  (* closing also flushes the channel *)
  with _ -> ()