printBenchmarks

Benchmarks one or more functions for speed assessment and comparison, and prints results as formatted text. A baseline timing that accounts for benchmarking overheads is kept along with the results and automatically deducted from all timings. A timing indistinguishable from the baseline looping overhead appears with a run time of zero and indicates a function that does too little work to be timed.

Measurement is done in epochs. For each function benchmarked, the smallest time over all epochs is reported; taking the minimum reduces the influence of transient system activity on the measurement.

Benchmark results report time per iteration.
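
The scheme can be pictured with a minimal sketch (illustrative only, not this module's implementation; the helper measure, the loop counts, and the benchmarked body are all made up for the example):

import core.time : Duration;
import std.algorithm.comparison : max, min;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writeln;

// Run fun iters times per epoch and keep the smallest epoch time.
Duration measure(alias fun)(uint iters, uint epochs)
{
	auto best = Duration.max;
	foreach (e; 0 .. epochs)
	{
		auto sw = StopWatch(AutoStart.yes);
		foreach (i; 0 .. iters)
			fun();
		best = min(best, sw.peek());
	}
	return best;
}

void main()
{
	enum iters = 10_000;
	enum epochs = 10;
	// Baseline: the cost of the empty benchmarking loop itself.
	static void doNothing() {}
	immutable baseline = measure!doNothing(iters, epochs);
	immutable raw = measure!({ new char[32]; })(iters, epochs);
	// Work indistinguishable from the baseline reports as zero.
	immutable net = max(raw - baseline, Duration.zero);
	writeln("ns/iter: ", net.total!"nsecs" / iters);
}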

void printBenchmarks(funs...)(File target = stdout)

Parameters

funs

If one or more funs are provided, each is the alias of a function to benchmark (the actual benchmark), optionally preceded by a string that names the benchmark; if the name is omitted, the function's own name is used (see below). Each alias must refer to a function that takes either no arguments or one integral argument, the iteration count. Both shapes are demonstrated in the examples below.

target File

File where output is printed.
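
For example, output can be redirected to a file instead of stdout (a small sketch; the file name is arbitrary and the import of the module declaring printBenchmarks is omitted):

import std.stdio : File;

auto results = File("benchmark-results.txt", "w"); // hypothetical output file
printBenchmarks!(
	"array creation", { new char[32]; })
	(results);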

Examples

printBenchmarks!(
	"file write", { std.file.write("/tmp/deleteme", "hello, world!"); },
	"file read",  { std.file.read("/tmp/deleteme"); })
	();

The example above outputs to stdout:

===============================================================================
Benchmark                                   relative ns/iter  iter/s
===============================================================================
file write                                            144.3K    6.9K
file read                                              26.3K   38.0K
===============================================================================

With internal iteration, the results would be similar:

printBenchmarks!(
	"file write", { std.file.write("/tmp/deleteme", "hello, world!"); },
	"file read",  (uint n) {
		foreach (i; 0 .. n) std.file.read("/tmp/deleteme");
	})
	();

In the example above, the framework iterates the first lambda many times and collects timing information. For the second lambda, instead of doing the iteration itself, the framework simply passes increasing values of n to the lambda. In both cases what gets measured is time per iteration, so the performance profile (and the printout) would be virtually identical to the one in the previous example.

If the call to printBenchmarks does not provide a name for some benchmarks, the name of the benchmarked function is used. (For lambdas, the name is e.g. __lambda5.)

void benchmark_fileWrite()
{
	std.file.write("/tmp/deleteme", "hello, world!");
}
printBenchmarks!(
	benchmark_fileWrite,
	"file read", { std.file.read("/tmp/deleteme"); },
	{ std.file.read("/tmp/deleteme"); })
	();

The example above outputs to stdout:

===============================================================================
Benchmark                                   relative ns/iter  iter/s
===============================================================================
fileWrite()                                            76.0K   13.2K
file read                                              28.6K   34.9K
__lambda2                                              27.8K   36.0K
===============================================================================

If the name of the benchmark starts with "benchmark_" or "benchmark_relative_", that prefix does not appear in the printed name. If the prefix is "benchmark_relative_", the "relative" column is filled in the output. The relative performance is computed against the last non-relative benchmark completed and expressed either as a multiple (suffix "x") or as a percentage (suffix "%"). For example, the percentage is 100.0% if the benchmark runs at the same speed as the baseline, 200.0% if it is twice as fast, and 50.0% if it is half as fast.

printBenchmarks!(
	"file write", { std.file.write("/tmp/deleteme", "hello, world!"); },
	"benchmark_relative_file read",  { std.file.read("/tmp/deleteme"); },
	"benchmark_relative_array creation",  { new char[32]; })
	();

This example has one baseline and two relative tests. The output looks as follows:

===============================================================================
Benchmark                                   relative ns/iter  iter/s
===============================================================================
file write                                            140.2K    7.1K
file read                                    517.8%    27.1K   36.9K
array creation                                1.2Kx    116.0    8.6M
===============================================================================

According to the data above, file reading is 5.178 times faster than file writing (140.2K vs. 27.1K ns per iteration), whereas array creation is roughly 1,200 times faster than file writing (140.2K vs. 116.0 ns per iteration).

If no functions are passed as funs, calling printBenchmarks prints the previously registered per-module benchmarks. Refer to scheduleForBenchmarking.
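
The sketch below shows that mode of use; it assumes (an assumption based on the reference above, not a documented guarantee) that scheduleForBenchmarking is a string to be mixed in at module scope and that it picks up functions following the benchmark_ naming convention:

// In a module whose benchmarks should be registered. Assumption: a
// string mixin of scheduleForBenchmarking performs the registration.
void benchmark_arrayCreation()
{
	new char[32];
}

mixin(scheduleForBenchmarking);

// Elsewhere, with no funs given, the registered benchmarks are printed:
printBenchmarks!()();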
