pub struct Criterion { /* private fields */ }
The benchmark manager
Criterion lets you configure and execute benchmarks.
Each benchmark consists of four phases:
- Warm-up: The routine is repeatedly executed to let the CPU/OS/JIT/interpreter adapt to the new load
- Measurement: The routine is repeatedly executed, and timing information is collected into a sample
- Analysis: The sample is analyzed and distilled into meaningful statistics that get reported to stdout, stored in files, and plotted
- Comparison: The current sample is compared with the sample obtained in the previous benchmark.
Implementations§
impl Criterion
pub fn sample_size(self, n: usize) -> Criterion
Changes the default size of the sample for benchmarks run with this runner.
A bigger sample should yield more accurate results if paired with a sufficiently large measurement time.
Sample size must be at least 2.
§Panics
Panics if set to zero or one
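§Example
A minimal sketch; the sample size shown is illustrative:

use criterion::Criterion;

// Collect 200 measurements per benchmark instead of the default 100.
let criterion = Criterion::default().sample_size(200);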
pub fn warm_up_time(self, dur: Duration) -> Criterion
Changes the default warm up time for benchmarks run with this runner.
§Panics
Panics if the input duration is zero
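§Example
A brief sketch; the duration is illustrative:

use std::time::Duration;
use criterion::Criterion;

// Warm up each benchmark for 5 seconds before measuring.
let criterion = Criterion::default().warm_up_time(Duration::from_secs(5));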
pub fn measurement_time(self, dur: Duration) -> Criterion
Changes the default measurement time for benchmarks run with this runner.
With a longer time, the measurement will become more resilient to transitory peak loads caused by external programs.
Note: If the measurement time is too “low”, Criterion will automatically increase it.
§Panics
Panics if the input duration is zero
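§Example
A brief sketch; the duration is illustrative:

use std::time::Duration;
use criterion::Criterion;

// Spend 10 seconds collecting measurements for each benchmark.
let criterion = Criterion::default().measurement_time(Duration::from_secs(10));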
pub fn nresamples(self, n: usize) -> Criterion
Changes the default number of resamples for benchmarks run with this runner.
These resamples are used for the bootstrap.
A larger number of resamples reduces the random sampling errors, which are inherent to the bootstrap method, but also increases the analysis time.
§Panics
Panics if the number of resamples is set to zero
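§Example
A brief sketch; the resample count is illustrative:

use criterion::Criterion;

// Trade longer analysis time for tighter bootstrap estimates.
let criterion = Criterion::default().nresamples(200_000);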
pub fn noise_threshold(self, threshold: f64) -> Criterion
Changes the default noise threshold for benchmarks run with this runner.
This threshold is used to decide whether an increase of X% in the execution time is considered significant or should be flagged as noise.
Note: A value of 0.02 is equivalent to 2%.
§Panics
Panics if the threshold is set to a negative value
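§Example
A brief sketch; with this setting, changes below 5% are treated as noise:

use criterion::Criterion;

// Flag execution-time changes under 5% as noise.
let criterion = Criterion::default().noise_threshold(0.05);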
pub fn confidence_level(self, cl: f64) -> Criterion
Changes the default confidence level for benchmarks run with this runner.
The confidence level is used to calculate the confidence intervals of the estimated statistics.
§Panics
Panics if the confidence level is set to a value outside the (0, 1) range
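§Example
A brief sketch; the level is illustrative:

use criterion::Criterion;

// Report 99% confidence intervals instead of the default 95%.
let criterion = Criterion::default().confidence_level(0.99);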
pub fn significance_level(self, sl: f64) -> Criterion
Changes the default significance level for benchmarks run with this runner.
The significance level is used for hypothesis testing.
§Panics
Panics if the significance level is set to a value outside the (0, 1) range
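§Example
A brief sketch; the level is illustrative:

use criterion::Criterion;

// Require stronger evidence before reporting a performance change.
let criterion = Criterion::default().significance_level(0.01);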
pub fn with_plots(self) -> Criterion
Enables plotting
pub fn without_plots(self) -> Criterion
Disables plotting
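§Example
A brief sketch; disabling plots can be useful on machines without gnuplot:

use criterion::Criterion;

// Skip plot generation entirely.
let criterion = Criterion::default().without_plots();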
pub fn save_baseline(self, baseline: String) -> Criterion
Names an explicit baseline and enables overwriting the previous results.
pub fn retain_baseline(self, baseline: String) -> Criterion
Names an explicit baseline and disables overwriting the previous results.
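§Example
A brief sketch of both baseline modes; the baseline name "master" is illustrative:

use criterion::Criterion;

// Record results under the "master" baseline, overwriting any previous run.
let save = Criterion::default().save_baseline("master".to_string());

// Compare against the saved "master" baseline without overwriting it.
let compare = Criterion::default().retain_baseline("master".to_string());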
pub fn with_filter<S: Into<String>>(self, filter: S) -> Criterion
Filters the benchmarks. Only benchmarks with names that contain the given string will be executed.
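§Example
A brief sketch; the filter string "fib" is illustrative:

use criterion::Criterion;

// Only benchmarks whose names contain "fib" will be executed.
let criterion = Criterion::default().with_filter("fib");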
pub fn configure_from_args(self) -> Criterion
Configures this Criterion struct based on the command-line arguments passed to this process.
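§Example
A sketch of a hand-rolled harness; the criterion_group!/criterion_main! macros normally make this call for you, and the benchmark shown is illustrative:

use criterion::Criterion;

fn main() {
    let mut criterion = Criterion::default().configure_from_args();
    criterion.bench_function("trivial", |b| b.iter(|| 1 + 1));
}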
pub fn bench_function<F>(&mut self, id: &str, f: F) -> &mut Criterion
Benchmarks a function
§Example
#[macro_use]
extern crate criterion;
use criterion::Criterion;

fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    c.bench_function(
        "function_name",
        |b| b.iter(|| {
            // Code to benchmark goes here
        }),
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);
pub fn bench_functions<I>(
    &mut self,
    id: &str,
    funs: Vec<Fun<I>>,
    input: I,
) -> &mut Criterion
where
    I: Debug + 'static,
Benchmarks multiple functions
All functions get the same input and are compared against the other implementations.
Works similarly to bench_function, but with multiple functions.
§Example
#[macro_use]
extern crate criterion;
use criterion::{Bencher, Criterion, Fun};

// `seq_fib` and `par_fib` are the implementations under
// comparison (definitions omitted here).
fn bench_seq_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        seq_fib(i);
    });
}

fn bench_par_fib(b: &mut Bencher, i: &u32) {
    b.iter(|| {
        par_fib(i);
    });
}

fn bench(c: &mut Criterion) {
    let sequential_fib = Fun::new("Sequential", bench_seq_fib);
    let parallel_fib = Fun::new("Parallel", bench_par_fib);
    let funs = vec![sequential_fib, parallel_fib];
    c.bench_functions("Fibonacci", funs, 14);
}

criterion_group!(benches, bench);
criterion_main!(benches);
pub fn bench_function_over_inputs<I, F>(
    &mut self,
    id: &str,
    f: F,
    inputs: I,
) -> &mut Criterion
Benchmarks a function under various inputs
This is a convenience method to execute several related benchmarks. Each benchmark will receive the id ${id}/${input}.
§Example
#[macro_use]
extern crate criterion;
use criterion::{Bencher, Criterion};

fn bench(c: &mut Criterion) {
    c.bench_function_over_inputs(
        "from_elem",
        |b: &mut Bencher, size: &usize| {
            b.iter(|| vec![0u8; *size]);
        },
        vec![1024, 2048, 4096],
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);
pub fn bench_program(&mut self, id: &str, program: Command) -> &mut Criterion
👎 Deprecated since 0.2.6: External program benchmarks were rarely used and are awkward to maintain, so they are scheduled for deletion in 0.3.0
Benchmarks an external program
The external program must:
- Read the number of iterations from stdin
- Execute the routine to benchmark that many times
- Print the elapsed time (in nanoseconds) to stdout
// Example of an external program that implements this protocol
use std::io::{self, BufRead};
use std::time::Instant;

fn main() {
    let stdin = io::stdin();
    let stdin = stdin.lock();
    // For each line in stdin
    for line in stdin.lines() {
        // Parse line as the number of iterations
        let iters: u64 = line.unwrap().trim().parse().unwrap();
        // Setup

        // Benchmark
        let start = Instant::now();
        // Execute the routine "iters" times
        for _ in 0..iters {
            // Code to benchmark goes here
        }
        let elapsed = start.elapsed();
        // Teardown

        // Report elapsed time in nanoseconds to stdout
        let nanos = elapsed.as_secs() * 1_000_000_000 + u64::from(elapsed.subsec_nanos());
        println!("{}", nanos);
    }
}
pub fn bench_program_over_inputs<I, F>(
    &mut self,
    id: &str,
    program: F,
    inputs: I,
) -> &mut Criterion
👎 Deprecated since 0.2.6: External program benchmarks were rarely used and are awkward to maintain, so they are scheduled for deletion in 0.3.0
Benchmarks an external program under various inputs
This is a convenience method to execute several related benchmarks. Each benchmark will receive the id ${id}/${input}.
pub fn bench<B: BenchmarkDefinition>(
    &mut self,
    group_id: &str,
    benchmark: B,
) -> &mut Criterion
Executes the given benchmark. Use this variant to execute benchmarks with complex configuration: comparing multiple functions, overriding configuration settings per benchmark, and more. See the Benchmark and ParameterizedBenchmark structs for more information.
#[macro_use]
extern crate criterion;
use criterion::{Benchmark, Criterion};

// `routine_1` and `routine_2` are the functions under test
// (definitions omitted here).
fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    c.bench(
        "routines",
        Benchmark::new("routine_1", |b| b.iter(|| routine_1()))
            .with_function("routine_2", |b| b.iter(|| routine_2()))
            .sample_size(50),
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);
Trait Implementations§
impl Default for Criterion
fn default() -> Criterion
Creates a benchmark manager with the following default settings:
- Sample size: 100 measurements
- Warm-up time: 3 s
- Measurement time: 5 s
- Bootstrap size: 100 000 resamples
- Noise threshold: 0.01 (1%)
- Confidence level: 0.95
- Significance level: 0.05
- Plotting: enabled (if gnuplot is available)
- No filter
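§Example
A one-line sketch:

use criterion::Criterion;

// A benchmark manager with all of the defaults listed above.
let criterion = Criterion::default();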