SYNOPSIS

See the bencher CLI.

DESCRIPTION

EARLY WORK, LOTS OF UNIMPLEMENTED STUFF.

Bencher is a benchmark framework. It helps you:

*   specify what Perl code (functions/module names or coderefs) or external
    commands you want to benchmark, along with a set of data (function or
    command-line arguments)

*   run the items

    You can run all the items or only some of them, with some or all
    combinations of arguments, with different module paths/versions,
    different perl paths, and so on.

*   save the results

*   display the results and graph them

*   send the results to a server

SCENARIO

The core data structure that you need to prepare is the scenario. It is a
DefHash (i.e. just a regular Perl hash); the two most important keys of this
hash are participants and datasets. An example scenario (from
Bencher::Scenario::Example):

    package Bencher::Scenario::Example;

    our $scenario = {
        participants => [
            {fcall_template => q[Text::Wrap::wrap('', '', <text>)]},
        ],
        datasets => [
            { name=>"foobar x100",   args => {text=>"foobar " x 100} },
            { name=>"foobar x1000",  args => {text=>"foobar " x 1000} },
            { name=>"foobar x10000", args => {text=>"foobar " x 10000} },
        ],
    };

    1;

participants

participants (array) lists the Perl code (or external commands) that we want
to benchmark. Instead of just a list of coderefs like what Benchmark
expects, you can use fcall_template: a string containing function-call code.
From this value, Bencher can extract the name of the module and function
used (and can help you load the modules, benchmark the startup overhead of
all involved modules, and so on). The template can also contain variables
enclosed in angle brackets, like <text>, which will be replaced with actual
data/values later. You can also add a name key to a participant so you can
refer to it more easily later, e.g.:

    participants => [
        {name=>'pp', fcall_template=>'List::MoreUtils::PP::uniq(@{<array>})'},
        {name=>'xs', fcall_template=>'List::MoreUtils::XS::uniq(@{<array>})'},
    ],

Aside from fcall_template, you can also use code_template (a string
containing arbitrary code) or code (a subroutine reference, just like what
you would provide to the Benchmark module).

Or, if you are benchmarking external commands, you specify cmdline (an array
of strings, or a single string) instead. An array cmdline will not use the
shell, while the string version will. See Bencher::Scenario::Interpreters.
Both Perl and command styles are illustrated in the sketches after this
section.

Other properties you can add to a participant:

*   name (str)

    From DefHash.

*   summary (str)

    From DefHash.

*   description (str)

    From DefHash.

*   tags (array of str)

    From DefHash. Define tag(s) for this participant. Can be used to
    include/exclude groups of participants having the same tags.

*   module (str)

*   function (str)

*   fcall_template (str)

*   result_is_list (bool, default 0)

*   include_by_default (bool, default true)

    Can be set to false if you want to exclude the participant by default
    when running benchmarks, unless the participant is explicitly included.
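To make the Perl participant styles above concrete, here is a minimal
sketch of a scenario mixing the three styles. The package name, participant
names, and the tasks being benchmarked are illustrative only, not taken
from the Bencher distribution:

    package Bencher::Scenario::IllustrateStyles;

    use strict;
    use warnings;

    # A sketch only: one participant per Perl-code style discussed above.
    our $scenario = {
        summary => 'Illustrate code, code_template, and fcall_template',
        participants => [
            # code: a plain coderef, like what Benchmark itself accepts;
            # it does not get template variables substituted into it
            { name => 'code',  code => sub { my @x = (1 .. 1000) } },

            # code_template: arbitrary code as a string; <n> is replaced
            # with the dataset's value of n before benchmarking
            { name => 'tmpl',  code_template => q[my @x = (1 .. <n>)] },

            # fcall_template: a function call; Bencher can extract the
            # module (List::Util) and function (sum) from it
            { name => 'fcall', fcall_template => q[List::Util::sum(1 .. <n>)] },
        ],
        datasets => [
            { name => 'n=1000', args => { n => 1000 } },
        ],
    };

    1;

Such a module can then be selected and run with the bencher CLI mentioned
in the SYNOPSIS.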
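And here is a hedged fragment for external-command participants,
contrasting the array form (no shell) with the string form (shell), in the
spirit of Bencher::Scenario::Interpreters; the commands themselves are
illustrative:

    participants => [
        # array form: executed directly, no shell involved
        { name => 'perl-array',  cmdline => ['perl', '-e', '1'] },

        # string form: run through the shell, so shell syntax such as
        # redirection is available
        { name => 'perl-string', cmdline => 'perl -e1 2>/dev/null' },
    ],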
datasets

datasets (array) lists the function inputs (or command-line arguments). You
can name each dataset too, to be able to refer to it more easily.

Other properties you can add to a dataset:

*   name (str)

    From DefHash.

*   summary (str)

    From DefHash.

*   description (str)

    From DefHash.

*   tags (array of str)

    From DefHash. Define tag(s) for this dataset. Can be used to
    include/exclude groups of datasets having the same tags.

*   args (hash)

    Example:

        {filename=>"ujang.txt", size=>10}

    You can supply multiple values for an argument by adding an @ suffix to
    the argument name, for example:

        {filename=>"ujang.txt", 'size@'=>[10, 100, 1000]}

    This means that for each participant mentioning size, three benchmark
    items will be generated, one for each value of size. (A combined sketch
    using this feature appears at the end of this document.)

*   argv (array)

*   include_by_default (bool, default 1)

    Can be set to false if you want to exclude the dataset by default when
    running benchmarks, unless the dataset is explicitly included.

*   include_participant_tags (array of str)

    Only include participants having all these tags.

*   exclude_participant_tags (array of str)

    Exclude participants having any of these tags.

Other properties

Other known scenario properties (keys):

*   name

    From DefHash. Scenario name (usually short and one word).

*   summary

    From DefHash. A one-line plaintext summary.

*   description (str)

    From DefHash. A longer description, in Markdown.

*   on_failure (str, "skip"|"die")

    The default is "die". When set to "skip", the code of each item will
    first be run before benchmarking; if a command failure or Perl
    exception is trapped, the item is skipped. Can be overridden in the CLI
    with the --on-failure option.

*   before_gen_items (code)

    If specified, this code will be called before generating items. You can
    use this hook to, e.g., generate datasets dynamically. The code will be
    given a hash argument with the following keys: hook_name (str, set to
    before_gen_items), scenario, stash (hash, which you can use to pass
    data between hooks).

*   before_bench (code)

    If specified, this code will be called before starting the benchmark.
    The code will be given a hash argument with the following keys:
    hook_name (str, set to before_bench), scenario, stash.

*   after_bench (code)

    If specified, this code will be called after completing the benchmark.
    You can use this hook to, e.g., do some custom formatting/modification
    of the result. The code will be given a hash argument with the
    following keys: hook_name (str, set to after_bench), scenario, stash,
    result (array, enveloped result). (The hooks are illustrated in the
    sketch at the end of this document.)

*   before_return (code)

    If specified, this code will be called before displaying/returning the
    result. You can use this hook to, e.g., modify the result in some way.
    The code will be given a hash argument with the following keys:
    hook_name (str, set to before_return), scenario, stash, result.

SEE ALSO

bencher

BenchmarkAnything. There is a lot of overlap between the goals of Bencher
and this project. I hope to reuse or interoperate with parts of
BenchmarkAnything, e.g. storing Bencher results in a BenchmarkAnything
storage backend, sending Bencher results to a BenchmarkAnything HTTP
server, and so on.

Benchmark, Benchmark::Dumb (Dumbbench).

Bencher::Scenario::* for examples of scenarios.
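To close, here is a hedged, minimal sketch tying the pieces together: it
combines multi-value args (the @ suffix) with the before_bench/after_bench
hooks described under "Other properties". All names and values are
illustrative only:

    package Bencher::Scenario::SumSizes;

    use strict;
    use warnings;

    # A sketch only. 'size@' should expand into three benchmark items for
    # the single participant below, one per value of <size>; each hook
    # receives a hash argument (hook_name, scenario, stash, ...) as
    # described above.
    our $scenario = {
        summary => 'Illustrate multi-value args and hooks',
        participants => [
            { name => 'sum', fcall_template => q[List::Util::sum(1 .. <size>)] },
        ],
        datasets => [
            { name => 'sizes', args => { 'size@' => [10, 100, 1000] } },
        ],
        before_bench => sub {
            my %args = @_;                      # hook_name, scenario, stash
            $args{stash}{started_at} = time();  # pass data to later hooks
        },
        after_bench => sub {
            my %args = @_;                      # ... plus result
            $args{stash}{elapsed} = time() - $args{stash}{started_at};
        },
    };

    1;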