Reference
Index
Base.join
BenchmarkProfiles.performance_profile
SolverBenchmark.LTXformat
SolverBenchmark.MDformat
SolverBenchmark.bmark_results_to_dataframes
SolverBenchmark.bmark_solvers
SolverBenchmark.count_unique
SolverBenchmark.format_table
SolverBenchmark.gradient_highlighter
SolverBenchmark.judgement_results_to_dataframes
SolverBenchmark.latex_table
SolverBenchmark.load_stats
SolverBenchmark.markdown_table
SolverBenchmark.passfail_highlighter
SolverBenchmark.passfail_latex_highlighter
SolverBenchmark.pretty_latex_stats
SolverBenchmark.pretty_stats
SolverBenchmark.profile_package
SolverBenchmark.profile_solvers
SolverBenchmark.profile_solvers
SolverBenchmark.quick_summary
SolverBenchmark.safe_latex_AbstractFloat
SolverBenchmark.safe_latex_AbstractFloat_col
SolverBenchmark.safe_latex_AbstractString
SolverBenchmark.safe_latex_AbstractString_col
SolverBenchmark.safe_latex_Signed
SolverBenchmark.safe_latex_Signed_col
SolverBenchmark.safe_latex_Symbol
SolverBenchmark.safe_latex_Symbol_col
SolverBenchmark.save_stats
SolverBenchmark.solve_problems
SolverBenchmark.to_gist
SolverBenchmark.to_gist
Base.join — Method

`df = join(stats, cols; kwargs...)`

Join a dictionary of DataFrames given by `stats`. Column `:id` is required in all DataFrames. The resulting DataFrame will have column `id` and all columns `cols` for each solver.

Inputs:

- `stats::Dict{Symbol,DataFrame}`: dictionary of DataFrames per solver. Each key is a different solver;
- `cols::Array{Symbol}`: which columns of the DataFrames to keep.

Keyword arguments:

- `invariant_cols::Array{Symbol,1}`: invariant columns to be added, i.e., columns that don't change depending on the solver (such as the problem name, number of variables, etc.);
- `hdr_override::Dict{Symbol,String}`: override header names.

Output:

- `df::DataFrame`: the resulting DataFrame.
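
For example, a minimal sketch assuming `stats` holds the results of two solvers and that each DataFrame has `name`, `status`, and `elapsed_time` columns (hypothetical column choice):

```julia
using SolverBenchmark

# Keep :status and :elapsed_time from every solver; :name does not depend on the solver.
df = join(stats, [:status, :elapsed_time],
          invariant_cols = [:name],
          hdr_override = Dict(:elapsed_time => "time"))
```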
BenchmarkProfiles.performance_profile — Method

`performance_profile(stats, cost, args...; b = PlotsBackend(), kwargs...)`

Produce a performance profile comparing the solvers in `stats` using the `cost` function.

Inputs:

- `stats::Dict{Symbol,DataFrame}`: pairs of `:solver => df`;
- `cost::Function`: cost function applied to each `df`. It should return a vector with the cost of solving the problem at each row:
  - 0 cost is not allowed;
  - if the solver did not solve the problem, return Inf or a negative number.
- `b::BenchmarkProfiles.AbstractBackend`: backend used for the plot.

Examples of cost functions:

- `cost(df) = df.elapsed_time`: simple `elapsed_time` cost. Assumes the solver solved the problem.
- `cost(df) = (df.status .!= :first_order) * Inf + df.elapsed_time`: takes the status of the solver into consideration.
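
For example, a minimal sketch reusing the second cost function above; it assumes each DataFrame in `stats` has `status` and `elapsed_time` columns, and that a plotting package is loaded for the default backend:

```julia
using SolverBenchmark
# using Plots   # may be required for the default PlotsBackend

# Inf cost when the final status is not :first_order, elapsed time otherwise.
cost(df) = (df.status .!= :first_order) * Inf + df.elapsed_time

p = performance_profile(stats, cost)
```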
SolverBenchmark.LTXformat — Function

`LTXformat(x)`

Format `x` according to its type. For types `Signed`, `AbstractFloat`, `AbstractString` and `Symbol`, use a predefined formatting string passed to `@sprintf` and then the corresponding `safe_latex_<type>` function.

For type `Missing`, return "NA".
SolverBenchmark.MDformat — Function

`MDformat(x)`

Format `x` according to its type. For types `Signed`, `AbstractFloat`, `AbstractString` and `Symbol`, use a predefined formatting string passed to `@sprintf`.

For type `Missing`, return "NA".
SolverBenchmark.bmark_results_to_dataframes — Method

`stats = bmark_results_to_dataframes(results)`

Convert PkgBenchmark results to a dictionary of `DataFrame`s. The benchmark SUITE should have been constructed in the form

```julia
SUITE[solver][case] = ...
```

where `solver` will be recorded as one of the solvers to be compared in the DataFrame and `case` is a test case. For example:

```julia
SUITE["CG"]["BCSSTK09"] = @benchmarkable ...
SUITE["LBFGS"]["ROSENBR"] = @benchmarkable ...
```

Inputs:

- `results::BenchmarkResults`: the result of `PkgBenchmark.benchmarkpkg`.

Output:

- `stats::Dict{Symbol,DataFrame}`: a dictionary of `DataFrame`s containing the benchmark results per solver.
SolverBenchmark.bmark_solvers — Method

`bmark_solvers(solvers :: Dict{Symbol,Any}, args...; kwargs...)`

Run a set of solvers on a set of problems.

Arguments

- `solvers`: a dictionary of solvers to which each problem should be passed;
- other positional arguments accepted by `solve_problems`, except for a solver name.

Keyword arguments

Any keyword argument accepted by `solve_problems`.

Return value

A `Dict{Symbol, AbstractExecutionStats}` of statistics.
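
A minimal sketch, where `solver_a`, `solver_b`, and `problems` are hypothetical: the solvers are functions accepting a problem, and `problems` is an iterable of `AbstractNLPModel`s as described in `solve_problems`:

```julia
using SolverBenchmark

solvers = Dict(:solver_a => solver_a, :solver_b => solver_b)  # hypothetical solver functions
stats = bmark_solvers(solvers, problems)                      # one entry per solver key
```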
SolverBenchmark.count_unique — Method

`vals = count_unique(X)`

Count the number of occurrences of each value in `X`.

Arguments

- `X`: an iterable.

Return value

A `Dict{eltype(X),Int}` whose keys are the unique elements in `X` and whose values are their number of occurrences.

Example: the snippet

```julia
stats = load_stats("mystats.jld2")
for solver ∈ keys(stats)
  @info "$solver statuses" count_unique(stats[solver].status)
end
```

displays the number of occurrences of each final status for each solver in `stats`.
SolverBenchmark.format_table — Method

`format_table(df, formatter, kwargs...)`

Format the data frame into a table using `formatter`. Used by other table functions.

Inputs:

- `df::DataFrame`: DataFrame of a solver. Each row is a problem.
- `formatter::Function`: a function that formats its input according to its type. See `LTXformat` or `MDformat` for examples.

Keyword arguments:

- `cols::Array{Symbol}`: which columns of `df` to include. Defaults to using all columns;
- `ignore_missing_cols::Bool`: if `true`, filters out the columns in `cols` that don't exist in the data frame. Useful when creating tables for solvers in a loop where one solver has a column the other doesn't. If `false`, throws `BoundsError` in that situation;
- `fmt_override::Dict{Symbol,Function}`: overrides the format of a specific column, such as `fmt_override=Dict(:name => x->@sprintf("%-10s", x))`;
- `hdr_override::Dict{Symbol,String}`: overrides header names, such as `hdr_override=Dict(:name => "Name")`.

Outputs:

- `header::Array{String,1}`: header vector;
- `table::Array{String,2}`: formatted table.
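
For instance, a sketch assuming `df` is a single solver's DataFrame containing the columns listed in `cols`:

```julia
using SolverBenchmark

header, table = format_table(df, MDformat,
                             cols = [:name, :status, :elapsed_time],
                             hdr_override = Dict(:elapsed_time => "time"))
```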
SolverBenchmark.gradient_highlighter — Method

`hl = gradient_highlighter(df, col; cmap=:coolwarm)`

A PrettyTables highlighter that applies a color gradient to the values in the column given by `col`.

Input Arguments

- `df::DataFrame`: the DataFrame to which the highlighter will be applied;
- `col::Symbol`: a symbol indicating which column the highlighter will be applied to.

Keyword Arguments

- `cmap::Symbol`: the color scheme to use, from ColorSchemes.
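
For instance, a sketch that shades the `elapsed_time` column when printing to the screen (the column name is an assumption):

```julia
using SolverBenchmark

hl = gradient_highlighter(df, :elapsed_time)
markdown_table(stdout, df, hl = hl)   # hl is only used for screen output
```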
SolverBenchmark.judgement_results_to_dataframes — Method

`stats = judgement_results_to_dataframes(judgement)`

Convert `BenchmarkJudgement` results to a dictionary of `DataFrame`s.

Inputs:

- `judgement::BenchmarkJudgement`: the result of, e.g.,

```julia
commit = benchmarkpkg(mypkg)        # benchmark a commit or pull request
main = benchmarkpkg(mypkg, "main")  # baseline benchmark
judgement = judge(commit, main)
```

Output:

- `stats::Dict{Symbol,Dict{Symbol,DataFrame}}`: a dictionary of `Dict{Symbol,DataFrame}`s containing the target and baseline benchmark results. The elements of this dictionary are the same as those returned by `bmark_results_to_dataframes(main)` and `bmark_results_to_dataframes(commit)`.
SolverBenchmark.latex_table — Method

`latex_table(io, df, kwargs...)`

Create a LaTeX longtable of a DataFrame using LaTeXTabulars, and format the output for a publication-ready table.

Inputs:

- `io::IO`: where to send the table, e.g.:

```julia
open("file.tex", "w") do io
  latex_table(io, df)
end
```

If left out, `io` defaults to `stdout`.

- `df::DataFrame`: DataFrame of a solver. Each row is a problem.

Keyword arguments:

- `cols::Array{Symbol}`: which columns of `df` to include. Defaults to using all columns;
- `ignore_missing_cols::Bool`: if `true`, filters out the columns in `cols` that don't exist in the data frame. Useful when creating tables for solvers in a loop where one solver has a column the other doesn't. If `false`, throws `BoundsError` in that situation;
- `fmt_override::Dict{Symbol,Function}`: overrides the format of a specific column, such as `fmt_override=Dict(:name => x -> @sprintf("\\textbf{%s}", x) |> safe_latex_AbstractString)`;
- `hdr_override::Dict{Symbol,String}`: overrides header names, such as `hdr_override=Dict(:name => "Name")`, where LaTeX escaping should be used if necessary.

We recommend using the `safe_latex_<type>` functions when overriding formats, unless you're sure you don't need them.
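
For example, a minimal sketch writing one longtable per solver to a single file (`stats` and the chosen columns are assumptions):

```julia
using SolverBenchmark

open("results.tex", "w") do io
  for (solver, df) in stats
    latex_table(io, df, cols = [:name, :status, :objective, :elapsed_time])
  end
end
```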
SolverBenchmark.load_stats — Method

`stats = load_stats(filename; kwargs...)`

Arguments

- `filename::AbstractString`: the input file name.

Keyword arguments

- `key::String="stats"`: the key under which the data can be read in `filename`. The key should be the same as the one used when `save_stats` was called.

Return value

A `Dict{Symbol,DataFrame}` containing the statistics stored in file `filename`. The user should `import DataFrames` before calling `load_stats`.
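
For example (the file name is hypothetical and should match one written by `save_stats`):

```julia
using DataFrames, SolverBenchmark

stats = load_stats("mystats.jld2", key = "stats")
```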
SolverBenchmark.markdown_table — Method

`markdown_table(io, df, kwargs...)`

Create a Markdown table from a DataFrame using PrettyTables and format the output.

Inputs:

- `io::IO`: where to send the table, e.g.:

```julia
open("file.md", "w") do io
  markdown_table(io, df)
end
```

If left out, `io` defaults to `stdout`.

- `df::DataFrame`: DataFrame of a solver. Each row is a problem.

Keyword arguments:

- `hl`: a highlighter or tuple of highlighters to color individual cells (when output to screen). By default, we use a simple `passfail_highlighter`;
- all other keyword arguments are passed directly to `format_table`.
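
For example, a sketch printing a subset of columns to the screen (the column names are assumptions; `cols` is forwarded to `format_table`):

```julia
using SolverBenchmark

markdown_table(stdout, df, cols = [:name, :status, :elapsed_time])
```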
SolverBenchmark.passfail_highlighter — Function

`hl = passfail_highlighter(df, c=crayon"bold red")`

A PrettyTables highlighter that colors failures in bold red by default.

Input Arguments

- `df::DataFrame`: the DataFrame to which the highlighter will be applied. `df` must have the `id` column.

If `df` has the `:status` property, the highlighter will be applied to rows for which `df.status` indicates a failure. A failure is any status different from `:first_order` or `:unbounded`.
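
For instance, a sketch that passes the highlighter to `pretty_stats`, which forwards it to `pretty_table`; the PrettyTables keyword name `highlighters` is an assumption about that forwarding:

```julia
using SolverBenchmark

hl = passfail_highlighter(df)
pretty_stats(df, highlighters = hl)
```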
SolverBenchmark.passfail_latex_highlighter — Function

`hl = passfail_latex_highlighter(df)`

A PrettyTables LaTeX highlighter that colors failures in bold red by default.

See the documentation of `passfail_highlighter` for more information.
SolverBenchmark.pretty_latex_stats — Method

`pretty_latex_stats(df; kwargs...)`

Pretty-print a DataFrame as a LaTeX longtable using PrettyTables.

See the `pretty_stats` documentation. Specific settings in this method are:

- the backend is set to `:latex`;
- the table type is set to `:longtable`;
- highlighters, if any, should be LaTeX highlighters.

See the PrettyTables documentation for more information.
SolverBenchmark.pretty_stats — Method

`pretty_stats(df; kwargs...)`

Pretty-print a DataFrame using PrettyTables.

Arguments

- `io::IO`: an IO stream to which the table will be output (default: `stdout`);
- `df::DataFrame`: the DataFrame to be displayed. If only certain columns of `df` should be displayed, they should be extracted explicitly, e.g., by passing `df[!, [:col1, :col2, :col3]]`.

Keyword Arguments

- `col_formatters::Dict{Symbol, String}`: a Dict of format strings to apply to selected columns of `df`. The keys of `col_formatters` should be symbols, so that specific formatting can be applied to specific columns. By default, `default_formatters` is used, based on the column type. If PrettyTables formatters are passed using the `formatters` keyword argument, they are applied before those in `col_formatters`;
- `hdr_override::Dict{Symbol, String}`: a Dict of those headers that should be displayed differently than simply according to the column name (default: empty). Example: `Dict(:col1 => "column 1")`.

All other keyword arguments are passed directly to `pretty_table`. In particular,

- use `tf=tf_markdown` to display a Markdown table;
- do not use this function for LaTeX output; use `pretty_latex_stats` instead;
- any PrettyTables highlighters can be given, but see the predefined `passfail_highlighter` and `gradient_highlighter`.
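
For example, a minimal sketch assuming `df` has `name`, `status`, and `elapsed_time` columns and that `col_formatters` takes `@sprintf`-style format strings:

```julia
using DataFrames, PrettyTables, SolverBenchmark

pretty_stats(df[!, [:name, :status, :elapsed_time]],
             col_formatters = Dict(:elapsed_time => "%8.2e"),
             hdr_override = Dict(:elapsed_time => "time (s)"),
             tf = tf_markdown)   # Markdown table, as noted above
```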
SolverBenchmark.profile_package — Method

`p = profile_package(judgement)`

Produce performance profiles based on `PkgBenchmark.BenchmarkJudgement` results.

Inputs:

- `judgement::BenchmarkJudgement`: the result of, e.g.,

```julia
commit = benchmarkpkg(mypkg)        # benchmark a commit or pull request
main = benchmarkpkg(mypkg, "main")  # baseline benchmark
judgement = judge(commit, main)
```
SolverBenchmark.profile_solvers — Method

```julia
p = profile_solvers(stats, costs, costnames;
                    width = 400, height = 400,
                    b = PlotsBackend(), kwargs...)
```

Produce performance profiles comparing the solvers based on the data in `stats`.

Inputs:

- `stats::Dict{Symbol,DataFrame}`: a dictionary of `DataFrame`s containing the benchmark results per solver (e.g., produced by `bmark_results_to_dataframes()`);
- `costs::Vector{Function}`: a vector of functions specifying the measures to use in the profiles;
- `costnames::Vector{String}`: names to be used as titles of the profiles.

Keyword inputs:

- `width::Int`: width of each individual plot (default: 400);
- `height::Int`: height of each individual plot (default: 400);
- `b::BenchmarkProfiles.AbstractBackend`: backend used for the plot.

Additional `kwargs` are passed to the `plot` call.

Output: a Plots.jl plot representing a set of performance profiles comparing the solvers. The set contains performance profiles comparing all the solvers together on the measures given in `costs`. If there are more than two solvers, additional profiles are produced comparing the solvers two by two on each cost measure.
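
For example, a minimal sketch comparing the solvers on elapsed time and objective evaluations (the `neval_obj` column and the use of Plots for the default backend are assumptions):

```julia
using SolverBenchmark
# using Plots   # may be required for the default PlotsBackend

costs = [df -> (df.status .!= :first_order) * Inf + df.elapsed_time,
         df -> (df.status .!= :first_order) * Inf + df.neval_obj]
costnames = ["elapsed time", "objective evaluations"]

p = profile_solvers(stats, costs, costnames)
```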
SolverBenchmark.profile_solvers — Method

`p = profile_solvers(results)`

Produce performance profiles based on `PkgBenchmark.benchmarkpkg` results.

Inputs:

- `results::BenchmarkResults`: the result of `PkgBenchmark.benchmarkpkg`.
SolverBenchmark.quick_summary — Method

`statuses, avgs = quick_summary(stats; kwargs...)`

Call `count_unique` and compute a few average measures for each solver in `stats`.

Arguments

- `stats::Dict{Symbol,DataFrame}`: benchmark statistics such as returned by `bmark_solvers`.

Keyword arguments

- `cols::Vector{Symbol}`: symbols indicating the `DataFrame` columns in the solver statistics for which we compute averages. Default: `[:iter, :neval_obj, :neval_grad, :neval_hess, :neval_hprod, :elapsed_time]`.

Return value

- `statuses::Dict{Symbol,Dict{Symbol,Int}}`: a dictionary of the number of occurrences of each final status for each solver in `stats`. Each value in this dictionary is returned by `count_unique`;
- `avgs::Dict{Symbol,Dict{Symbol,Float64}}`: a dictionary that contains averages of performance measures across all problems for each solver. Each `avgs[solver]` is a `Dict{Symbol,Float64}` where the measures are those given in the keyword argument `cols` and the values are averages of those measures across all problems.

Example: the snippet

```julia
statuses, avgs = quick_summary(stats)
for solver ∈ keys(stats)
  @info "statistics for" solver statuses[solver] avgs[solver]
end
```

displays a quick summary and averages for each solver.
SolverBenchmark.safe_latex_AbstractFloat — Method

`safe_latex_AbstractFloat(s::AbstractString)`

Format the string representation of floats for output in a LaTeX table. Replaces infinite values with the `\infty` LaTeX sequence. If the float is represented in exponential notation, the mantissa and exponent are wrapped in math delimiters. Otherwise, the entire float is wrapped in math delimiters.
SolverBenchmark.safe_latex_AbstractFloat_col — Method

`safe_latex_AbstractFloat_col(col::Integer)`

Generate a PrettyTables LaTeX formatter for real numbers.
SolverBenchmark.safe_latex_AbstractString — Method

`safe_latex_AbstractString(s::AbstractString)`

Format a string for output in a LaTeX table. Escapes underscores.
SolverBenchmark.safe_latex_AbstractString_col — Method

`safe_latex_AbstractString_col(col::Integer)`

Generate a PrettyTables LaTeX formatter for strings. Replaces `_` with `\_`.
SolverBenchmark.safe_latex_Signed — Method

`safe_latex_Signed(s::AbstractString)`

Format the string representation of signed integers for output in a LaTeX table. Encloses `s` in `\(` and `\)`.
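
For instance, based on the description above, the call below should enclose the digits in math delimiters:

```julia
safe_latex_Signed("100")   # expected: "\\(100\\)", i.e., \(100\) in the LaTeX source
```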
SolverBenchmark.safe_latex_Signed_col — Method

`safe_latex_Signed_col(col::Integer)`

Generate a PrettyTables LaTeX formatter for signed integers.
SolverBenchmark.safe_latex_Symbol — Method

`safe_latex_Symbol(s)`

Format a symbol for output in a LaTeX table. Calls `safe_latex_AbstractString(string(s))`.
SolverBenchmark.safe_latex_Symbol_col — Method

`safe_latex_Symbol_col(col::Integer)`

Generate a PrettyTables LaTeX formatter for symbols.
SolverBenchmark.save_stats — Method

`save_stats(stats, filename; kwargs...)`

Write the benchmark statistics `stats` to a file named `filename`.

Arguments

- `stats::Dict{Symbol,DataFrame}`: benchmark statistics such as returned by `bmark_solvers`;
- `filename::AbstractString`: the output file name.

Keyword arguments

- `force::Bool=false`: whether to overwrite `filename` if it already exists;
- `key::String="stats"`: the key under which the data can be read from `filename` later.

Return value

This method throws an error if `filename` exists and `force==false`. On success, it returns the value of `jldopen(filename, "w")`.
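
For example, pairing `save_stats` with `load_stats` (the file name is hypothetical):

```julia
using SolverBenchmark

save_stats(stats, "mystats.jld2", force = true)
stats2 = load_stats("mystats.jld2")
```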
SolverBenchmark.solve_problems — Method

`solve_problems(solver, solver_name, problems; kwargs...)`

Apply a solver to a set of problems.

Arguments

- `solver`: the function name of a solver;
- `solver_name`: name of the solver;
- `problems`: the set of problems to pass to the solver, as an iterable of `AbstractNLPModel`. It is recommended to use a generator expression (necessary for CUTEst problems).

Keyword arguments

- `solver_logger::AbstractLogger`: logger wrapping the solver call (default: `NullLogger`);
- `reset_problem::Bool`: reset the problem's counters before solving (default: `true`);
- `skipif::Function`: a function applied to a problem that returns whether to skip it (default: `x -> false`);
- `colstats::Vector{Symbol}`: summary statistics for the logger to output during the benchmark (default: `[:name, :nvar, :ncon, :status, :elapsed_time, :objective, :dual_feas, :primal_feas]`);
- `info_hdr_override::Dict{Symbol,String}`: header overrides for the summary statistics (default: use default headers);
- `prune`: do not include skipped problems in the final statistics (default: `true`);
- any other keyword argument to be passed to the solver.

Return value

- a `DataFrame` where each row is a problem, minus the skipped ones if `prune` is true.
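
A minimal sketch, where `my_solver`, `build_problem`, and `problem_names` are hypothetical; the generator expression follows the recommendation above, and the `skipif` filter assumes the NLPModels `meta.ncon` field:

```julia
using SolverBenchmark

# build_problem(name) is assumed to return an AbstractNLPModel
problems = (build_problem(name) for name in problem_names)
df = solve_problems(my_solver, "my_solver", problems,
                    skipif = p -> p.meta.ncon > 0)   # skip constrained problems
```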
SolverBenchmark.to_gist — Method

`posted_gist = to_gist(results)`

Create and post a gist with the benchmark results and performance profiles.

Inputs:

- `results::BenchmarkResults`: the result of `PkgBenchmark.benchmarkpkg`.

Output:

- the return value of GitHub.jl's `create_gist`.
SolverBenchmark.to_gist — Method

`posted_gist = to_gist(results, p)`

Create and post a gist with the benchmark results and performance profiles.

Inputs:

- `results::BenchmarkResults`: the result of `PkgBenchmark.benchmarkpkg`;
- `p`: the result of `profile_solvers`.

Output:

- the return value of GitHub.jl's `create_gist`.