SolverBenchmark.jl documentation

This package provides general tools for benchmarking solvers, following a few guidelines:

  • The output of a solver's run on a suite of problems is a DataFrame, where each row corresponds to a different problem.
    • Since naming conflicts may arise (e.g., the same problem with different numbers of variables), there must be an ID column.
  • A collection of two or more solver runs (DataFrames) is a Dict{Symbol,DataFrame}, where each key identifies a solver.
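As a minimal sketch of this layout (the column names and values below are hypothetical, assuming only DataFrames.jl), two solver runs over the same problem suite could look like:

```julia
using DataFrames

# Hypothetical results for two solvers on the same two problems.
# The :id column disambiguates problems that may share a name
# (here, the same problem with different numbers of variables).
alpha = DataFrame(id = [1, 2], name = ["ROSENBR", "ROSENBR"],
                  nvar = [2, 10], status = [:first_order, :max_iter])
beta  = DataFrame(id = [1, 2], name = ["ROSENBR", "ROSENBR"],
                  nvar = [2, 10], status = [:first_order, :first_order])

# One DataFrame per solver, keyed by a Symbol naming that solver.
stats = Dict(:alpha => alpha, :beta => beta)
```

Rows with the same `:id` across the two DataFrames can then be compared directly.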

This package was developed with Krylov.jl and JSOSolvers.jl in mind, but it should be general enough to be used with other solvers.

Bug reports and discussions

If you think you have found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.