Default backend and performance in ADNLPModels
As illustrated in the tutorial on backends, ADNLPModels.jl uses a different backend for each method of the NLPModel API that is implemented. By default, it uses the following:
using ADNLPModels, NLPModels
f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
T = Float64
x0 = T[-1.2; 1.0]
lvar, uvar = zeros(T, 2), ones(T, 2) # must be of the same type as `x0`
lcon, ucon = -T[0.5], T[0.5]
c!(cx, x) = begin
cx[1] = x[1] + x[2]
return cx
end
nlp = ADNLPModel!(f, x0, lvar, uvar, c!, lcon, ucon)
get_adbackend(nlp)
ADModelBackend{
ForwardDiffADGradient,
ForwardDiffADHvprod,
ForwardDiffADJprod,
ForwardDiffADJtprod,
SparseADJacobian,
SparseADHessian,
ForwardDiffADGHjvprod,
}
Note that ForwardDiff.jl is used for most methods because it is efficient and stable.
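For illustration, each NLPModels API call on the model dispatches to the corresponding backend listed above. A minimal sketch (using an unconstrained variant of the same Rosenbrock objective for brevity):

```julia
using ADNLPModels, NLPModels

f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
x0 = [-1.2; 1.0]
nlp = ADNLPModel(f, x0)  # unconstrained model, default backends

gx = grad(nlp, x0)  # dispatches to the gradient backend (ForwardDiffADGradient by default)
Hx = hess(nlp, x0)  # dispatches to the Hessian backend (SparseADHessian by default)
```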
Predefined backends
Another way to know the default backends used is to check the constant ADNLPModels.default_backend.
ADNLPModels.default_backend
Dict{Symbol, Type} with 12 entries:
:hprod_residual_backend => ForwardDiffADHvprod
:jprod_backend => ForwardDiffADJprod
:jtprod_backend => ForwardDiffADJtprod
:gradient_backend => ForwardDiffADGradient
:hprod_backend => ForwardDiffADHvprod
:hessian_residual_backend => SparseADHessian
:ghjvprod_backend => ForwardDiffADGHjvprod
:hessian_backend => SparseADHessian
:jprod_residual_backend => ForwardDiffADJprod
:jtprod_residual_backend => ForwardDiffADJtprod
:jacobian_residual_backend => SparseADJacobian
:jacobian_backend => SparseADJacobian
More generally, the package provides several predefined sets of backends in the constant ADNLPModels.predefined_backend:
ADNLPModels.predefined_backend
Dict{Symbol, Dict{Symbol}} with 5 entries:
:zygote => Dict{Symbol, Type}(:hprod_residual_backend=>ForwardDiffADHvprod…
:default => Dict{Symbol, Type}(:hprod_residual_backend=>ForwardDiffADHvprod…
:generic => Dict{Symbol, DataType}(:hprod_residual_backend=>GenericForwardD…
:enzyme => Dict{Symbol, Type}(:hprod_residual_backend=>EnzymeReverseADHvpr…
:optimized => Dict{Symbol, Type}(:hprod_residual_backend=>ReverseDiffADHvprod…
The :optimized set favors the most efficient approach for each method, for instance using ReverseDiff.jl instead of ForwardDiff.jl to compute the gradient.
ADNLPModels.predefined_backend[:optimized]
Dict{Symbol, Type} with 12 entries:
:hprod_residual_backend => ReverseDiffADHvprod
:jprod_backend => ForwardDiffADJprod
:jtprod_backend => ReverseDiffADJtprod
:gradient_backend => ReverseDiffADGradient
:hprod_backend => ReverseDiffADHvprod
:hessian_residual_backend => SparseReverseADHessian
:ghjvprod_backend => ForwardDiffADGHjvprod
:hessian_backend => SparseReverseADHessian
:jprod_residual_backend => ForwardDiffADJprod
:jtprod_residual_backend => ReverseDiffADJtprod
:jacobian_residual_backend => SparseADJacobian
:jacobian_backend => SparseADJacobian
The :generic set relies on backends that make no assumptions on the element type; see Creating an ADNLPModels backend that supports multiple precisions.
These predefined backends can be selected with the keyword argument backend when instantiating the model.
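For instance, here is a sketch combining the :generic set with Float32 data; this assumes the :generic backends accept non-Float64 element types, as the multiple-precision tutorial referenced above suggests:

```julia
using ADNLPModels, NLPModels

f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
# The :generic backends make no assumption on the element type,
# so a Float32 starting point yields a Float32 model.
nlp32 = ADNLPModel(f, Float32[-1.2; 1.0], backend = :generic)
g32 = grad(nlp32, nlp32.meta.x0)
```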
nlp = ADNLPModel!(f, x0, lvar, uvar, c!, lcon, ucon, backend = :optimized)
get_adbackend(nlp)
ADModelBackend{
ReverseDiffADGradient,
ReverseDiffADHvprod,
ForwardDiffADJprod,
ReverseDiffADJtprod,
SparseADJacobian,
SparseReverseADHessian,
ForwardDiffADGHjvprod,
}
The :enzyme set uses backends based on Enzyme.jl.
nlp = ADNLPModel!(f, x0, lvar, uvar, c!, lcon, ucon, backend = :enzyme)
get_adbackend(nlp)
ADModelBackend{
EnzymeReverseADGradient,
EnzymeReverseADHvprod,
EnzymeReverseADJprod,
EnzymeReverseADJtprod,
SparseEnzymeADJacobian,
SparseEnzymeADHessian,
ForwardDiffADGHjvprod,
}
The :zygote set uses backends based on Zygote.jl.
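A sketch of selecting it, analogous to the examples above; this assumes Zygote.jl is installed and loaded alongside ADNLPModels (the Zygote-based backends are provided through an extension, so loading Zygote.jl first may be required):

```julia
using ADNLPModels, NLPModels
using Zygote  # assumed required so the Zygote-based backends are available

f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
nlp = ADNLPModel(f, [-1.2; 1.0], backend = :zygote)
get_adbackend(nlp)
```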