Reference
Contents
Index
JSOSolvers.FOMOParameterSet
JSOSolvers.FoSolver
JSOSolvers.FomoSolver
JSOSolvers.LBFGSParameterSet
JSOSolvers.LBFGSSolver
JSOSolvers.R2Solver
JSOSolvers.TRONLSParameterSet
JSOSolvers.TRONParameterSet
JSOSolvers.TRUNKLSParameterSet
JSOSolvers.TRUNKParameterSet
JSOSolvers.TronSolver
JSOSolvers.TronSolverNLS
JSOSolvers.TrunkSolver
JSOSolvers.TrunkSolverNLS
JSOSolvers.R2
JSOSolvers.TR
JSOSolvers.cauchy!
JSOSolvers.cauchy_ls!
JSOSolvers.find_beta
JSOSolvers.fo
JSOSolvers.fomo
JSOSolvers.init_alpha
JSOSolvers.lbfgs
JSOSolvers.normM!
JSOSolvers.projected_gauss_newton!
JSOSolvers.projected_line_search!
JSOSolvers.projected_line_search_ls!
JSOSolvers.projected_newton!
JSOSolvers.step_mult
JSOSolvers.tron
JSOSolvers.tron
JSOSolvers.trunk
JSOSolvers.trunk
JSOSolvers.FOMOParameterSet — Type
FOMOParameterSet{T} <: AbstractParameterSet

This structure, designed for fomo, groups the following parameters:

- η1, η2: step acceptance parameters.
- γ1, γ2: regularization update parameters.
- γ3: momentum factor βmax update parameter in case of unsuccessful iteration.
- αmax: maximum step parameter for the fomo algorithm.
- β ∈ [0,1): target decay rate for the momentum.
- θ1: momentum contribution parameter for convergence condition (1).
- θ2: momentum contribution parameter for convergence condition (2).
- M: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
- step_backend: step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

An additional constructor is

FOMOParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

- η1::T = eps(T)^(1 // 4)
- η2::T = T(95/100)
- γ1::T = T(1/2)
- γ2::T = T(2)
- γ3::T = T(1/2)
- αmax::T = 1/eps(T)
- β = T(9/10) ∈ [0,1)
- θ1 = T(1/10)
- θ2 = eps(T)^(1/3)
- M = 1
- step_backend = r2_step()
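As an illustration, these parameters are passed to fomo as keyword arguments; the following minimal sketch uses arbitrary values for β and M:

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x .^ 2), ones(3))
# nonmonotone run: objective decrease required over the last 5 successful
# iterates, with a larger momentum target than the default
stats = fomo(nlp; β = 0.8, M = 5)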
JSOSolvers.FoSolver — Type
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the corresponding step_backend keyword argument.
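For instance, the three entry points differ only in the step backend; a minimal sketch (assuming r2_step and tr_step are accessed through the JSOSolvers module):

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x .^ 2), ones(3))
stats_r2 = R2(nlp)  # fo with step_backend = r2_step()
stats_tr = TR(nlp)  # fo with step_backend = tr_step()
stats_fo = fo(nlp; step_backend = JSOSolvers.tr_step())  # same as TR(nlp)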
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- η1 = eps(T)^(1 // 4): algorithm parameter, see FOMOParameterSet.
- η2 = T(95/100): algorithm parameter, see FOMOParameterSet.
- γ1 = T(1/2): algorithm parameter, see FOMOParameterSet.
- γ2 = T(2): algorithm parameter, see FOMOParameterSet.
- αmax = 1/eps(T): algorithm parameter, see FOMOParameterSet.
- max_eval::Int = -1: maximum number of evaluations of the objective function.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- M = 1: algorithm parameter, see FOMOParameterSet.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- step_backend = r2_step(): algorithm parameter, see FOMOParameterSet.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)
# output
"Execution stats: first-order stationary"using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"JSOSolvers.FomoSolver — Type
fomo(nlp; kwargs...)A First-Order with MOmentum (FOMO) model-based method for unconstrained optimization. Supports quadratic regularization and trust region method with linear model.
Algorithm description
The step is computed along

d = - (1-βmax) .* ∇f(xk) - βmax .* mk

with mk the memory of past gradients (initialized at 0), updated at each successful iteration as

mk .= ∇f(xk) .* (1 - βmax) .+ mk .* βmax

and βmax ∈ [0,β] chosen to ensure that d is gradient-related, i.e., that the following two conditions are satisfied:

(1-βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk ≥ θ1 * ‖∇f(xk)‖²   (1)
‖∇f(xk)‖ ≥ θ2 * ‖(1-βmax) * ∇f(xk) + βmax * mk‖   (2)

In the nonmonotone case, (1) becomes

(1-βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk + (fm - fk)/μk ≥ θ1 * ‖∇f(xk)‖²,

with fm the largest objective value over the last M successful iterations, and fk = f(xk).
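The direction and momentum updates above translate directly into broadcasted vector operations. A standalone sketch with illustrative values (gk, mk, and βmax stand in for the solver's internal state):

gk = [1.0, -2.0, 0.5]                 # ∇f(xk), illustrative values
mk = zeros(3)                         # momentum memory, initialized at 0
βmax = 0.5                            # chosen in [0, β] to keep d gradient-related
d = -(1 - βmax) .* gk .- βmax .* mk   # search direction
mk .= gk .* (1 - βmax) .+ mk .* βmax  # update after a successful iteration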
Advanced usage
For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = FomoSolver(nlp)
solve!(solver, nlp; kwargs...)

No momentum: if the user does not wish to use momentum (β = 0), it is recommended to use the memory-optimized fo method.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- η1 = eps(T)^(1 // 4), η2 = T(95/100): step acceptance parameters.
- γ1 = T(1/2), γ2 = T(2): regularization update parameters.
- γ3 = T(1/2): momentum factor βmax update parameter in case of unsuccessful iteration.
- αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
- max_eval::Int = -1: maximum number of objective evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- β = T(9/10) ∈ [0,1): target decay rate for the momentum.
- θ1 = T(1/10): momentum contribution parameter for convergence condition (1).
- θ2 = eps(T)^(1/3): momentum contribution parameter for convergence condition (2).
- M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fomo(nlp)
# output
"Execution stats: first-order stationary"using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FomoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"JSOSolvers.LBFGSParameterSet — Type
LBFGSParameterSet{T} <: AbstractParameterSetThis structure designed for lbfgs regroups the following parameters:
mem: memory parameter of thelbfgsalgorithmτ₁: slope factor in the Wolfe condition when performing the line searchbk_max: maximum number of backtracks when performing the line search.
An additional constructor is
LBFGSParameterSet(nlp: kwargs...)where the kwargs are the parameters above.
Default values are:
mem::Int = 5τ₁::T = T(0.9999)bk_max:: Int = 25
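These parameters can be overridden through the lbfgs keyword arguments; a minimal sketch with arbitrary values:

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x .^ 2), ones(3))
# larger L-BFGS memory, looser Wolfe slope factor, more backtracks allowed
stats = lbfgs(nlp; mem = 10, τ₁ = 0.99, bk_max = 50)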
JSOSolvers.LBFGSSolver — Type
lbfgs(nlp; kwargs...)

An implementation of a limited memory BFGS line-search method for unconstrained minimization.

For advanced usage, first define a LBFGSSolver to preallocate the memory used in the algorithm, and then call solve!.

solver = LBFGSSolver(nlp; mem::Int = 5)
solve!(solver, nlp; kwargs...)

Arguments

nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- x::V = nlp.meta.x0: the initial guess.
- mem::Int = 5: algorithm parameter, see LBFGSParameterSet.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- τ₁::T = T(0.9999): algorithm parameter, see LBFGSParameterSet.
- bk_max::Int = 25: algorithm parameter, see LBFGSParameterSet.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- verbose_subsolver::Int = 0: if > 0, display iteration information every verbose_subsolver iteration of the subsolver.
Output
The returned value is a GenericExecutionStats, see SolverCore.jl.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
stats = lbfgs(nlp)
# output
"Execution stats: first-order stationary"using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
solver = LBFGSSolver(nlp; mem = 5);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"JSOSolvers.R2Solver — Type
`R2Solver` is deprecated, please check the documentation of `R2`.JSOSolvers.TRONLSParameterSet — Type
TRONLSParameterSet{T} <: AbstractParameterSetThis structure designed for tron regroups the following parameters:
μ₀: algorithm parameter in (0, 0.5).μ₁: algorithm parameter in (0, +∞).σ: algorithm parameter in (1, +∞).
An additional constructor is
TRONLSParameterSet(nlp: kwargs...)where the kwargs are the parameters above.
Default values are:
μ₀::T = T(1 / 100)μ₁::T = T(1)σ::T = T(10)
JSOSolvers.TRONParameterSet — Type
TRONParameterSet{T} <: AbstractParameterSet

This structure, designed for tron, groups the following parameters:

- μ₀::T: algorithm parameter in (0, 0.5).
- μ₁::T: algorithm parameter in (0, +∞).
- σ::T: algorithm parameter in (1, +∞).

An additional constructor is

TRONParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

- μ₀::T = T(1 / 100)
- μ₁::T = T(1)
- σ::T = T(10)
JSOSolvers.TRUNKLSParameterSet — Type
TRUNKLSParameterSet <: AbstractParameterSet

This structure, designed for trunk, groups the following parameters:

- bk_max: algorithm parameter.
- monotone: algorithm parameter.
- nm_itmax: algorithm parameter.

An additional constructor is

TRUNKLSParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

- bk_max::Int = 10
- monotone::Bool = true
- nm_itmax::Int = 25
JSOSolvers.TRUNKParameterSet — Type
TRUNKParameterSet <: AbstractParameterSet

This structure, designed for trunk, groups the following parameters:

- bk_max: algorithm parameter.
- monotone: algorithm parameter.
- nm_itmax: algorithm parameter.

An additional constructor is

TRUNKParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

- bk_max::Int = 10
- monotone::Bool = true
- nm_itmax::Int = 25
JSOSolvers.TronSolver — Type
tron(nlp; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained optimization:

min f(x)    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolver(nlp; kwargs...)
solve!(solver, nlp; kwargs...)

Arguments

nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- x::V = nlp.meta.x0: the initial guess.
- μ₀::T = T(1 / 100): algorithm parameter, see TRONParameterSet.
- μ₁::T = T(1): algorithm parameter, see TRONParameterSet.
- σ::T = T(10): algorithm parameter, see TRONParameterSet.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- max_cgiter::Int = 50: subproblem iteration limit.
- use_only_objgrad::Bool = false: if true, the algorithm uses only the function objgrad instead of obj and grad.
- cgtol::T = T(0.1): subproblem tolerance.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖, where Proj denotes the projection onto the bounds.
- callback: function called at each iteration, see the Callback section.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
The keyword arguments of TronSolver are passed to the TRONTrustRegion constructor.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
References
TRON is described in
Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp)

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
solver = TronSolver(nlp);
stats = solve!(solver, nlp)

JSOSolvers.TronSolverNLS — Type

tron(nls; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained nonlinear least-squares problems:

min ½‖F(x)‖²    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolverNLS(nls, subsolver::Symbol = :lsmr; kwargs...)
solve!(solver, nls; kwargs...)
Arguments
nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- x::V = nlp.meta.x0: the initial guess.
- subsolver::Symbol = :lsmr: Krylov.jl method used as subproblem solver, see JSOSolvers.tronls_allowed_subsolvers for a list.
- μ₀::T = T(1 / 100): algorithm parameter, see TRONLSParameterSet.
- μ₁::T = T(1): algorithm parameter, see TRONLSParameterSet.
- σ::T = T(10): algorithm parameter, see TRONLSParameterSet.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- max_cgiter::Int = 50: subproblem iteration limit.
- cgtol::T = T(0.1): subproblem tolerance.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖, where Proj denotes the projection onto the bounds.
- Fatol::T = √eps(T): absolute tolerance on the residual.
- Frtol::T = eps(T): relative tolerance on the residual; the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
The keyword arguments of TronSolverNLS are passed to the TRONTrustRegion constructor.
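As an example, a different Krylov subsolver can be requested through the subsolver keyword; a sketch assuming :cgls is among JSOSolvers.tronls_allowed_subsolvers:

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2; 1.0], 2, zeros(2), 0.5 * ones(2))
stats = tron(nls; subsolver = :cgls)  # assumes :cgls is allowed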
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
References
This is an adaptation for bound-constrained nonlinear least-squares problems of the TRON method described in
Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
stats = tron(nls)

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
solver = TronSolverNLS(nls)
stats = solve!(solver, nls)

JSOSolvers.TrunkSolver — Type

trunk(nlp; kwargs...)

A trust-region solver for unconstrained optimization using exact second derivatives.
For advanced usage, first define a TrunkSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = TrunkSolver(nlp, subsolver::Symbol = :cg)
solve!(solver, nlp; kwargs...)

Arguments

nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- subsolver_logger::AbstractLogger = NullLogger(): subproblem's logger.
- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- bk_max::Int = 10: algorithm parameter, see TRUNKParameterSet.
- monotone::Bool = true: algorithm parameter, see TRUNKParameterSet.
- nm_itmax::Int = 25: algorithm parameter, see TRUNKParameterSet.
- verbose::Int = 0: if > 0, display iteration information every verbose iteration.
- subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
- M: linear operator that models a Hermitian positive-definite matrix of size n; passed to the Krylov subsolvers.
Output
The returned value is a GenericExecutionStats, see SolverCore.jl.
References
This implementation follows the description given in
A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp)

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = TrunkSolver(nlp)
stats = solve!(solver, nlp)

JSOSolvers.TrunkSolverNLS — Type

trunk(nls; kwargs...)

A pure Julia implementation of a trust-region solver for nonlinear least-squares problems:

min ½‖F(x)‖²

For advanced usage, first define a TrunkSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TrunkSolverNLS(nls, subsolver::Symbol = :lsmr)
solve!(solver, nls; kwargs...)

Arguments

nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- Fatol::T = √eps(T): absolute tolerance on the residual.
- Frtol::T = eps(T): relative tolerance on the residual; the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- bk_max::Int = 10: algorithm parameter, see TRUNKLSParameterSet.
- monotone::Bool = true: algorithm parameter, see TRUNKLSParameterSet.
- nm_itmax::Int = 25: algorithm parameter, see TRUNKLSParameterSet.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
See JSOSolvers.trunkls_allowed_subsolvers for a list of available Krylov solvers.
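For instance, a different least-squares Krylov method can be requested when building the solver; a sketch assuming :lsqr is among the allowed subsolvers:

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2; 1.0], 2)
solver = TrunkSolverNLS(nls, :lsqr)  # assumes :lsqr is allowed
stats = solve!(solver, nls)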
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
References
This implementation follows the description given in
A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.
Examples
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
stats = trunk(nls)

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
solver = TrunkSolverNLS(nls)
stats = solve!(solver, nls)

JSOSolvers.R2 — Method
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the corresponding step_backend keyword argument.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- η1 = eps(T)^(1 // 4): algorithm parameter, see FOMOParameterSet.
- η2 = T(95/100): algorithm parameter, see FOMOParameterSet.
- γ1 = T(1/2): algorithm parameter, see FOMOParameterSet.
- γ2 = T(2): algorithm parameter, see FOMOParameterSet.
- αmax = 1/eps(T): algorithm parameter, see FOMOParameterSet.
- max_eval::Int = -1: maximum number of evaluations of the objective function.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- M = 1: algorithm parameter, see FOMOParameterSet.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- step_backend = r2_step(): algorithm parameter, see FOMOParameterSet.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)
# output
"Execution stats: first-order stationary"using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"JSOSolvers.TR — Method
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and trust region method with linear model.
For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)R2 and TR runs fo with the dedicated step_backend keyword argument.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- η1 = eps(T)^(1 // 4): algorithm parameter, see FOMOParameterSet.
- η2 = T(95/100): algorithm parameter, see FOMOParameterSet.
- γ1 = T(1/2): algorithm parameter, see FOMOParameterSet.
- γ2 = T(2): algorithm parameter, see FOMOParameterSet.
- αmax = 1/eps(T): algorithm parameter, see FOMOParameterSet.
- max_eval::Int = -1: maximum number of evaluations of the objective function.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- M = 1: algorithm parameter, see FOMOParameterSet.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- step_backend = r2_step(): algorithm parameter, see FOMOParameterSet.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)
# output
"Execution stats: first-order stationary"using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"JSOSolvers.cauchy! — Method
α, s = cauchy!(x, H, g, Δ, ℓ, u, s, Hs; μ₀ = 1e-2, μ₁ = 1.0, σ = 10.0)Computes a Cauchy step s = P(x - α g) - x for
min q(s) = ¹/₂sᵀHs + gᵀs s.t. ‖s‖ ≦ μ₁Δ, ℓ ≦ x + s ≦ u,with the sufficient decrease condition
q(s) ≦ μ₀sᵀg.JSOSolvers.cauchy_ls! — Method
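The projection P onto the box [ℓ, u] is a componentwise clamp; a minimal standalone sketch of the step candidate s = P(x - α g) - x, with illustrative values:

x = [0.5, 1.0]; g = [1.0, -1.0]     # current point and gradient (illustrative)
ℓ = zeros(2);   u = ones(2)         # bounds
α = 0.25                            # trial step size
s = clamp.(x .- α .* g, ℓ, u) .- x  # Cauchy step candidate, here [-0.25, 0.0]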
JSOSolvers.cauchy_ls! — Method

α, s = cauchy_ls!(x, A, Fx, g, Δ, ℓ, u, s, As; μ₀ = 1e-2, μ₁ = 1.0, σ = 10.0)

Computes a Cauchy step s = P(x - α g) - x for

min q(s) = ½‖As + Fx‖² - ½‖Fx‖²    s.t.    ‖s‖ ≦ μ₁Δ, ℓ ≦ x + s ≦ u,

with the sufficient decrease condition

q(s) ≦ μ₀gᵀs,

where g = AᵀFx.
JSOSolvers.find_beta — Method
find_beta(m, mdot∇f, norm_∇f, μk, fk, max_obj_mem, β, θ1, θ2)

Compute βmax, which saturates the contribution of the momentum term to the gradient. βmax is computed such that the two gradient-related conditions (the first one is relaxed in the nonmonotone case) are ensured:

- (1-βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀm + (max_obj_mem - fk)/μk ≥ θ1 * ‖∇f(xk)‖²
- ‖∇f(xk)‖ ≥ θ2 * ‖(1-βmax) * ∇f(xk) + βmax * m‖

with m the momentum term, mdot∇f = ∇f(xk)ᵀm, fk the model value at s = 0, and max_obj_mem the largest objective value over the last M successful iterations.
JSOSolvers.fo — Method
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the corresponding step_backend keyword argument.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- η1 = eps(T)^(1 // 4): algorithm parameter, see FOMOParameterSet.
- η2 = T(95/100): algorithm parameter, see FOMOParameterSet.
- γ1 = T(1/2): algorithm parameter, see FOMOParameterSet.
- γ2 = T(2): algorithm parameter, see FOMOParameterSet.
- αmax = 1/eps(T): algorithm parameter, see FOMOParameterSet.
- max_eval::Int = -1: maximum number of evaluations of the objective function.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- M = 1: algorithm parameter, see FOMOParameterSet.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- step_backend = r2_step(): algorithm parameter, see FOMOParameterSet.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)
# output
"Execution stats: first-order stationary"using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"JSOSolvers.fomo — Method
fomo(nlp; kwargs...)A First-Order with MOmentum (FOMO) model-based method for unconstrained optimization. Supports quadratic regularization and trust region method with linear model.
Algorithm description
The step is computed along d = - (1-βmax) .* ∇f(xk) - βmax .* mk with mk the memory of past gradients (initialized at 0), and updated at each successful iteration as mk .= ∇f(xk) .* (1 - βmax) .+ mk .* βmax and βmax ∈ [0,β] chosen as to ensure d is gradient-related, i.e., the following 2 conditions are satisfied: (1-βmax) .* ∇f(xk) + βmax .* ∇f(xk)ᵀmk ≥ θ1 * ‖∇f(xk)‖² (1) ‖∇f(xk)‖ ≥ θ2 * ‖(1-βmax) . ∇f(xk) + βmax . mk‖ (2) In the nonmonotone case, (1) rewrites (1-βmax) .* ∇f(xk) + βmax .* ∇f(xk)ᵀmk + (fm - fk)/μk ≥ θ1 * ‖∇f(xk)‖², with fm the largest objective value over the last M successful iterations, and fk = f(xk).
Advanced usage
For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = FomoSolver(nlp)
solve!(solver, nlp; kwargs...)

No momentum: if the user does not wish to use momentum (β = 0), it is recommended to use the memory-optimized fo method.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- η1 = eps(T)^(1 // 4), η2 = T(95/100): step acceptance parameters.
- γ1 = T(1/2), γ2 = T(2): regularization update parameters.
- γ3 = T(1/2): momentum factor βmax update parameter in case of unsuccessful iteration.
- αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
- max_eval::Int = -1: maximum number of objective evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- β = T(9/10) ∈ [0,1): target decay rate for the momentum.
- θ1 = T(1/10): momentum contribution parameter for convergence condition (1).
- θ2 = eps(T)^(1/3): momentum contribution parameter for convergence condition (2).
- M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fomo(nlp)
# output
"Execution stats: first-order stationary"using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FomoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"JSOSolvers.init_alpha — Method
init_alpha(norm_∇fk::T, ::r2_step)
init_alpha(norm_∇fk::T, ::tr_step)Initialize α step size parameter. Ensure first step is the same for quadratic regularization and trust region methods.
JSOSolvers.lbfgs — Method
lbfgs(nlp; kwargs...)

An implementation of a limited memory BFGS line-search method for unconstrained minimization.

For advanced usage, first define a LBFGSSolver to preallocate the memory used in the algorithm, and then call solve!.

solver = LBFGSSolver(nlp; mem::Int = 5)
solve!(solver, nlp; kwargs...)

Arguments

nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- x::V = nlp.meta.x0: the initial guess.
- mem::Int = 5: algorithm parameter, see LBFGSParameterSet.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- τ₁::T = T(0.9999): algorithm parameter, see LBFGSParameterSet.
- bk_max::Int = 25: algorithm parameter, see LBFGSParameterSet.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- verbose_subsolver::Int = 0: if > 0, display iteration information every verbose_subsolver iteration of the subsolver.
Output
The returned value is a GenericExecutionStats, see SolverCore.jl.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
stats = lbfgs(nlp)
# output
"Execution stats: first-order stationary"using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
solver = LBFGSSolver(nlp; mem = 5);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"JSOSolvers.normM! — Method
normM!(n, x, M, z)Weighted norm of x with respect to M, i.e., z = sqrt(x' * M * x). Uses z as workspace.
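A minimal sketch of the quantity being computed (not the package's in-place routine), assuming M is symmetric positive definite:

using LinearAlgebra
x = [1.0, 2.0]
M = Diagonal([2.0, 0.5])
z = sqrt(dot(x, M * x))  # ‖x‖_M = √(xᵀMx) = √(2·1 + 0.5·4) = 2.0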
JSOSolvers.projected_gauss_newton! — Method
projected_gauss_newton!(solver, x, A, Fx, Δ, gctol, s, max_cgiter, ℓ, u; max_cgiter = 50, max_time = Inf, subsolver_verbose = 0)

Compute an approximate solution d for

min q(d) = ½‖Ad + Fx‖² - ½‖Fx‖²    s.t.    ℓ ≦ x + d ≦ u, ‖d‖ ≦ Δ

starting from s. The steps are computed using the conjugate gradient method projected on the active bounds.
JSOSolvers.projected_line_search! — Method
s = projected_line_search!(x, H, g, d, ℓ, u, Hs, μ₀)

Performs a projected line search, searching for a step size t such that

½sᵀHs + sᵀg ≦ μ₀sᵀg,

where s = P(x + t * d) - x, while remaining on the same face as x + d. Backtracking is performed from t = 1.0. x is updated in place.
JSOSolvers.projected_line_search_ls! — Method
s = projected_line_search_ls!(x, A, g, d, ℓ, u, As, s; μ₀ = 1e-2)

Performs a projected line search, searching for a step size t such that

½‖As + Fx‖² ≤ ½‖Fx‖² + μ₀FxᵀAs,

where s = P(x + t * d) - x, while remaining on the same face as x + d. Backtracking is performed from t = 1.0. x is updated in place.
JSOSolvers.projected_newton! — Method
projected_newton!(solver, x, H, g, Δ, cgtol, ℓ, u, s, Hs; max_time = Inf, max_cgiter = 50, subsolver_verbose = 0)

Compute an approximate solution d for

min q(d) = ½dᵀHd + dᵀg    s.t.    ℓ ≦ x + d ≦ u, ‖d‖ ≦ Δ

starting from s. The steps are computed using the conjugate gradient method projected on the active bounds.
JSOSolvers.step_mult — Method
step_mult(α::T, norm_∇fk::T, ::r2_step)
step_mult(α::T, norm_∇fk::T, ::tr_step)

Compute the step size multiplier: α for quadratic regularization (::r2_step) and α/norm_∇fk for trust region (::tr_step).
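A worked scalar example of the two multipliers, with illustrative values:

α = 0.5; norm_∇fk = 4.0
mult_r2 = α              # quadratic regularization: multiplier is α = 0.5
mult_tr = α / norm_∇fk   # trust region: multiplier is α/‖∇fk‖ = 0.125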
JSOSolvers.tron — Method
tron(nls; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained nonlinear least-squares problems:

min ½‖F(x)‖²    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolverNLS(nls, subsolver::Symbol = :lsmr; kwargs...)
solve!(solver, nls; kwargs...)

Arguments

nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- x::V = nlp.meta.x0: the initial guess.
- subsolver::Symbol = :lsmr: Krylov.jl method used as subproblem solver, see JSOSolvers.tronls_allowed_subsolvers for a list.
- μ₀::T = T(1 / 100): algorithm parameter, see TRONLSParameterSet.
- μ₁::T = T(1): algorithm parameter, see TRONLSParameterSet.
- σ::T = T(10): algorithm parameter, see TRONLSParameterSet.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- max_cgiter::Int = 50: subproblem iteration limit.
- cgtol::T = T(0.1): subproblem tolerance.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖, where Proj denotes the projection onto the bounds.
- Fatol::T = √eps(T): absolute tolerance on the residual.
- Frtol::T = eps(T): relative tolerance on the residual; the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
The keyword arguments of TronSolverNLS are passed to the TRONTrustRegion constructor.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
References
This is an adaptation for bound-constrained nonlinear least-squares problems of the TRON method described in
Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
stats = tron(nls)

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
solver = TronSolverNLS(nls)
stats = solve!(solver, nls)

JSOSolvers.tron — Method

tron(nlp; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained optimization:

min f(x)    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = TronSolver(nlp; kwargs...)
solve!(solver, nlp; kwargs...)

Arguments

nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- x::V = nlp.meta.x0: the initial guess.
- μ₀::T = T(1 / 100): algorithm parameter, see TRONParameterSet.
- μ₁::T = T(1): algorithm parameter, see TRONParameterSet.
- σ::T = T(10): algorithm parameter, see TRONParameterSet.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- max_cgiter::Int = 50: subproblem iteration limit.
- use_only_objgrad::Bool = false: if true, the algorithm uses only the function objgrad instead of obj and grad.
- cgtol::T = T(0.1): subproblem tolerance.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖, where Proj denotes the projection onto the bounds.
- callback: function called at each iteration, see the Callback section.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
The keyword arguments of TronSolver are passed to the TRONTrustRegion constructor.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
References
TRON is described in
Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp)

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
solver = TronSolver(nlp);
stats = solve!(solver, nlp)

JSOSolvers.trunk — Method

trunk(nls; kwargs...)

A pure Julia implementation of a trust-region solver for nonlinear least-squares problems:

min ½‖F(x)‖²

For advanced usage, first define a TrunkSolverNLS to preallocate the memory used in the algorithm, and then call solve!:
solver = TrunkSolverNLS(nls, subsolver::Symbol = :lsmr)
solve!(solver, nls; kwargs...)

Arguments

nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- Fatol::T = √eps(T): absolute tolerance on the residual.
- Frtol::T = eps(T): relative tolerance on the residual; the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- bk_max::Int = 10: algorithm parameter, see TRUNKLSParameterSet.
- monotone::Bool = true: algorithm parameter, see TRUNKLSParameterSet.
- nm_itmax::Int = 25: algorithm parameter, see TRUNKLSParameterSet.
- verbose::Int = 0: if > 0, display iteration details every verbose iteration.
- subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
See JSOSolvers.trunkls_allowed_subsolvers for a list of available Krylov solvers.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
References
This implementation follows the description given in
A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.
Examples
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
stats = trunk(nls)

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
solver = TrunkSolverNLS(nls)
stats = solve!(solver, nls)

JSOSolvers.trunk — Method

trunk(nlp; kwargs...)

A trust-region solver for unconstrained optimization using exact second derivatives.
For advanced usage, first define a TrunkSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = TrunkSolver(nlp, subsolver::Symbol = :cg)
solve!(solver, nlp; kwargs...)

Arguments

nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

- subsolver_logger::AbstractLogger = NullLogger(): subproblem's logger.
- x::V = nlp.meta.x0: the initial guess.
- atol::T = √eps(T): absolute tolerance.
- rtol::T = √eps(T): relative tolerance; the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
- callback: function called at each iteration, see the Callback section.
- max_eval::Int = -1: maximum number of objective function evaluations.
- max_time::Float64 = 30.0: maximum time limit in seconds.
- max_iter::Int = typemax(Int): maximum number of iterations.
- bk_max::Int = 10: algorithm parameter, see TRUNKParameterSet.
- monotone::Bool = true: algorithm parameter, see TRUNKParameterSet.
- nm_itmax::Int = 25: algorithm parameter, see TRUNKParameterSet.
- verbose::Int = 0: if > 0, display iteration information every verbose iteration.
- subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
- M: linear operator that models a Hermitian positive-definite matrix of size n; passed to the Krylov subsolvers.
Output
The returned value is a GenericExecutionStats, see SolverCore.jl.
References
This implementation follows the description given in
A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp)

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = TrunkSolver(nlp)
stats = solve!(solver, nlp)