Reference

JSOSolvers.FOMOParameterSetType
FOMOParameterSet{T} <: AbstractParameterSet

This structure, designed for fomo, gathers the following parameters:

  • η1, η2: step acceptance parameters.
  • γ1, γ2: regularization update parameters.
  • γ3: momentum factor βmax update parameter in case of unsuccessful iteration.
  • αmax: maximum step parameter for the fomo algorithm.
  • β ∈ [0,1): target decay rate for the momentum.
  • θ1: momentum contribution parameter for convergence condition (1).
  • θ2: momentum contribution parameter for convergence condition (2).
  • M: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
  • step_backend: step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

An additional constructor is

FOMOParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

  • η1::T = eps(T)^(1 // 4)
  • η2::T = T(95/100)
  • γ1::T = T(1/2)
  • γ2::T = T(2)
  • γ3::T = T(1/2)
  • αmax::T = 1/eps(T)
  • β = T(9/10) ∈ [0,1)
  • θ1 = T(1/10)
  • θ2 = eps(T)^(1/3)
  • M = 1
  • step_backend = r2_step()
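A minimal construction sketch (the overridden values below are arbitrary, for illustration only):

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
params = JSOSolvers.FOMOParameterSet(nlp; η1 = 1e-4, M = 5) # override two defaults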
source
JSOSolvers.FoSolverType
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the dedicated step_backend keyword argument.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1 // 4): algorithm parameter, see FOMOParameterSet.
  • η2 = T(95/100): algorithm parameter, see FOMOParameterSet.
  • γ1 = T(1/2): algorithm parameter, see FOMOParameterSet.
  • γ2 = T(2): algorithm parameter, see FOMOParameterSet.
  • αmax = 1/eps(T): algorithm parameter, see FOMOParameterSet.
  • max_eval::Int = -1: maximum number of evaluations of the objective function.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • M = 1 : algorithm parameter, see FOMOParameterSet.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): algorithm parameter, see FOMOParameterSet.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
source
JSOSolvers.FomoSolverType
fomo(nlp; kwargs...)

A First-Order with MOmentum (FOMO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.

Algorithm description

The step is computed along

d = -(1 - βmax) .* ∇f(xk) .- βmax .* mk,

with mk the memory of past gradients (initialized at 0), updated at each successful iteration as

mk .= ∇f(xk) .* (1 - βmax) .+ mk .* βmax,

and βmax ∈ [0, β] chosen so as to ensure that d is gradient-related, i.e., that the following two conditions are satisfied:

(1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk ≥ θ1 * ‖∇f(xk)‖²   (1)
‖∇f(xk)‖ ≥ θ2 * ‖(1 - βmax) .* ∇f(xk) .+ βmax .* mk‖          (2)

In the nonmonotone case, (1) becomes

(1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk + (fm - fk)/μk ≥ θ1 * ‖∇f(xk)‖²,

with fm the largest objective value over the last M successful iterations and fk = f(xk).
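To make the update concrete, here is a small sketch of the direction computation and momentum update, with hypothetical values for ∇f(xk), mk, and βmax:

∇fk = [1.0, -1.0]      # hypothetical gradient
mk = [0.2, -0.4]       # hypothetical momentum memory
βmax = 0.5
d = -(1 - βmax) .* ∇fk .- βmax .* mk   # search direction
mk .= ∇fk .* (1 - βmax) .+ mk .* βmax  # momentum update on a successful iteration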

Advanced usage

For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FomoSolver(nlp)
solve!(solver, nlp; kwargs...)

No momentum: if the user does not wish to use momentum (β = 0), it is recommended to use the memory-optimized fo method.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • η1 = eps(T)^(1 // 4), η2 = T(95/100): step acceptance parameters.
  • γ1 = T(1/2), γ2 = T(2): regularization update parameters.
  • γ3 = T(1/2): momentum factor βmax update parameter in case of unsuccessful iteration.
  • αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
  • max_eval::Int = -1: maximum number of objective evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • β = T(9/10) ∈ [0,1): target decay rate for the momentum.
  • θ1 = T(1/10): momentum contribution parameter for convergence condition (1).
  • θ2 = eps(T)^(1/3): momentum contribution parameter for convergence condition (2).
  • M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fomo(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FomoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
source
JSOSolvers.LBFGSParameterSetType
LBFGSParameterSet{T} <: AbstractParameterSet

This structure, designed for lbfgs, gathers the following parameters:

  • mem: memory parameter of the lbfgs algorithm.
  • τ₁: slope factor in the Wolfe condition when performing the line search.
  • bk_max: maximum number of backtracks when performing the line search.

An additional constructor is

LBFGSParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

  • mem::Int = 5
  • τ₁::T = T(0.9999)
  • bk_max::Int = 25
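A minimal construction sketch (the overridden values below are arbitrary, for illustration only):

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
params = JSOSolvers.LBFGSParameterSet(nlp; mem = 10, bk_max = 50) # larger memory, looser backtracking budget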
source
JSOSolvers.LBFGSSolverType
lbfgs(nlp; kwargs...)

An implementation of a limited memory BFGS line-search method for unconstrained minimization.

For advanced usage, first define a LBFGSSolver to preallocate the memory used in the algorithm, and then call solve!.

solver = LBFGSSolver(nlp; mem::Int = 5)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • mem::Int = 5: algorithm parameter, see LBFGSParameterSet.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • τ₁::T = T(0.9999): algorithm parameter, see LBFGSParameterSet.
  • bk_max::Int = 25: algorithm parameter, see LBFGSParameterSet.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • verbose_subsolver::Int = 0: if > 0, display iteration information every verbose_subsolver iteration of the subsolver.

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
stats = lbfgs(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
solver = LBFGSSolver(nlp; mem = 5);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
source
JSOSolvers.TRONLSParameterSetType
TRONLSParameterSet{T} <: AbstractParameterSet

This structure, designed for the least-squares version of tron, gathers the following parameters:

  • μ₀: sufficient decrease parameter in (0, 0.5).
  • μ₁: Cauchy step norm bound factor in (0, +∞) (the step satisfies ‖s‖ ≤ μ₁Δ).
  • σ: step size scaling factor in (1, +∞).

An additional constructor is

TRONLSParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

  • μ₀::T = T(1 / 100)
  • μ₁::T = T(1)
  • σ::T = T(10)
source
JSOSolvers.TRONParameterSetType
TRONParameterSet{T} <: AbstractParameterSet

This structure, designed for tron, gathers the following parameters:

  • μ₀::T: sufficient decrease parameter in (0, 0.5).
  • μ₁::T: Cauchy step norm bound factor in (0, +∞) (the step satisfies ‖s‖ ≤ μ₁Δ).
  • σ::T: step size scaling factor in (1, +∞).

An additional constructor is

TRONParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

  • μ₀::T = T(1 / 100)
  • μ₁::T = T(1)
  • σ::T = T(10)
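A minimal construction sketch (the overridden value below is arbitrary, for illustration only):

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3))
params = JSOSolvers.TRONParameterSet(nlp; μ₀ = 1e-3) # tighter sufficient decrease requirement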
source
JSOSolvers.TRUNKLSParameterSetType
TRUNKLSParameterSet <: AbstractParameterSet

This structure, designed for the least-squares version of trunk, gathers the following parameters:

  • bk_max: maximum number of backtracks in the linesearch.
  • monotone: whether to enforce monotone decrease of the objective.
  • nm_itmax: iteration limit of the nonmonotone strategy.

An additional constructor is

TRUNKLSParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

  • bk_max::Int = 10
  • monotone::Bool = true
  • nm_itmax::Int = 25
source
JSOSolvers.TRUNKParameterSetType
TRUNKParameterSet <: AbstractParameterSet

This structure, designed for trunk, gathers the following parameters:

  • bk_max: maximum number of backtracks in the linesearch.
  • monotone: whether to enforce monotone decrease of the objective.
  • nm_itmax: iteration limit of the nonmonotone strategy.

An additional constructor is

TRUNKParameterSet(nlp; kwargs...)

where the kwargs are the parameters above.

Default values are:

  • bk_max::Int = 10
  • monotone::Bool = true
  • nm_itmax::Int = 25
source
JSOSolvers.TronSolverType
tron(nlp; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained optimization:

    min f(x)    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolver(nlp; kwargs...)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • μ₀::T = T(1 / 100): algorithm parameter, see TRONParameterSet.
  • μ₁::T = T(1): algorithm parameter, see TRONParameterSet.
  • σ::T = T(10): algorithm parameter, see TRONParameterSet.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • max_cgiter::Int = 50: subproblem's iteration limit.
  • use_only_objgrad::Bool = false: If true, the algorithm uses only the function objgrad instead of obj and grad.
  • cgtol::T = T(0.1): subproblem tolerance.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
  • callback: function called at each iteration, see Callback section.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

The keyword arguments of TronSolver are passed to the TRONTrustRegion constructor.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

References

TRON is described in

Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
solver = TronSolver(nlp);
stats = solve!(solver, nlp)
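Subproblem effort can be tuned through the documented cgtol and max_cgiter keywords; a minimal sketch with arbitrary values:

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp, cgtol = 1e-2, max_cgiter = 100) # tighter subproblem solves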
source
JSOSolvers.TronSolverNLSType
tron(nls; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained nonlinear least-squares problems:

min ½‖F(x)‖²    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolverNLS(nls, subsolver::Symbol = :lsmr; kwargs...)
solve!(solver, nls; kwargs...)

Arguments

  • nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nls.meta.x0: the initial guess.
  • subsolver::Symbol = :lsmr: Krylov.jl method used as subproblem solver, see JSOSolvers.tronls_allowed_subsolvers for a list.
  • μ₀::T = T(1 / 100): algorithm parameter, see TRONLSParameterSet.
  • μ₁::T = T(1): algorithm parameter, see TRONLSParameterSet.
  • σ::T = T(10): algorithm parameter, see TRONLSParameterSet.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • max_cgiter::Int = 50: subproblem iteration limit.
  • cgtol::T = T(0.1): subproblem tolerance.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
  • Fatol::T = √eps(T): absolute tolerance on the residual.
  • Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

The keyword arguments of TronSolverNLS are passed to the TRONTrustRegion constructor.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

References

This is an adaptation for bound-constrained nonlinear least-squares problems of the TRON method described in

Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
stats = tron(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
solver = TronSolverNLS(nls)
stats = solve!(solver, nls)
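Residual-based stopping can be tuned through the documented Fatol and Frtol keywords; a minimal sketch with arbitrary tolerances:

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2; 1.0], 2, zeros(2), 0.5 * ones(2))
stats = tron(nls, Fatol = 1e-8, Frtol = 1e-8) # stop once the residual is small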
source
JSOSolvers.TrunkSolverType
trunk(nlp; kwargs...)

A trust-region solver for unconstrained optimization using exact second derivatives.

For advanced usage, first define a TrunkSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = TrunkSolver(nlp, subsolver::Symbol = :cg)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • subsolver_logger::AbstractLogger = NullLogger(): subproblem's logger.
  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • bk_max::Int = 10: algorithm parameter, see TRUNKParameterSet.
  • monotone::Bool = true: algorithm parameter, see TRUNKParameterSet.
  • nm_itmax::Int = 25: algorithm parameter, see TRUNKParameterSet.
  • verbose::Int = 0: if > 0, display iteration information every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
  • M: linear operator that models a Hermitian positive-definite matrix of size n; passed to Krylov subsolvers.

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

References

This implementation follows the description given in

A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = TrunkSolver(nlp)
stats = solve!(solver, nlp)
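The nonmonotone strategy is enabled through the documented monotone and nm_itmax keywords; a minimal sketch with arbitrary values:

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp, monotone = false, nm_itmax = 10) # allow nonmonotone steps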
source
JSOSolvers.TrunkSolverNLSType
trunk(nls; kwargs...)

A pure Julia implementation of a trust-region solver for nonlinear least-squares problems:

min ½‖F(x)‖²

For advanced usage, first define a TrunkSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TrunkSolverNLS(nls, subsolver::Symbol = :lsmr)
solve!(solver, nls; kwargs...)

Arguments

  • nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nls.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • Fatol::T = √eps(T): absolute tolerance on the residual.
  • Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • bk_max::Int = 10: algorithm parameter, see TRUNKLSParameterSet.
  • monotone::Bool = true: algorithm parameter, see TRUNKLSParameterSet.
  • nm_itmax::Int = 25: algorithm parameter, see TRUNKLSParameterSet.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

See JSOSolvers.trunkls_allowed_subsolvers for a list of available Krylov solvers.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

References

This implementation follows the description given in

A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.

Examples

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
stats = trunk(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
solver = TrunkSolverNLS(nls)
stats = solve!(solver, nls)
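Residual-based stopping can be tuned through the documented Fatol and Frtol keywords; a minimal sketch with an arbitrary tolerance:

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2; 1.0], 2)
stats = trunk(nls, Fatol = 1e-10) # declare success once the residual norm is tiny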
source
JSOSolvers.R2Method
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the dedicated step_backend keyword argument.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1 // 4): algorithm parameter, see FOMOParameterSet.
  • η2 = T(95/100): algorithm parameter, see FOMOParameterSet.
  • γ1 = T(1/2): algorithm parameter, see FOMOParameterSet.
  • γ2 = T(2): algorithm parameter, see FOMOParameterSet.
  • αmax = 1/eps(T): algorithm parameter, see FOMOParameterSet.
  • max_eval::Int = -1: maximum number of evaluations of the objective function.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • M = 1 : algorithm parameter, see FOMOParameterSet.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): algorithm parameter, see FOMOParameterSet.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
source
JSOSolvers.TRMethod
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the dedicated step_backend keyword argument.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1 // 4): algorithm parameter, see FOMOParameterSet.
  • η2 = T(95/100): algorithm parameter, see FOMOParameterSet.
  • γ1 = T(1/2): algorithm parameter, see FOMOParameterSet.
  • γ2 = T(2): algorithm parameter, see FOMOParameterSet.
  • αmax = 1/eps(T): algorithm parameter, see FOMOParameterSet.
  • max_eval::Int = -1: maximum number of evaluations of the objective function.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • M = 1 : algorithm parameter, see FOMOParameterSet.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): algorithm parameter, see FOMOParameterSet.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
source
JSOSolvers.cauchy!Method
α, s = cauchy!(x, H, g, Δ, ℓ, u, s, Hs; μ₀ = 1e-2, μ₁ = 1.0, σ = 10.0)

Computes a Cauchy step s = P(x - α g) - x for

min  q(s) = ½sᵀHs + gᵀs     s.t.    ‖s‖ ≦ μ₁Δ,  ℓ ≦ x + s ≦ u,

with the sufficient decrease condition

q(s) ≦ μ₀sᵀg.
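The sufficient decrease condition can be checked directly on the quadratic model; an illustrative sketch with a hypothetical H and g (not a call to cauchy! itself):

using LinearAlgebra
H = [2.0 0.0; 0.0 4.0]; g = [1.0, 1.0] # hypothetical model data
q(s) = dot(s, H, s) / 2 + dot(g, s)    # quadratic model
s = -0.1 .* g                          # a small step along -g
μ₀ = 1e-2
q(s) ≤ μ₀ * dot(s, g)                  # sufficient decrease holds: true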
source
JSOSolvers.cauchy_ls!Method
α, s = cauchy_ls!(x, A, Fx, g, Δ, ℓ, u, s, As; μ₀ = 1e-2, μ₁ = 1.0, σ = 10.0)

Computes a Cauchy step s = P(x - α g) - x for

min  q(s) = ½‖As + Fx‖² - ½‖Fx‖²     s.t.    ‖s‖ ≦ μ₁Δ,  ℓ ≦ x + s ≦ u,

with the sufficient decrease condition

q(s) ≦ μ₀gᵀs,

where g = AᵀFx.

source
JSOSolvers.find_betaMethod
find_beta(m, mdot∇f, norm_∇f, μk, fk, max_obj_mem, β, θ1, θ2)

Compute βmax which saturates the contribution of the momentum term to the gradient. βmax is computed such that the two gradient-related conditions (first one is relaxed in the nonmonotone case) are ensured:

  1. (1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀm + (max_obj_mem - fk)/μk ≥ θ1 * ‖∇f(xk)‖²
  2. ‖∇f(xk)‖ ≥ θ2 * ‖(1 - βmax) .* ∇f(xk) .+ βmax .* m‖

with m the momentum term, mdot∇f = ∇f(xk)ᵀm, fk the objective value at xk (the model value at s = 0), and max_obj_mem the largest objective value over the last M successful iterations.
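The two conditions can be verified numerically; the following illustrative scan (not a call to find_beta) recovers the largest admissible β on a grid, using hypothetical ∇f and m:

using LinearAlgebra
∇f = [1.0, -2.0]; m = [0.5, -1.0] # hypothetical gradient and momentum
θ1, θ2 = 0.1, 1e-5
cond1(β) = (1 - β) * norm(∇f)^2 + β * dot(∇f, m) ≥ θ1 * norm(∇f)^2 # monotone case
cond2(β) = norm(∇f) ≥ θ2 * norm((1 - β) .* ∇f .+ β .* m)
βmax = maximum(β for β in 0:0.01:0.9 if cond1(β) && cond2(β))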

source
JSOSolvers.foMethod
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the dedicated step_backend keyword argument.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1 // 4): algorithm parameter, see FOMOParameterSet.
  • η2 = T(95/100): algorithm parameter, see FOMOParameterSet.
  • γ1 = T(1/2): algorithm parameter, see FOMOParameterSet.
  • γ2 = T(2): algorithm parameter, see FOMOParameterSet.
  • αmax = 1/eps(T): algorithm parameter, see FOMOParameterSet.
  • max_eval::Int = -1: maximum number of evaluations of the objective function.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • M = 1 : algorithm parameter, see FOMOParameterSet.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): algorithm parameter, see FOMOParameterSet.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
source
JSOSolvers.fomoMethod
fomo(nlp; kwargs...)

A First-Order with MOmentum (FOMO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.

Algorithm description

The step is computed along

d = -(1 - βmax) .* ∇f(xk) .- βmax .* mk,

with mk the memory of past gradients (initialized at 0), updated at each successful iteration as

mk .= ∇f(xk) .* (1 - βmax) .+ mk .* βmax,

and βmax ∈ [0, β] chosen so as to ensure that d is gradient-related, i.e., that the following two conditions are satisfied:

(1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk ≥ θ1 * ‖∇f(xk)‖²   (1)
‖∇f(xk)‖ ≥ θ2 * ‖(1 - βmax) .* ∇f(xk) .+ βmax .* mk‖          (2)

In the nonmonotone case, (1) becomes

(1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk + (fm - fk)/μk ≥ θ1 * ‖∇f(xk)‖²,

with fm the largest objective value over the last M successful iterations and fk = f(xk).

Advanced usage

For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FomoSolver(nlp)
solve!(solver, nlp; kwargs...)

No momentum: if the user does not wish to use momentum (β = 0), it is recommended to use the memory-optimized fo method.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • η1 = eps(T)^(1 // 4), η2 = T(95/100): step acceptance parameters.
  • γ1 = T(1/2), γ2 = T(2): regularization update parameters.
  • γ3 = T(1/2): momentum factor βmax update parameter in case of unsuccessful iteration.
  • αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
  • max_eval::Int = -1: maximum number of objective evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • β = T(9/10) ∈ [0,1): target decay rate for the momentum.
  • θ1 = T(1/10): momentum contribution parameter for convergence condition (1).
  • θ2 = eps(T)^(1/3): momentum contribution parameter for convergence condition (2).
  • M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fomo(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FomoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
source
JSOSolvers.init_alphaMethod
init_alpha(norm_∇fk::T, ::r2_step)
init_alpha(norm_∇fk::T, ::tr_step)

Initialize the step size parameter α. Ensures that the first step is the same for the quadratic regularization and trust-region backends.
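A hedged usage sketch, assuming the r2_step and tr_step markers are used as elsewhere on this page:

using JSOSolvers
norm_∇f0 = 2.0 # hypothetical initial gradient norm
α_r2 = JSOSolvers.init_alpha(norm_∇f0, JSOSolvers.r2_step())
α_tr = JSOSolvers.init_alpha(norm_∇f0, JSOSolvers.tr_step())
# by construction, the resulting first steps coincide across backends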

source
JSOSolvers.lbfgsMethod
lbfgs(nlp; kwargs...)

An implementation of a limited memory BFGS line-search method for unconstrained minimization.

For advanced usage, first define a LBFGSSolver to preallocate the memory used in the algorithm, and then call solve!.

solver = LBFGSSolver(nlp; mem::Int = 5)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • mem::Int = 5: algorithm parameter, see LBFGSParameterSet.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • τ₁::T = T(0.9999): algorithm parameter, see LBFGSParameterSet.
  • bk_max:: Int = 25: algorithm parameter, see LBFGSParameterSet.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • verbose_subsolver::Int = 0: if > 0, display iteration information every verbose_subsolver iteration of the subsolver.

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
stats = lbfgs(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
solver = LBFGSSolver(nlp; mem = 5);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
source
JSOSolvers.normM!Method
normM!(n, x, M, z)

Weighted norm of x with respect to M, i.e., sqrt(x' * M * x), using z as a workspace vector.
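An illustrative sketch, assuming M supports multiplication into the workspace z:

using JSOSolvers, LinearAlgebra
x = ones(3); z = similar(x)
M = Matrix(2.0I, 3, 3)        # hypothetical weight matrix
JSOSolvers.normM!(3, x, M, z) # ≈ sqrt(6), i.e., sqrt(x' * M * x)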

source
JSOSolvers.projected_gauss_newton!Method
projected_gauss_newton!(solver, x, A, Fx, Δ, cgtol, s, ℓ, u; max_cgiter = 50, max_time = Inf, subsolver_verbose = 0)

Compute an approximate solution d for

min q(d) = ½‖Ad + Fx‖² - ½‖Fx‖²     s.t.    ℓ ≦ x + d ≦ u,  ‖d‖ ≦ Δ

starting from s. The steps are computed using the conjugate gradient method projected on the active bounds.

source
JSOSolvers.projected_line_search!Method
s = projected_line_search!(x, H, g, d, ℓ, u, Hs, μ₀)

Performs a projected line search, searching for a step size t such that

½sᵀHs + sᵀg ≦ μ₀sᵀg,

where s = P(x + t * d) - x, while remaining on the same face as x + d. Backtracking is performed from t = 1.0. x is updated in place.

source
JSOSolvers.projected_line_search_ls!Method
s = projected_line_search_ls!(x, A, g, d, ℓ, u, As, s; μ₀ = 1e-2)

Performs a projected line search, searching for a step size t such that

½‖As + Fx‖² ≤ ½‖Fx‖² + μ₀FxᵀAs

where s = P(x + t * d) - x, while remaining on the same face as x + d. Backtracking is performed from t = 1.0. x is updated in place.

source
JSOSolvers.projected_newton!Method
projected_newton!(solver, x, H, g, Δ, cgtol, ℓ, u, s, Hs; max_time = Inf, max_cgiter = 50, subsolver_verbose = 0)

Compute an approximate solution d for

min q(d) = ½dᵀHd + dᵀg    s.t.    ℓ ≦ x + d ≦ u,  ‖d‖ ≦ Δ

starting from s. The steps are computed using the conjugate gradient method projected on the active bounds.

source
JSOSolvers.step_multMethod
step_mult(α::T, norm_∇fk::T, ::r2_step)
step_mult(α::T, norm_∇fk::T, ::tr_step)

Compute the step size multiplier: α for the quadratic regularization backend (::r2_step) and α/norm_∇fk for the trust-region backend (::tr_step).
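An illustrative sketch of the two multipliers:

using JSOSolvers
α, g = 0.5, 2.0 # hypothetical α and ‖∇f(xk)‖
JSOSolvers.step_mult(α, g, JSOSolvers.r2_step()) # returns α = 0.5
JSOSolvers.step_mult(α, g, JSOSolvers.tr_step()) # returns α / ‖∇f(xk)‖ = 0.25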

source
JSOSolvers.tronMethod
tron(nls; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained nonlinear least-squares problems:

min ½‖F(x)‖²    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolverNLS(nls, subsolver::Symbol = :lsmr; kwargs...)
solve!(solver, nls; kwargs...)

Arguments

  • nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nls.meta.x0: the initial guess.
  • subsolver::Symbol = :lsmr: Krylov.jl method used as subproblem solver, see JSOSolvers.tronls_allowed_subsolvers for a list.
  • μ₀::T = T(1 / 100): algorithm parameter, see TRONLSParameterSet.
  • μ₁::T = T(1): algorithm parameter, see TRONLSParameterSet.
  • σ::T = T(10): algorithm parameter, see TRONLSParameterSet.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • max_cgiter::Int = 50: subproblem iteration limit.
  • cgtol::T = T(0.1): subproblem tolerance.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
  • Fatol::T = √eps(T): absolute tolerance on the residual.
  • Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

The keyword arguments of TronSolverNLS are passed to the TRONTrustRegion constructor.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

References

This is an adaptation for bound-constrained nonlinear least-squares problems of the TRON method described in

Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
stats = tron(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
solver = TronSolverNLS(nls)
stats = solve!(solver, nls)
source
JSOSolvers.tronMethod
tron(nlp; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained optimization:

    min f(x)    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolver(nlp; kwargs...)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • μ₀::T = T(1 / 100): algorithm parameter, see TRONParameterSet.
  • μ₁::T = T(1): algorithm parameter, see TRONParameterSet.
  • σ::T = T(10): algorithm parameter, see TRONParameterSet.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • max_cgiter::Int = 50: subproblem's iteration limit.
  • use_only_objgrad::Bool = false: If true, the algorithm uses only the function objgrad instead of obj and grad.
  • cgtol::T = T(0.1): subproblem tolerance.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
  • callback: function called at each iteration, see Callback section.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

The keyword arguments of TronSolver are passed to the TRONTrustRegion constructor.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

References

TRON is described in

Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
solver = TronSolver(nlp);
stats = solve!(solver, nlp)
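The returned statistics expose the standard SolverCore fields; a minimal inspection sketch (solution, objective, and status are GenericExecutionStats fields):

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp)
stats.status, stats.objective, stats.solution # termination status, final objective, final iterate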
source
JSOSolvers.trunkMethod
trunk(nls; kwargs...)

A pure Julia implementation of a trust-region solver for nonlinear least-squares problems:

min ½‖F(x)‖²

For advanced usage, first define a TrunkSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TrunkSolverNLS(nls, subsolver::Symbol = :lsmr)
solve!(solver, nls; kwargs...)

Arguments

  • nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nls.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • Fatol::T = √eps(T): absolute tolerance on the residual.
  • Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • bk_max::Int = 10: algorithm parameter, see TRUNKLSParameterSet.
  • monotone::Bool = true: algorithm parameter, see TRUNKLSParameterSet.
  • nm_itmax::Int = 25: algorithm parameter, see TRUNKLSParameterSet.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

See JSOSolvers.trunkls_allowed_subsolvers for a list of available Krylov solvers.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

References

This implementation follows the description given in

A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.

Examples

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
stats = trunk(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
solver = TrunkSolverNLS(nls)
stats = solve!(solver, nls)
source
JSOSolvers.trunkMethod
trunk(nlp; kwargs...)

A trust-region solver for unconstrained optimization using exact second derivatives.

For advanced usage, first define a TrunkSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = TrunkSolver(nlp, subsolver::Symbol = :cg)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • subsolver_logger::AbstractLogger = NullLogger(): subproblem's logger.
  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • callback: function called at each iteration, see Callback section.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • bk_max::Int = 10: algorithm parameter, see TRUNKParameterSet.
  • monotone::Bool = true: algorithm parameter, see TRUNKParameterSet.
  • nm_itmax::Int = 25: algorithm parameter, see TRUNKParameterSet.
  • verbose::Int = 0: if > 0, display iteration information every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
  • M: linear operator that models a Hermitian positive-definite matrix of size n; passed to Krylov subsolvers.

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

References

This implementation follows the description given in

A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = TrunkSolver(nlp)
stats = solve!(solver, nlp)
source