Reference

JSOSolvers.FoSolverType
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. It supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the dedicated step_backend keyword argument, as illustrated below.
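
For instance, the following two calls are equivalent (a small sketch reusing the problem from the Examples section, assuming the exported tr_step backend):

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats_tr = TR(nlp)                           # first-order trust region
stats_fo = fo(nlp, step_backend = tr_step()) # same method through fo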

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
  • γ1 = T(1/2), γ2 = T(2): regularization update parameters.
  • αmax = 1/eps(T): maximum step parameter for the algorithm.
  • max_eval::Int = -1: maximum number of evaluations of the objective function.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.
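
For instance, a minimal callback that stops fo after five iterations could look like this (a sketch assuming the standard JSO callback keyword argument):

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
# illustrative callback: request a stop once 5 iterations have been performed
cb = (nlp, solver, stats) -> stats.iter ≥ 5 && (stats.status = :user)
stats = fo(nlp, callback = cb)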

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
JSOSolvers.FomoSolverType
fomo(nlp; kwargs...)

A First-Order with MOmentum (FOMO) model-based method for unconstrained optimization. It supports quadratic regularization and a trust-region method with a linear model.

Algorithm description

The step is computed along

d = -(1 - βmax) .* ∇f(xk) .- βmax .* mk,

with mk the memory of past gradients (initialized at 0), updated at each successful iteration as

mk .= ∇f(xk) .* (1 - βmax) .+ mk .* βmax,

and βmax ∈ [0, β] chosen to ensure that d is gradient-related, i.e., that the following two conditions are satisfied:

(1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk ≥ θ1 * ‖∇f(xk)‖²   (1)
‖∇f(xk)‖ ≥ θ2 * ‖(1 - βmax) .* ∇f(xk) .+ βmax .* mk‖          (2)

In the nonmonotone case, (1) becomes

(1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk + (fm - fk)/μk ≥ θ1 * ‖∇f(xk)‖²,

with fm the largest objective value over the last M successful iterations and fk = f(xk).
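
In code, the direction and the momentum update read as follows (a minimal sketch with illustrative values; g stands for ∇f(xk), mk for the momentum buffer, and βmax is assumed already computed, see find_beta):

g = ones(3); mk = zeros(3); βmax = 0.5    # illustrative values
d = -(1 - βmax) .* g .- βmax .* mk        # gradient-related search direction
# on a successful iteration, blend the new gradient into the memory:
mk .= (1 - βmax) .* g .+ βmax .* mk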

Advanced usage

For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FomoSolver(nlp)
solve!(solver, nlp; kwargs...)

No momentum: if the user does not wish to use momentum (β = 0), it is recommended to use the memory-optimized fo method.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
  • γ1 = T(1/2), γ2 = T(2): regularization update parameters.
  • γ3 = T(1/2): momentum factor βmax update parameter in case of an unsuccessful iteration.
  • αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
  • max_eval::Int = -1: maximum number of objective evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • β = T(0.9) ∈ [0,1): target decay rate for the momentum.
  • θ1 = T(0.1): momentum contribution parameter for convergence condition (1).
  • θ2 = T(eps(T)^(1/3)): momentum contribution parameter for convergence condition (2).
  • M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fomo(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FomoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
JSOSolvers.LBFGSSolverType
lbfgs(nlp; kwargs...)

An implementation of a limited memory BFGS line-search method for unconstrained minimization.

For advanced usage, first define a LBFGSSolver to preallocate the memory used in the algorithm, and then call solve!.

solver = LBFGSSolver(nlp; mem::Int = 5)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • mem::Int = 5: memory parameter of the lbfgs algorithm.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • τ₁::T = T(0.9999): slope factor in the Wolfe condition when performing the line search.
  • bk_max::Int = 25: maximum number of backtracks when performing the line search.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • verbose_subsolver::Int = 0: if > 0, display iteration information every verbose_subsolver iteration of the subsolver.

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
stats = lbfgs(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
solver = LBFGSSolver(nlp; mem = 5);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
JSOSolvers.TronSolverType
tron(nlp; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained optimization:

    min f(x)    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolver(nlp; kwargs...)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • μ₀::T = T(1e-2): algorithm parameter in (0, 0.5).
  • μ₁::T = one(T): algorithm parameter in (0, +∞).
  • σ::T = T(10): algorithm parameter in (1, +∞).
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • max_cgiter::Int = 50: subproblem's iteration limit.
  • use_only_objgrad::Bool = false: If true, the algorithm uses only the function objgrad instead of obj and grad.
  • cgtol::T = T(0.1): subproblem tolerance.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

The keyword arguments of TronSolver are passed to the TRONTrustRegion constructor.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

References

TRON is described in

Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
solver = TronSolver(nlp);
stats = solve!(solver, nlp)
JSOSolvers.TronSolverNLSType
tron(nls; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained nonlinear least-squares problems:

min ½‖F(x)‖²    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolverNLS(nls, subsolver_type::Type{<:KrylovSolver} = LsmrSolver; kwargs...)
solve!(solver, nls; kwargs...)

Arguments

  • nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • subsolver_type::Type{<:KrylovSolver} = LsmrSolver: Krylov.jl solver type used as the subproblem solver, see JSOSolvers.tronls_allowed_subsolvers for a list.
  • μ₀::T = T(1e-2): algorithm parameter in (0, 0.5).
  • μ₁::T = one(T): algorithm parameter in (0, +∞).
  • σ::T = T(10): algorithm parameter in (1, +∞).
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • max_cgiter::Int = 50: subproblem iteration limit.
  • cgtol::T = T(0.1): subproblem tolerance.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
  • Fatol::T = √eps(T): absolute tolerance on the residual.
  • Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

The keyword arguments of TronSolverNLS are passed to the TRONTrustRegion constructor.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

References

This is an adaptation for bound-constrained nonlinear least-squares problems of the TRON method described in

Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
stats = tron(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
solver = TronSolverNLS(nls)
stats = solve!(solver, nls)
JSOSolvers.TrunkSolverType
trunk(nlp; kwargs...)

A trust-region solver for unconstrained optimization using exact second derivatives.

For advanced usage, first define a TrunkSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = TrunkSolver(nlp, subsolver_type::Type{<:KrylovSolver} = CgSolver)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • subsolver_logger::AbstractLogger = NullLogger(): subproblem's logger.
  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • bk_max::Int = 10: algorithm parameter.
  • monotone::Bool = true: algorithm parameter.
  • nm_itmax::Int = 25: algorithm parameter.
  • verbose::Int = 0: if > 0, display iteration information every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
  • M: linear operator that models a Hermitian positive-definite matrix of size n; passed to the Krylov subsolvers (see the sketch after this list).
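
For instance, a diagonal preconditioner could be supplied as follows (a sketch assuming LinearOperators.jl's opDiagonal; M is simply forwarded to the Krylov subsolver):

using JSOSolvers, ADNLPModels, LinearOperators
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp, M = opDiagonal([1.0, 2.0, 4.0]))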

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

References

This implementation follows the description given in

A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = TrunkSolver(nlp)
stats = solve!(solver, nlp)
JSOSolvers.TrunkSolverNLSType
trunk(nls; kwargs...)

A pure Julia implementation of a trust-region solver for nonlinear least-squares problems:

min ½‖F(x)‖²

For advanced usage, first define a TrunkSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TrunkSolverNLS(nls, subsolver_type::Type{<:KrylovSolver} = LsmrSolver)
solve!(solver, nls; kwargs...)

Arguments

  • nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • Fatol::T = √eps(T): absolute tolerance on the residual.
  • Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • bk_max::Int = 10: algorithm parameter.
  • monotone::Bool = true: algorithm parameter.
  • nm_itmax::Int = 25: algorithm parameter.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

See JSOSolvers.trunkls_allowed_subsolvers for a list of available KrylovSolver types.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

References

This implementation follows the description given in

A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.

Examples

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
stats = trunk(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
solver = TrunkSolverNLS(nls)
stats = solve!(solver, nls)
JSOSolvers.R2Method
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. It supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the dedicated step_backend keyword argument.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
  • γ1 = T(1/2), γ2 = T(2): regularization update parameters.
  • αmax = 1/eps(T): maximum step parameter for the algorithm.
  • max_eval::Int = -1: maximum number of evaluations of the objective function.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
JSOSolvers.TRMethod
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. It supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the dedicated step_backend keyword argument.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
  • γ1 = T(1/2), γ2 = T(2): regularization update parameters.
  • αmax = 1/eps(T): maximum step parameter for the algorithm.
  • max_eval::Int = -1: maximum number of evaluations of the objective function.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
JSOSolvers.cauchy!Method

α, s = cauchy!(x, H, g, Δ, ℓ, u, s, Hs; μ₀ = 1e-2, μ₁ = 1.0, σ=10.0)

Computes a Cauchy step s = P(x - α g) - x for

min  q(s) = ½sᵀHs + gᵀs     s.t.    ‖s‖ ≦ μ₁Δ,  ℓ ≦ x + s ≦ u,

with the sufficient decrease condition

q(s) ≦ μ₀sᵀg.
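
A minimal sketch of the projected step being searched over, with proj a hypothetical helper for the projection onto [ℓ, u]:

proj(y, ℓ, u) = clamp.(y, ℓ, u)                                  # componentwise projection
cauchy_candidate(x, g, α, ℓ, u) = proj(x .- α .* g, ℓ, u) .- x   # s = P(x - αg) - x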
JSOSolvers.cauchy_ls!Method

α, s = cauchy_ls!(x, A, Fx, g, Δ, ℓ, u, s, As; μ₀ = 1e-2, μ₁ = 1.0, σ=10.0)

Computes a Cauchy step s = P(x - α g) - x for

min  q(s) = ½‖As + Fx‖² - ½‖Fx‖²     s.t.    ‖s‖ ≦ μ₁Δ,  ℓ ≦ x + s ≦ u,

with the sufficient decrease condition

q(s) ≦ μ₀gᵀs,

where g = AᵀFx.

JSOSolvers.find_betaMethod
find_beta(m, mdot∇f, norm_∇f, μk, fk, max_obj_mem, β, θ1, θ2)

Compute βmax which saturates the contribution of the momentum term to the gradient. βmax is computed such that the two gradient-related conditions (first one is relaxed in the nonmonotone case) are ensured:

  1. (1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀm + (max_obj_mem - fk)/μk ≥ θ1 * ‖∇f(xk)‖²
  2. ‖∇f(xk)‖ ≥ θ2 * ‖(1 - βmax) .* ∇f(xk) .+ βmax .* m‖

with m the momentum term, mdot∇f = ∇f(xk)ᵀm, fk the objective value at the current iterate (the model value at s = 0), and max_obj_mem the largest objective value over the last M successful iterations.
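
A hedged sketch of upper bounds on β implied by the two conditions above (the bound from condition 2 uses the triangle inequality; the function name and the use of norm(m) are illustrative assumptions, not the package implementation):

using LinearAlgebra
function find_beta_sketch(m, mdot∇f, norm_∇f, μk, fk, max_obj_mem, β, θ1, θ2)
  d1 = norm_∇f^2 - mdot∇f    # from (1): β * d1 ≤ (1 - θ1)‖∇f‖² + (max_obj_mem - fk)/μk
  b1 = d1 > 0 ? ((1 - θ1) * norm_∇f^2 + (max_obj_mem - fk) / μk) / d1 : β
  d2 = norm(m) - norm_∇f     # from (2), via ‖(1-β)∇f + βm‖ ≤ (1-β)‖∇f‖ + β‖m‖
  b2 = d2 > 0 ? (1 - θ2) * norm_∇f / (θ2 * d2) : β
  return min(β, b1, b2)      # saturate the momentum contribution
end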

JSOSolvers.foMethod
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)

A First-Order (FO) model-based method for unconstrained optimization. It supports quadratic regularization and a trust-region method with a linear model.

For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)

R2 and TR run fo with the dedicated step_backend keyword argument.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
  • γ1 = T(1/2), γ2 = T(2): regularization update parameters.
  • αmax = 1/eps(T): maximum step parameter for the algorithm.
  • max_eval::Int = -1: maximum number of evaluations of the objective function.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
JSOSolvers.fomoMethod
fomo(nlp; kwargs...)

A First-Order with MOmentum (FOMO) model-based method for unconstrained optimization. It supports quadratic regularization and a trust-region method with a linear model.

Algorithm description

The step is computed along

d = -(1 - βmax) .* ∇f(xk) .- βmax .* mk,

with mk the memory of past gradients (initialized at 0), updated at each successful iteration as

mk .= ∇f(xk) .* (1 - βmax) .+ mk .* βmax,

and βmax ∈ [0, β] chosen to ensure that d is gradient-related, i.e., that the following two conditions are satisfied:

(1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk ≥ θ1 * ‖∇f(xk)‖²   (1)
‖∇f(xk)‖ ≥ θ2 * ‖(1 - βmax) .* ∇f(xk) .+ βmax .* mk‖          (2)

In the nonmonotone case, (1) becomes

(1 - βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk + (fm - fk)/μk ≥ θ1 * ‖∇f(xk)‖²,

with fm the largest objective value over the last M successful iterations and fk = f(xk).

Advanced usage

For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = FomoSolver(nlp)
solve!(solver, nlp; kwargs...)

No momentum: if the user does not wish to use momentum (β = 0), it is recommended to use the memory-optimized fo method.

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
  • γ1 = T(1/2), γ2 = T(2): regularization update parameters.
  • γ3 = T(1/2): momentum factor βmax update parameter in case of an unsuccessful iteration.
  • αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
  • max_eval::Int = -1: maximum number of objective evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • β = T(0.9) ∈ [0,1): target decay rate for the momentum.
  • θ1 = T(0.1): momentum contribution parameter for convergence condition (1).
  • θ2 = T(eps(T)^(1/3)): momentum contribution parameter for convergence condition (2).
  • M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fomo(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FomoSolver(nlp);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
JSOSolvers.init_alphaMethod
init_alpha(norm_∇fk::T, ::r2_step)
init_alpha(norm_∇fk::T, ::tr_step)

Initialize the step size parameter α so that the first step is the same for the quadratic regularization and trust-region methods.

JSOSolvers.lbfgsMethod
lbfgs(nlp; kwargs...)

An implementation of a limited memory BFGS line-search method for unconstrained minimization.

For advanced usage, first define a LBFGSSolver to preallocate the memory used in the algorithm, and then call solve!.

solver = LBFGSSolver(nlp; mem::Int = 5)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • mem::Int = 5: memory parameter of the lbfgs algorithm.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • τ₁::T = T(0.9999): slope factor in the Wolfe condition when performing the line search.
  • bk_max::Int = 25: maximum number of backtracks when performing the line search.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • verbose_subsolver::Int = 0: if > 0, display iteration information every verbose_subsolver iteration of the subsolver.

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
stats = lbfgs(nlp)

# output

"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
solver = LBFGSSolver(nlp; mem = 5);
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
JSOSolvers.normM!Method
normM!(n, x, M, z)

Weighted norm of x with respect to M, i.e., sqrt(x' * M * x), using z as a workspace.
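
A minimal sketch consistent with this description, assuming M supports in-place multiplication (the actual normM! may differ in details):

using LinearAlgebra
function normM_sketch!(n, x, M, z)
  # n: problem dimension (unused in this sketch)
  mul!(z, M, x)           # z ← M * x, using z as preallocated workspace
  return sqrt(dot(x, z))  # √(xᵀ M x)
end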

JSOSolvers.projected_gauss_newton!Method
projected_gauss_newton!(solver, x, A, Fx, Δ, cgtol, s, ℓ, u; max_cgiter = 50, max_time = Inf, subsolver_verbose = 0)

Compute an approximate solution d for

min q(d) = ½‖Ad + Fx‖² - ½‖Fx‖²     s.t.    ℓ ≦ x + d ≦ u,  ‖d‖ ≦ Δ

starting from s. The steps are computed using the conjugate gradient method projected on the active bounds.

JSOSolvers.projected_line_search!Method

s = projected_line_search!(x, H, g, d, ℓ, u, Hs; μ₀ = 1e-2)

Performs a projected line search, searching for a step size t such that

½sᵀHs + gᵀs ≦ μ₀gᵀs,

where s = P(x + t * d) - x, while remaining on the same face as x + d. Backtracking is performed from t = 1.0. x is updated in place.
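
A hedged backtracking sketch of this search, halving t from 1.0 until the sufficient decrease test holds (proj, the loop bound, and the omission of the same-face constraint are illustrative simplifications):

using LinearAlgebra
proj(y, ℓ, u) = clamp.(y, ℓ, u)
function projected_backtrack_sketch(x, H, g, d, ℓ, u; μ₀ = 1e-2, tmin = 1e-10)
  t = 1.0
  s = proj(x .+ t .* d, ℓ, u) .- x
  # backtrack while ½sᵀHs + gᵀs > μ₀gᵀs
  while 0.5 * dot(s, H * s) + dot(s, g) > μ₀ * dot(s, g) && t > tmin
    t /= 2
    s = proj(x .+ t .* d, ℓ, u) .- x
  end
  return s
end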

JSOSolvers.projected_line_search_ls!Method

s = projected_line_search_ls!(x, A, g, d, ℓ, u, As, s; μ₀ = 1e-2)

Performs a projected line search, searching for a step size t such that

½‖As + Fx‖² ≤ ½‖Fx‖² + μ₀FxᵀAs

where s = P(x + t * d) - x, while remaining on the same face as x + d. Backtracking is performed from t = 1.0. x is updated in place.

JSOSolvers.projected_newton!Method
projected_newton!(solver, x, H, g, Δ, cgtol, ℓ, u, s, Hs; max_time = Inf, max_cgiter = 50, subsolver_verbose = 0)

Compute an approximate solution d for

min q(d) = ½dᵀHd + gᵀd     s.t.    ℓ ≦ x + d ≦ u,  ‖d‖ ≦ Δ

starting from s. The steps are computed using the conjugate gradient method projected on the active bounds.

JSOSolvers.step_multMethod
step_mult(α::T, norm_∇fk::T, ::r2_step)
step_mult(α::T, norm_∇fk::T, ::tr_step)

Compute the step size multiplier: α for quadratic regularization (::r2_step) and α/norm_∇fk for trust region (::tr_step).
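
For illustration, the multiplier amounts to the following (a sketch using a Symbol in place of the package's step backend types):

step_mult_sketch(α, norm_∇fk, backend::Symbol) =
  backend == :r2 ? α : α / norm_∇fk   # :r2 → α, :tr → α / ‖∇f(xk)‖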

JSOSolvers.tronMethod
tron(nls; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained nonlinear least-squares problems:

min ½‖F(x)‖²    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolverNLS(nls, subsolver_type::Type{<:KrylovSolver} = LsmrSolver; kwargs...)
solve!(solver, nls; kwargs...)

Arguments

  • nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • subsolver_type::Type{<:KrylovSolver} = LsmrSolver: Krylov.jl solver type used as the subproblem solver, see JSOSolvers.tronls_allowed_subsolvers for a list.
  • μ₀::T = T(1e-2): algorithm parameter in (0, 0.5).
  • μ₁::T = one(T): algorithm parameter in (0, +∞).
  • σ::T = T(10): algorithm parameter in (1, +∞).
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • max_cgiter::Int = 50: subproblem iteration limit.
  • cgtol::T = T(0.1): subproblem tolerance.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
  • Fatol::T = √eps(T): absolute tolerance on the residual.
  • Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

The keyword arguments of TronSolverNLS are passed to the TRONTrustRegion constructor.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

References

This is an adaptation for bound-constrained nonlinear least-squares problems of the TRON method described in

Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
stats = tron(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
solver = TronSolverNLS(nls)
stats = solve!(solver, nls)
JSOSolvers.tronMethod
tron(nlp; kwargs...)

A pure Julia implementation of a trust-region solver for bound-constrained optimization:

    min f(x)    s.t.    ℓ ≦ x ≦ u

For advanced usage, first define a TronSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = TronSolver(nlp; kwargs...)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • μ₀::T = T(1e-2): algorithm parameter in (0, 0.5).
  • μ₁::T = one(T): algorithm parameter in (0, +∞).
  • σ::T = T(10): algorithm parameter in (1, +∞).
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • max_cgiter::Int = 50: subproblem's iteration limit.
  • use_only_objgrad::Bool = false: If true, the algorithm uses only the function objgrad instead of obj and grad.
  • cgtol::T = T(0.1): subproblem tolerance.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

The keyword arguments of TronSolver are passed to the TRONTrustRegion constructor.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

References

TRON is described in

Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
solver = TronSolver(nlp);
stats = solve!(solver, nlp)
JSOSolvers.trunkMethod
trunk(nls; kwargs...)

A pure Julia implementation of a trust-region solver for nonlinear least-squares problems:

min ½‖F(x)‖²

For advanced usage, first define a TrunkSolverNLS to preallocate the memory used in the algorithm, and then call solve!:

solver = TrunkSolverNLS(nls, subsolver_type::Type{<:KrylovSolver} = LsmrSolver)
solve!(solver, nls; kwargs...)

Arguments

  • nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • Fatol::T = √eps(T): absolute tolerance on the residual.
  • Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • bk_max::Int = 10: algorithm parameter.
  • monotone::Bool = true: algorithm parameter.
  • nm_itmax::Int = 25: algorithm parameter.
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.

See JSOSolvers.trunkls_allowed_subsolvers for a list of available KrylovSolver types.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

References

This implementation follows the description given in

A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.

Examples

using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
stats = trunk(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
solver = TrunkSolverNLS(nls)
stats = solve!(solver, nls)
JSOSolvers.trunkMethod
trunk(nlp; kwargs...)

A trust-region solver for unconstrained optimization using exact second derivatives.

For advanced usage, first define a TrunkSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = TrunkSolver(nlp, subsolver_type::Type{<:KrylovSolver} = CgSolver)
solve!(solver, nlp; kwargs...)

Arguments

  • nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.

The keyword arguments may include

  • subsolver_logger::AbstractLogger = NullLogger(): subproblem's logger.
  • x::V = nlp.meta.x0: the initial guess.
  • atol::T = √eps(T): absolute tolerance.
  • rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
  • max_eval::Int = -1: maximum number of objective function evaluations.
  • max_time::Float64 = 30.0: maximum time limit in seconds.
  • max_iter::Int = typemax(Int): maximum number of iterations.
  • bk_max::Int = 10: algorithm parameter.
  • monotone::Bool = true: algorithm parameter.
  • nm_itmax::Int = 25: algorithm parameter.
  • verbose::Int = 0: if > 0, display iteration information every verbose iteration.
  • subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
  • M: linear operator that models a Hermitian positive-definite matrix of size n; passed to Krylov subsolvers.

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

References

This implementation follows the description given in

A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857

The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.

Examples

using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = TrunkSolver(nlp)
stats = solve!(solver, nlp)