Reference
Contents
Index
JSOSolvers.FoSolver
JSOSolvers.FomoSolver
JSOSolvers.LBFGSSolver
JSOSolvers.R2Solver
JSOSolvers.TronSolver
JSOSolvers.TronSolverNLS
JSOSolvers.TrunkSolver
JSOSolvers.TrunkSolverNLS
JSOSolvers.R2
JSOSolvers.TR
JSOSolvers.cauchy!
JSOSolvers.cauchy_ls!
JSOSolvers.find_beta
JSOSolvers.fo
JSOSolvers.fomo
JSOSolvers.init_alpha
JSOSolvers.lbfgs
JSOSolvers.normM!
JSOSolvers.projected_gauss_newton!
JSOSolvers.projected_line_search!
JSOSolvers.projected_line_search_ls!
JSOSolvers.projected_newton!
JSOSolvers.step_mult
JSOSolvers.tron
JSOSolvers.tron
JSOSolvers.trunk
JSOSolvers.trunk
JSOSolvers.FoSolver
— Type
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)
A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.
For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)
R2 and TR run fo with the dedicated step_backend keyword argument.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.
Keyword arguments
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
γ1 = T(1/2), γ2 = T(2): regularization update parameters.
αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
max_eval::Int = -1: maximum number of evaluations of the objective function.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
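For example, the following sketch stops fo after five iterations. The limit of 5 is an arbitrary illustrative choice, and the snippet assumes the callback is passed through the callback keyword argument, as is customary across JSOSolvers solvers.
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
# Request a stop once 5 iterations have been performed.
cb = (nlp, solver, stats) -> begin
  if stats.iter ≥ 5
    stats.status = :user
  end
end
stats = fo(nlp; callback = cb)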
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)
# output
"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"
JSOSolvers.FomoSolver
— Type
fomo(nlp; kwargs...)
A First-Order with MOmentum (FOMO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.
Algorithm description
The step is computed along
d = - (1-βmax) .* ∇f(xk) - βmax .* mk
with mk the memory of past gradients (initialized at 0), updated at each successful iteration as
mk .= ∇f(xk) .* (1 - βmax) .+ mk .* βmax
and βmax ∈ [0,β] chosen so as to ensure d is gradient-related, i.e., the following two conditions are satisfied:
(1-βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk ≥ θ1 * ‖∇f(xk)‖² (1)
‖∇f(xk)‖ ≥ θ2 * ‖(1-βmax) .* ∇f(xk) + βmax .* mk‖ (2)
In the nonmonotone case, (1) rewrites as
(1-βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk + (fm - fk)/μk ≥ θ1 * ‖∇f(xk)‖²,
with fm the largest objective value over the last M successful iterations, and fk = f(xk).
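As a plain illustration of the direction and momentum update above, here is standalone code with made-up values (these are not the solver's internal variables):
∇fk = [1.0, -2.0, 0.5]                    # current gradient ∇f(xk)
mk = [0.8, -1.5, 0.3]                     # momentum term (memory of past gradients)
βmax = 0.9
d = -(1 - βmax) .* ∇fk .- βmax .* mk      # search direction
mk = ∇fk .* (1 - βmax) .+ mk .* βmax      # momentum update after a successful iteration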
Advanced usage
For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = FomoSolver(nlp)
solve!(solver, nlp; kwargs...)
No momentum: if the user does not wish to use momentum (β = 0), it is recommended to use the memory-optimized fo method.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.
Keyword arguments
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
γ1 = T(1/2), γ2 = T(2): regularization update parameters.
γ3 = T(1/2): momentum factor βmax update parameter in case of an unsuccessful iteration.
αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
max_eval::Int = -1: maximum number of objective evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
β = T(0.9) ∈ [0,1): target decay rate for the momentum.
θ1 = T(0.1): momentum contribution parameter for convergence condition (1).
θ2 = T(eps(T)^(1/3)): momentum contribution parameter for convergence condition (2).
M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fomo(nlp)
# output
"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FomoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"
JSOSolvers.LBFGSSolver
— Type
lbfgs(nlp; kwargs...)
An implementation of a limited-memory BFGS line-search method for unconstrained minimization.
For advanced usage, first define an LBFGSSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = LBFGSSolver(nlp; mem::Int = 5)
solve!(solver, nlp; kwargs...)
Arguments
nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
x::V = nlp.meta.x0: the initial guess.
mem::Int = 5: memory parameter of the lbfgs algorithm.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
τ₁::T = T(0.9999): slope factor in the Wolfe condition when performing the line search.
bk_max::Int = 25: maximum number of backtracks when performing the line search.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
verbose_subsolver::Int = 0: if > 0, display iteration information every verbose_subsolver iteration of the subsolver.
Output
The returned value is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
stats = lbfgs(nlp)
# output
"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
solver = LBFGSSolver(nlp; mem = 5);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"
JSOSolvers.R2Solver
— Type
`R2Solver` is deprecated, please check the documentation of `R2`.
JSOSolvers.TronSolver
— Type
tron(nlp; kwargs...)
A pure Julia implementation of a trust-region solver for bound-constrained optimization:
min f(x) s.t. ℓ ≦ x ≦ u
For advanced usage, first define a TronSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = TronSolver(nlp; kwargs...)
solve!(solver, nlp; kwargs...)
Arguments
nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
x::V = nlp.meta.x0: the initial guess.
μ₀::T = T(1e-2): algorithm parameter in (0, 0.5).
μ₁::T = one(T): algorithm parameter in (0, +∞).
σ::T = T(10): algorithm parameter in (1, +∞).
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
max_cgiter::Int = 50: subproblem's iteration limit.
use_only_objgrad::Bool = false: if true, the algorithm uses only the function objgrad instead of obj and grad.
cgtol::T = T(0.1): subproblem tolerance.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
The keyword arguments of TronSolver
are passed to the TRONTrustRegion
constructor.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
References
TRON is described in
Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
solver = TronSolver(nlp);
stats = solve!(solver, nlp)
JSOSolvers.TronSolverNLS
— Type
tron(nls; kwargs...)
A pure Julia implementation of a trust-region solver for bound-constrained nonlinear least-squares problems:
min ½‖F(x)‖² s.t. ℓ ≦ x ≦ u
For advanced usage, first define a TronSolverNLS to preallocate the memory used in the algorithm, and then call solve!:
solver = TronSolverNLS(nls, subsolver_type::Type{<:KrylovSolver} = LsmrSolver; kwargs...)
solve!(solver, nls; kwargs...)
Arguments
nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
x::V = nlp.meta.x0: the initial guess.
subsolver_type::Type{<:KrylovSolver} = LsmrSolver: Krylov.jl method used as subproblem solver, see JSOSolvers.tronls_allowed_subsolvers for a list.
μ₀::T = T(1e-2): algorithm parameter in (0, 0.5).
μ₁::T = one(T): algorithm parameter in (0, +∞).
σ::T = T(10): algorithm parameter in (1, +∞).
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
max_cgiter::Int = 50: subproblem iteration limit.
cgtol::T = T(0.1): subproblem tolerance.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
Fatol::T = √eps(T): absolute tolerance on the residual.
Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
The keyword arguments of TronSolverNLS
are passed to the TRONTrustRegion
constructor.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
References
This is an adaptation for bound-constrained nonlinear least-squares problems of the TRON method described in
Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075
Examples
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
stats = tron(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
solver = TronSolverNLS(nls)
stats = solve!(solver, nls)
JSOSolvers.TrunkSolver
— Type
trunk(nlp; kwargs...)
A trust-region solver for unconstrained optimization using exact second derivatives.
For advanced usage, first define a TrunkSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = TrunkSolver(nlp, subsolver_type::Type{<:KrylovSolver} = CgSolver)
solve!(solver, nlp; kwargs...)
Arguments
nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
subsolver_logger::AbstractLogger = NullLogger(): subproblem's logger.
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
bk_max::Int = 10: algorithm parameter.
monotone::Bool = true: algorithm parameter.
nm_itmax::Int = 25: algorithm parameter.
verbose::Int = 0: if > 0, display iteration information every verbose iteration.
subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
M: linear operator that models a Hermitian positive-definite matrix of size n; passed to Krylov subsolvers.
Output
The returned value is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
References
This implementation follows the description given in
A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857
The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = TrunkSolver(nlp)
stats = solve!(solver, nlp)
JSOSolvers.TrunkSolverNLS
— Type
trunk(nls; kwargs...)
A pure Julia implementation of a trust-region solver for nonlinear least-squares problems:
min ½‖F(x)‖²
For advanced usage, first define a TrunkSolverNLS to preallocate the memory used in the algorithm, and then call solve!:
solver = TrunkSolverNLS(nls, subsolver_type::Type{<:KrylovSolver} = LsmrSolver)
solve!(solver, nls; kwargs...)
Arguments
nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
Fatol::T = √eps(T): absolute tolerance on the residual.
Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
bk_max::Int = 10: algorithm parameter.
monotone::Bool = true: algorithm parameter.
nm_itmax::Int = 25: algorithm parameter.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
See JSOSolvers.trunkls_allowed_subsolvers for a list of available KrylovSolver.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
References
This implementation follows the description given in
A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857
The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.
Examples
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
stats = trunk(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
solver = TrunkSolverNLS(nls)
stats = solve!(solver, nls)
JSOSolvers.R2
— Method
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)
A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.
For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)
R2 and TR run fo with the dedicated step_backend keyword argument.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.
Keyword arguments
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
γ1 = T(1/2), γ2 = T(2): regularization update parameters.
αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
max_eval::Int = -1: maximum number of evaluations of the objective function.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)
# output
"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"
JSOSolvers.TR
— Method
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)
A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.
For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)
R2 and TR run fo with the dedicated step_backend keyword argument.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.
Keyword arguments
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
γ1 = T(1/2), γ2 = T(2): regularization update parameters.
αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
max_eval::Int = -1: maximum number of evaluations of the objective function.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)
# output
"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"
JSOSolvers.cauchy!
— Method
α, s = cauchy!(x, H, g, Δ, ℓ, u, s, Hs; μ₀ = 1e-2, μ₁ = 1.0, σ = 10.0)
Computes a Cauchy step s = P(x - α g) - x
for
min q(s) = ¹/₂sᵀHs + gᵀs s.t. ‖s‖ ≦ μ₁Δ, ℓ ≦ x + s ≦ u,
with the sufficient decrease condition
q(s) ≦ μ₀sᵀg.
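Here P denotes the projection onto the box [ℓ, u]. A minimal sketch of the step candidate for a given α, with illustrative values (not the in-place routine itself):
x = [0.5, 1.5, -0.2]
g = [1.0, -1.0, 2.0]
ℓ = zeros(3)
u = ones(3)
α = 0.1
P(y) = clamp.(y, ℓ, u)           # componentwise projection onto [ℓ, u]
s = P(x .- α .* g) .- x          # Cauchy step s = P(x - α g) - x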
JSOSolvers.cauchy_ls!
— Method
α, s = cauchy_ls!(x, A, Fx, g, Δ, ℓ, u, s, As; μ₀ = 1e-2, μ₁ = 1.0, σ = 10.0)
Computes a Cauchy step s = P(x - α g) - x
for
min q(s) = ½‖As + Fx‖² - ½‖Fx‖² s.t. ‖s‖ ≦ μ₁Δ, ℓ ≦ x + s ≦ u,
with the sufficient decrease condition
q(s) ≦ μ₀gᵀs,
where g = AᵀFx.
JSOSolvers.find_beta
— Method
find_beta(m, mdot∇f, norm_∇f, μk, fk, max_obj_mem, β, θ1, θ2)
Compute βmax which saturates the contribution of the momentum term to the gradient. βmax
is computed such that the two gradient-related conditions (first one is relaxed in the nonmonotone case) are ensured:
- (1-βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀm + (max_obj_mem - fk)/μk ≥ θ1 * ‖∇f(xk)‖²
- ‖∇f(xk)‖ ≥ θ2 * ‖(1-βmax) * ∇f(xk) .+ βmax .* m‖
with m
the momentum term and mdot∇f = ∇f(xk)ᵀm
, fk
the model at s=0, max_obj_mem
the largest objective value over the last M successful iterations.
JSOSolvers.fo
— Method
fo(nlp; kwargs...)
R2(nlp; kwargs...)
TR(nlp; kwargs...)
A First-Order (FO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.
For advanced usage, first define a FoSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = FoSolver(nlp)
solve!(solver, nlp; kwargs...)
R2 and TR run fo with the dedicated step_backend keyword argument.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.
Keyword arguments
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
γ1 = T(1/2), γ2 = T(2): regularization update parameters.
αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
max_eval::Int = -1: maximum number of evaluations of the objective function.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fo(nlp) # run with step_backend = r2_step(), equivalent to R2(nlp)
# output
"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"
JSOSolvers.fomo
— Method
fomo(nlp; kwargs...)
A First-Order with MOmentum (FOMO) model-based method for unconstrained optimization. Supports quadratic regularization and a trust-region method with a linear model.
Algorithm description
The step is computed along
d = - (1-βmax) .* ∇f(xk) - βmax .* mk
with mk the memory of past gradients (initialized at 0), updated at each successful iteration as
mk .= ∇f(xk) .* (1 - βmax) .+ mk .* βmax
and βmax ∈ [0,β] chosen so as to ensure d is gradient-related, i.e., the following two conditions are satisfied:
(1-βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk ≥ θ1 * ‖∇f(xk)‖² (1)
‖∇f(xk)‖ ≥ θ2 * ‖(1-βmax) .* ∇f(xk) + βmax .* mk‖ (2)
In the nonmonotone case, (1) rewrites as
(1-βmax) * ‖∇f(xk)‖² + βmax * ∇f(xk)ᵀmk + (fm - fk)/μk ≥ θ1 * ‖∇f(xk)‖²,
with fm the largest objective value over the last M successful iterations, and fk = f(xk).
Advanced usage
For advanced usage, first define a FomoSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = FomoSolver(nlp)
solve!(solver, nlp; kwargs...)
No momentum: if the user does not wish to use momentum (β = 0), it is recommended to use the memory-optimized fo method.
Arguments
nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.
Keyword arguments
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance: algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
η1 = eps(T)^(1/4), η2 = T(0.95): step acceptance parameters.
γ1 = T(1/2), γ2 = T(2): regularization update parameters.
γ3 = T(1/2): momentum factor βmax update parameter in case of an unsuccessful iteration.
αmax = 1/eps(T): maximum step parameter for the fomo algorithm.
max_eval::Int = -1: maximum number of objective evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
β = T(0.9) ∈ [0,1): target decay rate for the momentum.
θ1 = T(0.1): momentum contribution parameter for convergence condition (1).
θ2 = T(eps(T)^(1/3)): momentum contribution parameter for convergence condition (2).
M = 1: requires objective decrease over the M last iterates (nonmonotone context). M = 1 implies monotone behaviour.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
step_backend = r2_step(): step computation mode. Options are r2_step() for a quadratic regularization step and tr_step() for a first-order trust-region step.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = fomo(nlp)
# output
"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = FomoSolver(nlp);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"
JSOSolvers.init_alpha
— Method
init_alpha(norm_∇fk::T, ::r2_step)
init_alpha(norm_∇fk::T, ::tr_step)
Initialize the α step size parameter. Ensures the first step is the same for the quadratic regularization and trust-region methods.
JSOSolvers.lbfgs
— Method
lbfgs(nlp; kwargs...)
An implementation of a limited-memory BFGS line-search method for unconstrained minimization.
For advanced usage, first define an LBFGSSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = LBFGSSolver(nlp; mem::Int = 5)
solve!(solver, nlp; kwargs...)
Arguments
nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
x::V = nlp.meta.x0: the initial guess.
mem::Int = 5: memory parameter of the lbfgs algorithm.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
τ₁::T = T(0.9999): slope factor in the Wolfe condition when performing the line search.
bk_max::Int = 25: maximum number of backtracks when performing the line search.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
verbose_subsolver::Int = 0: if > 0, display iteration information every verbose_subsolver iteration of the subsolver.
Output
The returned value is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
stats = lbfgs(nlp)
# output
"Execution stats: first-order stationary"
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3));
solver = LBFGSSolver(nlp; mem = 5);
stats = solve!(solver, nlp)
# output
"Execution stats: first-order stationary"
JSOSolvers.normM!
— Method
normM!(n, x, M, z)
Weighted norm of x
with respect to M
, i.e., z = sqrt(x' * M * x)
. Uses z
as workspace.
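As a rough, allocating illustration of the quantity computed (the actual normM! works in place and uses z as workspace):
using LinearAlgebra
x = [1.0, 2.0, 3.0]
M = Diagonal([2.0, 1.0, 0.5])    # any Hermitian positive-definite matrix or operator
weighted_norm = sqrt(dot(x, M * x))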
JSOSolvers.projected_gauss_newton!
— Method
projected_gauss_newton!(solver, x, A, Fx, Δ, gctol, s, max_cgiter, ℓ, u; max_cgiter = 50, max_time = Inf, subsolver_verbose = 0)
Compute an approximate solution d
for
min q(d) = ½‖Ad + Fx‖² - ½‖Fx‖² s.t. ℓ ≦ x + d ≦ u, ‖d‖ ≦ Δ
starting from s
. The steps are computed using the conjugate gradient method projected on the active bounds.
JSOSolvers.projected_line_search!
— Method
s = projected_line_search!(x, H, g, d, ℓ, u, Hs; μ₀ = 1e-2)
Performs a projected line search, searching for a step size t
such that
0.5sᵀHs + sᵀg ≦ μ₀sᵀg,
where s = P(x + t * d) - x
, while remaining on the same face as x + d
. Backtracking is performed from t = 1.0. x
is updated in place.
JSOSolvers.projected_line_search_ls!
— Method
s = projected_line_search_ls!(x, A, g, d, ℓ, u, As, s; μ₀ = 1e-2)
Performs a projected line search, searching for a step size t
such that
½‖As + Fx‖² ≤ ½‖Fx‖² + μ₀FxᵀAs
where s = P(x + t * d) - x
, while remaining on the same face as x + d
. Backtracking is performed from t = 1.0. x
is updated in place.
JSOSolvers.projected_newton!
— Method
projected_newton!(solver, x, H, g, Δ, cgtol, ℓ, u, s, Hs; max_time = Inf, max_cgiter = 50, subsolver_verbose = 0)
Compute an approximate solution d
for
min q(d) = ¹/₂dᵀHd + dᵀg s.t. ℓ ≦ x + d ≦ u, ‖d‖ ≦ Δ
starting from s
. The steps are computed using the conjugate gradient method projected on the active bounds.
JSOSolvers.step_mult
— Method
step_mult(α::T, norm_∇fk::T, ::r2_step)
step_mult(α::T, norm_∇fk::T, ::tr_step)
Compute the step size multiplier: α for quadratic regularization (::r2_step) and α/norm_∇fk for trust region (::tr_step).
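A small sketch of the two behaviours, using hypothetical stand-in types rather than the package's own r2_step and tr_step:
struct R2Step end                                   # stand-in for ::r2_step
struct TRStep end                                   # stand-in for ::tr_step
step_mult(α, norm_∇fk, ::R2Step) = α                # quadratic regularization
step_mult(α, norm_∇fk, ::TRStep) = α / norm_∇fk     # first-order trust region
step_mult(0.5, 4.0, R2Step())    # returns 0.5
step_mult(0.5, 4.0, TRStep())    # returns 0.125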
JSOSolvers.tron
— Method
tron(nls; kwargs...)
A pure Julia implementation of a trust-region solver for bound-constrained nonlinear least-squares problems:
min ½‖F(x)‖² s.t. ℓ ≦ x ≦ u
For advanced usage, first define a TronSolverNLS to preallocate the memory used in the algorithm, and then call solve!:
solver = TronSolverNLS(nls, subsolver_type::Type{<:KrylovSolver} = LsmrSolver; kwargs...)
solve!(solver, nls; kwargs...)
Arguments
nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
x::V = nlp.meta.x0: the initial guess.
subsolver_type::Type{<:KrylovSolver} = LsmrSolver: Krylov.jl method used as subproblem solver, see JSOSolvers.tronls_allowed_subsolvers for a list.
μ₀::T = T(1e-2): algorithm parameter in (0, 0.5).
μ₁::T = one(T): algorithm parameter in (0, +∞).
σ::T = T(10): algorithm parameter in (1, +∞).
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
max_cgiter::Int = 50: subproblem iteration limit.
cgtol::T = T(0.1): subproblem tolerance.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
Fatol::T = √eps(T): absolute tolerance on the residual.
Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
The keyword arguments of TronSolverNLS
are passed to the TRONTrustRegion
constructor.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
References
This is an adaptation for bound-constrained nonlinear least-squares problems of the TRON method described in
Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075
Examples
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
stats = tron(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2, zeros(2), 0.5 * ones(2))
solver = TronSolverNLS(nls)
stats = solve!(solver, nls)
JSOSolvers.tron
— Method
tron(nlp; kwargs...)
A pure Julia implementation of a trust-region solver for bound-constrained optimization:
min f(x) s.t. ℓ ≦ x ≦ u
For advanced usage, first define a TronSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = TronSolver(nlp; kwargs...)
solve!(solver, nlp; kwargs...)
Arguments
nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
x::V = nlp.meta.x0: the initial guess.
μ₀::T = T(1e-2): algorithm parameter in (0, 0.5).
μ₁::T = one(T): algorithm parameter in (0, +∞).
σ::T = T(10): algorithm parameter in (1, +∞).
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
max_cgiter::Int = 50: subproblem's iteration limit.
use_only_objgrad::Bool = false: if true, the algorithm uses only the function objgrad instead of obj and grad.
cgtol::T = T(0.1): subproblem tolerance.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖x - Proj(x - ∇f(xᵏ))‖ ≤ atol + rtol * ‖∇f(x⁰)‖. Proj denotes here the projection over the bounds.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
The keyword arguments of TronSolver
are passed to the TRONTrustRegion
constructor.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
References
TRON is described in
Chih-Jen Lin and Jorge J. Moré, *Newton's Method for Large Bound-Constrained
Optimization Problems*, SIAM J. Optim., 9(4), 1100–1127, 1999.
DOI: 10.1137/S1052623498345075
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
stats = tron(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x), ones(3), zeros(3), 2 * ones(3));
solver = TronSolver(nlp);
stats = solve!(solver, nlp)
JSOSolvers.trunk
— Method
trunk(nls; kwargs...)
A pure Julia implementation of a trust-region solver for nonlinear least-squares problems:
min ½‖F(x)‖²
For advanced usage, first define a TrunkSolverNLS to preallocate the memory used in the algorithm, and then call solve!:
solver = TrunkSolverNLS(nls, subsolver_type::Type{<:KrylovSolver} = LsmrSolver)
solve!(solver, nls; kwargs...)
Arguments
nls::AbstractNLSModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
Fatol::T = √eps(T): absolute tolerance on the residual.
Frtol::T = eps(T): relative tolerance on the residual, the algorithm stops when ‖F(xᵏ)‖ ≤ Fatol + Frtol * ‖F(x⁰)‖.
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
bk_max::Int = 10: algorithm parameter.
monotone::Bool = true: algorithm parameter.
nm_itmax::Int = 25: algorithm parameter.
verbose::Int = 0: if > 0, display iteration details every verbose iteration.
subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
See JSOSolvers.trunkls_allowed_subsolvers for a list of available KrylovSolver.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
References
This implementation follows the description given in
A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857
The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.
Examples
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
stats = trunk(nls)
using JSOSolvers, ADNLPModels
F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)
solver = TrunkSolverNLS(nls)
stats = solve!(solver, nls)
JSOSolvers.trunk
— Method
trunk(nlp; kwargs...)
A trust-region solver for unconstrained optimization using exact second derivatives.
For advanced usage, first define a TrunkSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = TrunkSolver(nlp, subsolver_type::Type{<:KrylovSolver} = CgSolver)
solve!(solver, nlp; kwargs...)
Arguments
nlp::AbstractNLPModel{T, V} represents the model to solve, see NLPModels.jl.
The keyword arguments may include
subsolver_logger::AbstractLogger = NullLogger(): subproblem's logger.
x::V = nlp.meta.x0: the initial guess.
atol::T = √eps(T): absolute tolerance.
rtol::T = √eps(T): relative tolerance, the algorithm stops when ‖∇f(xᵏ)‖ ≤ atol + rtol * ‖∇f(x⁰)‖.
max_eval::Int = -1: maximum number of objective function evaluations.
max_time::Float64 = 30.0: maximum time limit in seconds.
max_iter::Int = typemax(Int): maximum number of iterations.
bk_max::Int = 10: algorithm parameter.
monotone::Bool = true: algorithm parameter.
nm_itmax::Int = 25: algorithm parameter.
verbose::Int = 0: if > 0, display iteration information every verbose iteration.
subsolver_verbose::Int = 0: if > 0, display iteration information every subsolver_verbose iteration of the subsolver.
M: linear operator that models a Hermitian positive-definite matrix of size n; passed to Krylov subsolvers.
Output
The returned value is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
solver.x: current iterate;
solver.gx: current gradient;
stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  stats.dual_feas: norm of the residual, for instance, the norm of the gradient for unconstrained problems;
  stats.iter: current iteration counter;
  stats.objective: current objective function value;
  stats.status: current status of the algorithm. Should be :unknown unless the algorithm attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
  stats.elapsed_time: elapsed time in seconds.
References
This implementation follows the description given in
A. R. Conn, N. I. M. Gould, and Ph. L. Toint,
Trust-Region Methods, volume 1 of MPS/SIAM Series on Optimization.
SIAM, Philadelphia, USA, 2000.
DOI: 10.1137/1.9780898719857
The main algorithm follows the basic trust-region method described in Section 6. The backtracking linesearch follows Section 10.3.2. The nonmonotone strategy follows Section 10.1.3, Algorithm 10.1.2.
Examples
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
stats = trunk(nlp)
using JSOSolvers, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3))
solver = TrunkSolver(nlp)
stats = solve!(solver, nlp)