Reference

RegularizedOptimization.ALSolverType
AL(reg_nlp; kwargs...)

An augmented Lagrangian method for constrained regularized optimization, namely problems in the form

minimize    f(x) + h(x)
subject to  lvar ≤ x ≤ uvar,
            lcon ≤ c(x) ≤ ucon

where f: ℝⁿ → ℝ, c: ℝⁿ → ℝᵐ and their derivatives are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

At each iteration, an iterate x is computed as an approximate solution of the subproblem

minimize    L(x;y,μ) + h(x)
subject to  lvar ≤ x ≤ uvar

where y is an estimate of the Lagrange multiplier vector for the constraints lcon ≤ c(x) ≤ ucon, μ is the penalty parameter and L(⋅;y,μ) is the augmented Lagrangian function defined by

L(x;y,μ) := f(x) - yᵀc(x) + ½ μ ‖c(x)‖².

For advanced usage, first define a solver "ALSolver" to preallocate the memory used in the algorithm, and then call solve!:

solver = ALSolver(reg_nlp)
solve!(solver, reg_nlp)

stats = GenericExecutionStats(reg_nlp.model)
solver = ALSolver(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments

  • reg_nlp::AbstractRegularizedNLPModel: a regularized optimization problem, see RegularizedProblems.jl, consisting of model representing a smooth optimization problem, see NLPModels.jl, and a regularizer h such as those defined in ProximalOperators.jl.

The objective and gradient of model will be accessed. The Hessian of model may or may not be accessed, depending on the subsolver adopted. If it is, the Hessian is accessed as an abstract operator and need not be the exact Hessian.

Keyword arguments

  • x::AbstractVector: a primal initial guess (default: reg_nlp.model.meta.x0)
  • y::AbstractVector: a dual initial guess (default: reg_nlp.model.meta.y0)
  • atol::T = √eps(T): absolute optimality tolerance;
  • ctol::T = atol: absolute feasibility tolerance;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • max_iter::Int = 10000: maximum number of iterations;
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_eval::Int = -1: maximum number of evaluations of the objective function (negative number means unlimited);
  • subsolver::AbstractOptimizationSolver = has_bounds(nlp) ? TR : R2: the procedure used to compute a step (e.g. PG, R2, TR or TRDH);
  • subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver;
  • init_penalty::T = T(10): initial penalty parameter;
  • factor_penalty_up::T = T(2): multiplicative factor to increase the penalty parameter;
  • factor_primal_linear_improvement::T = T(3/4): fraction to declare sufficient improvement of feasibility;
  • init_subtol::T = T(0.1): initial subproblem tolerance;
  • factor_decrease_subtol::T = T(1/4): multiplicative factor to decrease the subproblem tolerance;
  • dual_safeguard = (nlp::AugLagModel) -> nothing: in-place function to modify, as needed, the dual estimate.
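
For example, a minimal call might look as follows. This is only a sketch: the problem data (an ADNLPModel with a single equality constraint), the NormL1 regularizer, the RegularizedNLPModel wrapper and the tolerance values are illustrative choices, not part of the AL interface documented here.

using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization

# Smooth part: a small constrained model (illustrative data).
f(x) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2
c(x) = [x[1] + x[2]]
model = ADNLPModel(f, [-1.2; 1.0], c, [1.0], [1.0])  # lcon = ucon = 1.0

# Nonsmooth part: an ℓ₁ regularizer from ProximalOperators.jl.
h = NormL1(0.1)

reg_nlp = RegularizedNLPModel(model, h)
stats = AL(reg_nlp; atol = 1e-4, ctol = 1e-4, verbose = 1)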

Output

  • stats::GenericExecutionStats: solution and other info, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in reg_nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.y: current dual estimate;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
    • stats.elapsed_time: elapsed time in seconds;
    • stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
    • stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function.
source
RegularizedOptimization.LMModelType
LMModel(j_prod!, jt_prod!, F, v, σ, xk)

Given the unconstrained optimization problem:

\[\min \tfrac{1}{2} \| F(x) \|^2,\]

this model represents the smooth LM subproblem:

\[\min_s \ \tfrac{1}{2} \| F(x) + J(x)s \|^2 + \tfrac{1}{2} σ \|s\|^2\]

where J is the Jacobian of F at xk, represented via matrix-free operations. j_prod!(xk, s, out) computes J(xk) * s, and jt_prod!(xk, r, out) computes J(xk)' * r.

σ > 0 is a regularization parameter and v is a vector of the same size as F(xk) used for intermediary computations.

source
RegularizedOptimization.R2NModelType
R2NModel(B, ∇f, v, σ, x0)

Given the unconstrained optimization problem:

\[\min f(x),\]

this model represents the smooth R2N subproblem:

\[\min_s \ ∇f^T s + \tfrac{1}{2} s^T B s + \tfrac{1}{2} σ \|s\|^2\]

where B is either an approximation of the Hessian of f or the Hessian itself and ∇f represents the gradient of f at x0. σ > 0 is a regularization parameter and v is a vector of the same size as x0 used for intermediary computations.

source
RegularizedOptimization.ALMethod
AL(reg_nlp; kwargs...)

An augmented Lagrangian method for constrained regularized optimization, namely problems in the form

minimize    f(x) + h(x)
subject to  lvar ≤ x ≤ uvar,
            lcon ≤ c(x) ≤ ucon

where f: ℝⁿ → ℝ, c: ℝⁿ → ℝᵐ and their derivatives are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

At each iteration, an iterate x is computed as an approximate solution of the subproblem

minimize    L(x;y,μ) + h(x)
subject to  lvar ≤ x ≤ uvar

where y is an estimate of the Lagrange multiplier vector for the constraints lcon ≤ c(x) ≤ ucon, μ is the penalty parameter and L(⋅;y,μ) is the augmented Lagrangian function defined by

L(x;y,μ) := f(x) - yᵀc(x) + ½ μ ‖c(x)‖².

For advanced usage, first define a solver "ALSolver" to preallocate the memory used in the algorithm, and then call solve!:

solver = ALSolver(reg_nlp)
solve!(solver, reg_nlp)

stats = GenericExecutionStats(reg_nlp.model)
solver = ALSolver(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments

  • reg_nlp::AbstractRegularizedNLPModel: a regularized optimization problem, see RegularizedProblems.jl, consisting of model representing a smooth optimization problem, see NLPModels.jl, and a regularizer h such as those defined in ProximalOperators.jl.

The objective and gradient of model will be accessed. The Hessian of model may or may not be accessed, depending on the subsolver adopted. If it is, the Hessian is accessed as an abstract operator and need not be the exact Hessian.

Keyword arguments

  • x::AbstractVector: a primal initial guess (default: reg_nlp.model.meta.x0)
  • y::AbstractVector: a dual initial guess (default: reg_nlp.model.meta.y0)
  • atol::T = √eps(T): absolute optimality tolerance;
  • ctol::T = atol: absolute feasibility tolerance;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • max_iter::Int = 10000: maximum number of iterations;
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_eval::Int = -1: maximum number of evaluations of the objective function (negative number means unlimited);
  • subsolver::AbstractOptimizationSolver = has_bounds(nlp) ? TR : R2: the procedure used to compute a step (e.g. PG, R2, TR or TRDH);
  • subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver;
  • init_penalty::T = T(10): initial penalty parameter;
  • factor_penalty_up::T = T(2): multiplicative factor to increase the penalty parameter;
  • factor_primal_linear_improvement::T = T(3/4): fraction to declare sufficient improvement of feasibility;
  • init_subtol::T = T(0.1): initial subproblem tolerance;
  • factor_decrease_subtol::T = T(1/4): multiplicative factor to decrease the subproblem tolerance;
  • dual_safeguard = (nlp::AugLagModel) -> nothing: in-place function to modify, as needed, the dual estimate.

Output

  • stats::GenericExecutionStats: solution and other info, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in reg_nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.y: current dual estimate;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
    • stats.elapsed_time: elapsed time in seconds;
    • stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
    • stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function.
source
RegularizedOptimization.LMMethod
LM(reg_nls; kwargs...)

A Levenberg-Marquardt method for the problem

min ½ ‖F(x)‖² + h(x)

where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

At each iteration, a step s is computed as an approximate solution of

min  ½ ‖J(x) s + F(x)‖² + ½ σ ‖s‖² + ψ(s; x)

where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; xₖ) is either h(xₖ + s) or an approximation of h(xₖ + s), ‖⋅‖ is the ℓ₂ norm and σₖ > 0 is the regularization parameter.

For advanced usage, first define a solver "LMSolver" to preallocate the memory used in the algorithm, and then call solve!:

solver = LMSolver(reg_nls; subsolver = R2Solver, m_monotone = 1)
solve!(solver, reg_nls)

stats = RegularizedExecutionStats(reg_nls)
solve!(solver, reg_nls, stats)

Arguments

  • reg_nls::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess;
  • nonlinear::Bool = true: whether the function F is nonlinear or not.
  • atol::T = √eps(T): absolute tolerance;
  • rtol::T = √eps(T): relative tolerance;
  • neg_tol::T = zero(T): negative tolerance;
  • max_eval::Int = -1: maximum number of evaluations of the objective function (negative number means unlimited);
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_iter::Int = 10000: maximum number of iterations;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • σmin::T = eps(T): minimum value of the regularization parameter;
  • σk::T = eps(T)^(1 / 5): initial value of the regularization parameter;
  • η1::T = √√eps(T): successful iteration threshold;
  • η2::T = T(0.9): very successful iteration threshold;
  • γ::T = T(3): regularization parameter multiplier, σ := σ/γ when the iteration is very successful and σ := σγ when the iteration is unsuccessful;
  • θ::T = 1/(1 + eps(T)^(1 / 5)): the model decrease fraction with respect to the decrease of the Cauchy model;
  • m_monotone::Int = 1: monotonicity parameter. By default, LM is monotone but the non-monotone variant will be used if m_monotone > 1;
  • subsolver = R2Solver: the solver used to solve the subproblems.

The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
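
As an illustration, a basic call on a small nonlinear least-squares problem could be sketched as follows; the ADNLSModel, the RegularizedNLSModel wrapper from RegularizedProblems.jl and the regularizer weight are assumptions made for this example.

using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization

# Residual F: ℝ² → ℝ² of a small least-squares problem (illustrative data).
F(x) = [x[1] - 1; 10 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2; 1.0], 2)

h = NormL1(0.1)                        # nonsmooth term
reg_nls = RegularizedNLSModel(nls, h)  # assumed wrapper for least-squares models

stats = LM(reg_nls; atol = 1e-5, verbose = 10)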

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.xk: current iterate;
  • solver.∇fk: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
    • stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
    • stats.elapsed_time: elapsed time in seconds.
source
RegularizedOptimization.LMTRMethod
LMTR(reg_nls; kwargs...)
LMTR(nls, h, χ, options; kwargs...)

A trust-region Levenberg-Marquardt method for the problem

min ½ ‖F(x)‖² + h(x)

where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous and proper.

At each iteration, a step s is computed as an approximate solution of

min  ½ ‖J(x) s + F(x)‖₂² + ψ(s; x)  subject to  ‖s‖ ≤ Δ

where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), ‖⋅‖ is a user-defined norm and Δ > 0 is a trust-region radius.

For advanced usage, first define a solver "LMTRSolver" to preallocate the memory used in the algorithm, and then call solve!:

solver = LMTRSolver(reg_nls; χ = NormLinf(one(T)), subsolver = R2Solver)
solve!(solver, reg_nls)

stats = RegularizedExecutionStats(reg_nls)
solve!(solver, reg_nls, stats)

Arguments

  • reg_nls::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess;
  • atol::T = √eps(T): absolute tolerance;
  • sub_atol::T = atol: subsolver absolute tolerance;
  • rtol::T = √eps(T): relative tolerance;
  • neg_tol::T = zero(T): negative tolerance;
  • max_eval::Int = -1: maximum number of evaluations of the objective function (negative number means unlimited);
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_iter::Int = 10000: maximum number of iterations;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • Δk::T = eps(T): initial value of the trust-region radius;
  • η1::T = √√eps(T): successful iteration threshold;
  • η2::T = T(0.9): very successful iteration threshold;
  • γ::T = T(3): trust-region radius parameter multiplier, Δ := Δ*γ when the iteration is very successful and Δ := Δ/γ when the iteration is unsuccessful;
  • α::T = 1/eps(T): TODO
  • β::T = 1/eps(T): TODO
  • χ = NormLinf(1): norm used to define the trust-region;
  • subsolver::S = R2Solver: subsolver used to solve the subproblem that appears at each iteration.

The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
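
For example, reusing the reg_nls problem sketched in the LM section above, a call with an ℓ∞ trust-region might look like this; the keyword values are illustrative only.

using ProximalOperators, RegularizedOptimization

# ℓ∞ trust-region of initial radius 1 (illustrative settings).
stats = LMTR(reg_nls; χ = NormLinf(1.0), Δk = 1.0, atol = 1e-5)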

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.xk: current iterate;
  • solver.∇fk: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
    • stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
    • stats.elapsed_time: elapsed time in seconds.
source
RegularizedOptimization.R2Method
R2(reg_nlp; kwargs…)

A first-order quadratic regularization method for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

About each iterate xₖ, a step sₖ is computed as a solution of

min  φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs is the Taylor linear approximation of f about xₖ, ψ(s; xₖ) is either h(xₖ + s) or an approximation of h(xₖ + s), ‖⋅‖ is a user-defined norm and σₖ > 0 is the regularization parameter.

For advanced usage, first define a solver "R2Solver" to preallocate the memory used in the algorithm, and then call solve!:

solver = R2Solver(reg_nlp)
solve!(solver, reg_nlp)

stats = RegularizedExecutionStats(reg_nlp)
solver = R2Solver(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments

  • reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess;
  • atol::T = √eps(T): absolute tolerance;
  • rtol::T = √eps(T): relative tolerance;
  • neg_tol::T = eps(T)^(1 / 4): negative tolerance
  • max_eval::Int = -1: maximum number of evaluations of the objective function (negative number means unlimited);
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_iter::Int = 10000: maximum number of iterations;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • σmin::T = eps(T): minimum value of the regularization parameter;
  • η1::T = √√eps(T): successful iteration threshold;
  • η2::T = T(0.9): very successful iteration threshold;
  • ν::T = eps(T)^(1 / 5): multiplicative inverse of the regularization parameter: ν = 1/σ;
  • γ::T = T(3): regularization parameter multiplier, σ := σ/γ when the iteration is very successful and σ := σγ when the iteration is unsuccessful.

The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
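
For example, a basic call on an unconstrained problem could be sketched as follows; the ADNLPModel, the ℓ₀ regularizer weight and the tolerances are illustrative choices.

using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization

fr(x) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2   # smooth part (illustrative)
nlp = ADNLPModel(fr, [-1.2; 1.0])
reg_nlp = RegularizedNLPModel(nlp, NormL0(0.5))  # ℓ₀ regularizer is prox-bounded

stats = R2(reg_nlp; atol = 1e-6, rtol = 1e-6, verbose = 100)
stats.solver_specific[:smooth_obj] + stats.solver_specific[:nonsmooth_obj]  # ≈ stats.objective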

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.xk: current iterate;
  • solver.∇fk: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
    • stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
    • stats.elapsed_time: elapsed time in seconds.
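
For instance, a callback that stops R2 after a fixed number of iterations could be sketched as follows; note that passing it through a callback keyword argument is an assumption based on the usual JuliaSmoothOptimizers convention and is not part of the keyword list documented above.

# Illustrative callback: request a stop after 50 iterations.
function my_callback(nlp, solver, stats)
  if stats.iter ≥ 50
    stats.status = :user  # :user signals a user-requested stop
  end
end

stats = R2(reg_nlp; callback = my_callback)  # callback keyword assumed, see note above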
source
RegularizedOptimization.R2DHMethod
R2DH(reg_nlp; kwargs…)

A second-order quadratic regularization method for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ is C¹, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

About each iterate xₖ, a step sₖ is computed as a solution of

min  φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀDₖs is a diagonal quadratic approximation of f about xₖ, ψ(s; xₖ) is either h(xₖ + s) or an approximation of h(xₖ + s), ‖⋅‖ is the ℓ₂ norm and σₖ > 0 is the regularization parameter.

For advanced usage, first define a solver R2DHSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = R2DHSolver(reg_nlp; m_monotone = 6)
solve!(solver, reg_nlp)

or

stats = RegularizedExecutionStats(reg_nlp)
solver = R2DHSolver(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments

  • reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess;
  • atol::T = √eps(T): absolute tolerance;
  • rtol::T = √eps(T): relative tolerance;
  • neg_tol::T = eps(T)^(1 / 4): negative tolerance
  • max_eval::Int = -1: maximum number of evaluations of the objective function (negative number means unlimited);
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_iter::Int = 10000: maximum number of iterations;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • σmin::T = eps(T): minimum value of the regularization parameter;
  • σk::T = eps(T)^(1 / 5): initial value of the regularization parameter;
  • η1::T = √√eps(T): successful iteration threshold;
  • η2::T = T(0.9): very successful iteration threshold;
  • γ::T = T(3): regularization parameter multiplier, σ := σ/γ when the iteration is very successful and σ := σγ when the iteration is unsuccessful.
  • θ::T = 1/(1 + eps(T)^(1 / 5)): the model decrease fraction with respect to the decrease of the Cauchy model;
  • m_monotone::Int = 6: monotonicity parameter. By default, R2DH is non-monotone but the monotone variant can be used with m_monotone = 1.

The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
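
As a short sketch, the monotone variant of R2DH on the reg_nlp problem defined in the R2 section above could be called as follows; the initial σk and verbosity are illustrative.

using RegularizedOptimization

stats = R2DH(reg_nlp; m_monotone = 1, σk = 1.0, verbose = 50)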

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.xk: current iterate;
  • solver.∇fk: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
    • stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
    • stats.elapsed_time: elapsed time in seconds.
source
RegularizedOptimization.R2NMethod
R2N(reg_nlp; kwargs…)

A second-order quadratic regularization method for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ is C¹, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

About each iterate xₖ, a step sₖ is computed as a solution of

min  φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀBₖs is a quadratic approximation of f about xₖ, ψ(s; xₖ) is either h(xₖ + s) or an approximation of h(xₖ + s), ‖⋅‖ is the ℓ₂ norm and σₖ > 0 is the regularization parameter.

For advanced usage, first define a solver "R2NSolver" to preallocate the memory used in the algorithm, and then call solve!:

solver = R2NSolver(reg_nlp; m_monotone = 1)
solve!(solver, reg_nlp)

stats = RegularizedExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments

  • reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess;
  • atol::T = √eps(T): absolute tolerance;
  • rtol::T = √eps(T): relative tolerance;
  • neg_tol::T = eps(T)^(1 / 4): negative tolerance;
  • max_eval::Int = -1: maximum number of evaluations of the objective function (negative number means unlimited);
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_iter::Int = 10000: maximum number of iterations;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • σmin::T = eps(T): minimum value of the regularization parameter;
  • σk::T = eps(T)^(1 / 5): initial value of the regularization parameter;
  • η1::T = √√eps(T): successful iteration threshold;
  • η2::T = T(0.9): very successful iteration threshold;
  • γ::T = T(3): regularization parameter multiplier, σ := σ/γ when the iteration is very successful and σ := σγ when the iteration is unsuccessful;
  • θ::T = 1/(1 + eps(T)^(1 / 5)): the model decrease fraction with respect to the decrease of the Cauchy model;
  • opnorm_maxiter::Int = 5: how many iterations of the power method to use to compute the operator norm of Bₖ. If a negative number is provided, then Arpack is used instead;
  • m_monotone::Int = 1: monotonicity parameter. By default, R2N is monotone but the non-monotone variant will be used if m_monotone > 1;
  • sub_kwargs::NamedTuple = NamedTuple(): a named tuple containing the keyword arguments to be sent to the subsolver. The solver will fail if invalid keyword arguments are provided to the subsolver. For example, if the subsolver is R2Solver, you can pass sub_kwargs = (max_iter = 100, σmin = 1e-6,).

The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
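
For example, a sketch of R2N where the smooth part is wrapped in an L-BFGS approximation, so that Bₖ is a quasi-Newton operator, with options forwarded to the R2 subsolver; LBFGSModel comes from NLPModelsModifiers.jl and all values shown are illustrative.

using NLPModelsModifiers, ProximalOperators, RegularizedProblems, RegularizedOptimization

qn_nlp = LBFGSModel(nlp)                           # nlp as in the R2 example above
reg_qn = RegularizedNLPModel(qn_nlp, NormL0(0.5))

stats = R2N(reg_qn; sub_kwargs = (max_iter = 200,), verbose = 10)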

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.xk: current iterate;
  • solver.∇fk: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
    • stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
    • stats.elapsed_time: elapsed time in seconds.

Similarly to the callback, when using a quasi-Newton approximation, two functions, qn_update_y!(nlp, solver, stats) and qn_copy!(nlp, solver, stats) are called at each update of the approximation. Namely, the former computes the y vector for which the pair (s, y) is pushed into the approximation. By default, y := ∇fk⁻ - ∇fk. The latter allows the user to tell which values should be copied for the next iteration. By default, only the gradient is copied: ∇fk⁻ .= ∇fk. This might be useful when using R2N in a constrained optimization context, when the gradient of the Lagrangian function is pushed at each iteration rather than the gradient of the objective function.

source
RegularizedOptimization.RegularizedExecutionStatsMethod
GenericExecutionStats(reg_nlp :: AbstractRegularizedNLPModel{T, V})

Construct a GenericExecutionStats object from an AbstractRegularizedNLPModel. More specifically, construct a GenericExecutionStats on the NLPModel of reg_nlp and add three solver-specific entries, namely :smooth_obj, :nonsmooth_obj and :xi. This is useful for reducing the number of allocations when calling solve!(..., reg_nlp, stats) and should be used by default. Warning: this should not be used when adding other solver-specific entries that do not have the current scalar type.
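
A typical pattern, sketched here for R2 on the reg_nlp problem from the R2 example above:

solver = R2Solver(reg_nlp)
stats = RegularizedExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats)            # solves in place, reusing solver and stats

stats.solver_specific[:smooth_obj]        # f at the final iterate
stats.solver_specific[:nonsmooth_obj]     # h at the final iterate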

source
RegularizedOptimization.TRMethod
TR(reg_nlp; kwargs…)
TR(nlp, h, χ, options; kwargs...)

A trust-region method for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous and proper.

About each iterate xₖ, a step sₖ is computed as an approximate solution of

min  φ(s; xₖ) + ψ(s; xₖ)  subject to  ‖s‖ ≤ Δₖ

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Bₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and Δₖ > 0 is the trust-region radius.

For advanced usage, first define a solver "TRSolver" to preallocate the memory used in the algorithm, and then call solve!:

solver = TRSolver(reg_nlp; χ =  NormLinf(1), subsolver = R2Solver, m_monotone = 1)
solve!(solver, reg_nlp)

stats = RegularizedExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments

  • reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess;
  • atol::T = √eps(T): absolute tolerance;
  • rtol::T = √eps(T): relative tolerance;
  • neg_tol::T = eps(T)^(1 / 4): negative tolerance (see stopping conditions below);
  • max_eval::Int = -1: maximum number of evaluations of the objective function (negative number means unlimited);
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_iter::Int = 10000: maximum number of iterations;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • Δk::T = T(1): initial value of the trust-region radius;
  • η1::T = √√eps(T): successful iteration threshold;
  • η2::T = T(0.9): very successful iteration threshold;
  • γ::T = T(3): trust-region radius parameter multiplier. Must satisfy γ > 1. The trust-region radius is updated as Δ := Δ*γ when the iteration is very successful and Δ := Δ/γ when the iteration is unsuccessful;
  • m_monotone::Int = 1: monotonicity parameter. By default, TR is monotone but the non-monotone variant will be used if m_monotone > 1;
  • opnorm_maxiter::Int = 5: how many iterations of the power method to use to compute the operator norm of Bₖ. If a negative number is provided, then Arpack is used instead;
  • χ::F = NormLinf(1): norm used to define the trust-region;
  • subsolver::S = R2Solver: subsolver used to solve the subproblem that appears at each iteration.
  • sub_kwargs::NamedTuple = NamedTuple(): a named tuple containing the keyword arguments to be sent to the subsolver. The solver will fail if invalid keyword arguments are provided to the subsolver. For example, if the subsolver is R2Solver, you can pass sub_kwargs = (max_iter = 100, σmin = 1e-6,).

The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
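
For example, reusing the quasi-Newton model reg_qn sketched in the R2N section above, a call with an ℓ∞ trust-region and a subsolver iteration limit might look like this; all values are illustrative.

using ProximalOperators, RegularizedOptimization

stats = TR(reg_qn; χ = NormLinf(1.0), Δk = 1.0, sub_kwargs = (max_iter = 500,))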

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.xk: current iterate;
  • solver.∇fk: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
    • stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
    • stats.elapsed_time: elapsed time in seconds.
source
RegularizedOptimization.TRDHMethod
TRDH(reg_nlp; kwargs…)
TRDH(nlp, h, χ, options; kwargs...)
TRDH(f, ∇f!, h, options, x0)

A trust-region method with diagonal Hessian approximation for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

About each iterate xₖ, a step sₖ is computed as an approximate solution of

min  φ(s; xₖ) + ψ(s; xₖ)  subject to  ‖s‖ ≤ Δₖ

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Dₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm, Dₖ is a diagonal Hessian approximation and Δₖ > 0 is the trust-region radius.

For advanced usage, first define a solver "TRDHSolver" to preallocate the memory used in the algorithm, and then call solve!:

solver = TRDHSolver(reg_nlp; D = nothing, χ = NormLinf(1))
solve!(solver, reg_nlp)

stats = RegularizedExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments

  • reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess;
  • atol::T = √eps(T): absolute tolerance;
  • rtol::T = √eps(T): relative tolerance;
  • neg_tol::T = eps(T)^(1 / 4): negative tolerance;
  • max_eval::Int = -1: maximum number of evaluations of the objective function (negative number means unlimited);
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_iter::Int = 10000: maximum number of iterations;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • Δk::T = T(1): initial value of the trust-region radius;
  • η1::T = √√eps(T): successful iteration threshold;
  • η2::T = T(0.9): very successful iteration threshold;
  • γ::T = T(3): trust-region radius parameter multiplier. Must satisfy γ > 1. The trust-region radius is updated as Δ := Δ*γ when the iteration is very successful and Δ := Δ/γ when the iteration is unsuccessful;
  • reduce_TR::Bool = true: see explanation on the stopping criterion below;
  • χ::F = NormLinf(1): norm used to define the trust-region;
  • D::L = nothing: diagonal quasi-Newton approximation used for the model φ. If nothing is provided and reg_nlp.model is not a diagonal quasi-Newton approximation, a spectral gradient approximation is used.

The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure. Alternatively, if reduce_TR = true, then ξₖ₁ := f(xₖ) + h(xₖ) - φ(sₖ₁; xₖ) - ψ(sₖ₁; xₖ) is used instead of ξₖ, where sₖ₁ is the Cauchy point.
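
For example, a call relying on the default spectral-gradient diagonal (D = nothing) might be sketched as follows, reusing the reg_nlp problem from the R2 example above; the keyword values are illustrative.

using ProximalOperators, RegularizedOptimization

stats = TRDH(reg_nlp; χ = NormLinf(1.0), reduce_TR = true, verbose = 10)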

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.xk: current iterate;
  • solver.∇fk: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
    • stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
    • stats.elapsed_time: elapsed time in seconds.
source