Reference
Index
- RegularizedOptimization.ALSolver
- RegularizedOptimization.LMModel
- RegularizedOptimization.R2NModel
- RegularizedOptimization.AL
- RegularizedOptimization.LM
- RegularizedOptimization.LMTR
- RegularizedOptimization.R2
- RegularizedOptimization.R2DH
- RegularizedOptimization.R2N
- RegularizedOptimization.RegularizedExecutionStats
- RegularizedOptimization.TR
- RegularizedOptimization.TRDH
- RegularizedOptimization.project_y!
RegularizedOptimization.ALSolver — Type

AL(reg_nlp; kwargs...)

An augmented Lagrangian method for constrained regularized optimization, namely problems in the form
minimize f(x) + h(x)
subject to lvar ≤ x ≤ uvar,
lcon ≤ c(x) ≤ ucon

where f: ℝⁿ → ℝ, c: ℝⁿ → ℝᵐ and their derivatives are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
At each iteration, an iterate x is computed as an approximate solution of the subproblem
minimize L(x;y,μ) + h(x)
subject to lvar ≤ x ≤ uvar

where y is an estimate of the Lagrange multiplier vector for the constraints lcon ≤ c(x) ≤ ucon, μ is the penalty parameter and L(⋅;y,μ) is the augmented Lagrangian function defined by
L(x;y,μ) := f(x) - yᵀc(x) + ½ μ ‖c(x)‖².

For advanced usage, first define a solver "ALSolver" to preallocate the memory used in the algorithm, and then call solve!:
solver = ALSolver(reg_nlp)
solve!(solver, reg_nlp)
stats = GenericExecutionStats(reg_nlp.model)
solver = ALSolver(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments
reg_nlp::AbstractRegularizedNLPModel: a regularized optimization problem, see RegularizedProblems.jl, consisting of model representing a smooth optimization problem, see NLPModels.jl, and a regularizer h such as those defined in ProximalOperators.jl.
The objective and gradient of model will be accessed. The Hessian of model may or may not be accessed, depending on the subsolver. If it is, the Hessian is accessed as an abstract operator and need not be the exact Hessian.
Keyword arguments
- x::AbstractVector: a primal initial guess (default: reg_nlp.model.meta.x0);
- y::AbstractVector: a dual initial guess (default: reg_nlp.model.meta.y0);
- atol::T = √eps(T): absolute optimality tolerance;
- ctol::T = atol: absolute feasibility tolerance;
- verbose::Int = 0: if > 0, display iteration details every verbose iteration;
- max_iter::Int = 10000: maximum number of iterations;
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_eval::Int = -1: maximum number of evaluations of the objective function (a negative number means unlimited);
- subsolver::AbstractOptimizationSolver = has_bounds(nlp) ? TR : R2: the procedure used to compute a step (e.g. PG, R2, TR or TRDH);
- subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver;
- init_penalty::T = T(10): initial penalty parameter;
- factor_penalty_up::T = T(2): multiplicative factor to increase the penalty parameter;
- factor_primal_linear_improvement::T = T(3/4): fraction to declare sufficient improvement of feasibility;
- init_subtol::T = T(0.1): initial subproblem tolerance;
- factor_decrease_subtol::T = T(1/4): multiplicative factor to decrease the subproblem tolerance;
- dual_safeguard = (nlp::AugLagModel) -> nothing: in-place function to modify, as needed, the dual estimate.
Output
stats::GenericExecutionStats: solution and other info, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in reg_nlp and solver. Notably, you can access, and modify, the following:
- solver.x: current iterate;
- solver.y: current dual estimate;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds;
  - stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
  - stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function.
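As an illustration of the callback described above, here is a minimal sketch of a callback that stops AL after a fixed number of outer iterations by setting stats.status = :user. The callback keyword shown in the commented call is an assumption (the usual SolverCore-style interface), not something guaranteed by this docstring; adapt it to the installed version.

# Illustrative callback: request a stop after 20 outer iterations.
my_callback = (nlp, solver, stats) -> begin
  if stats.iter ≥ 20
    stats.status = :user  # anything other than :unknown stops the algorithm; :user signals intent
  end
end

# stats = AL(reg_nlp; callback = my_callback)  # assumes a `callback` keyword is accepted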
RegularizedOptimization.LMModel — Type

LMModel(j_prod!, jt_prod!, F, v, σ, xk)

Given the unconstrained optimization problem:
\[\min \tfrac{1}{2} \| F(x) \|^2,\]
this model represents the smooth LM subproblem:
\[\min_s \ \tfrac{1}{2} \| F(x) + J(x)s \|^2 + \tfrac{1}{2} σ \|s\|^2\]
where J is the Jacobian of F at xk, represented via matrix-free operations. j_prod!(xk, s, out) computes J(xk) * s, and jt_prod!(xk, r, out) computes J(xk)' * r.
σ > 0 is a regularization parameter and v is a vector of the same size as F(xk) used for intermediary computations.
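For concreteness, here is a minimal sketch of matrix-free products of the kind this model expects, for the affine residual F(x) = Ax - b; the closures j_prod! and jt_prod! and the commented constructor call are illustrative assumptions, not code taken from the package.

using LinearAlgebra

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]   # Jacobian of the affine residual (constant here)
b = [1.0, 0.0, -1.0]
F(x) = A * x - b                   # residual F: ℝ² → ℝ³

j_prod!(xk, s, out)  = mul!(out, A, s)    # out .= J(xk) * s
jt_prod!(xk, r, out) = mul!(out, A', r)   # out .= J(xk)' * r

xk = zeros(2)
v  = similar(b)                    # workspace of the same size as F(xk)
σ  = 1.0                           # regularization parameter
# sub = LMModel(j_prod!, jt_prod!, F(xk), v, σ, xk)  # assumed use of the constructor above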
RegularizedOptimization.R2NModel — Type

R2NModel(B, ∇f, v, σ, x0)

Given the unconstrained optimization problem:
\[\min f(x),\]
this model represents the smooth R2N subproblem:
\[\min_s \ ∇f^T s + \tfrac{1}{2} s^T B s + \tfrac{1}{2} σ \|s\|^2\]
where B is either an approximation of the Hessian of f or the Hessian itself and ∇f represents the gradient of f at x0. σ > 0 is a regularization parameter and v is a vector of the same size as x0 used for intermediary computations.
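As a worked illustration of the model above, the sketch below evaluates the R2N subproblem objective ∇fᵀs + ½ sᵀBs + ½ σ‖s‖² for a given step; the helper name r2n_model_value is purely illustrative and not part of the package.

using LinearAlgebra

# Illustrative helper: value of the R2N subproblem model at a step s.
r2n_model_value(∇f, B, σ, s) = dot(∇f, s) + 0.5 * dot(s, B * s) + 0.5 * σ * dot(s, s)

∇f = [1.0, -2.0]
B  = [4.0 1.0; 1.0 2.0]   # Hessian or Hessian approximation
σ  = 0.5
s  = [-0.2, 0.8]
r2n_model_value(∇f, B, σ, s)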
RegularizedOptimization.AL — Method

AL(reg_nlp; kwargs...)

An augmented Lagrangian method for constrained regularized optimization, namely problems in the form
minimize f(x) + h(x)
subject to lvar ≤ x ≤ uvar,
lcon ≤ c(x) ≤ ucon

where f: ℝⁿ → ℝ, c: ℝⁿ → ℝᵐ and their derivatives are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
At each iteration, an iterate x is computed as an approximate solution of the subproblem
minimize L(x;y,μ) + h(x)
subject to lvar ≤ x ≤ uvar

where y is an estimate of the Lagrange multiplier vector for the constraints lcon ≤ c(x) ≤ ucon, μ is the penalty parameter and L(⋅;y,μ) is the augmented Lagrangian function defined by
L(x;y,μ) := f(x) - yᵀc(x) + ½ μ ‖c(x)‖².

For advanced usage, first define a solver "ALSolver" to preallocate the memory used in the algorithm, and then call solve!:
solver = ALSolver(reg_nlp)
solve!(solver, reg_nlp)
stats = GenericExecutionStats(reg_nlp.model)
solver = ALSolver(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments
reg_nlp::AbstractRegularizedNLPModel: a regularized optimization problem, see RegularizedProblems.jl, consisting of model representing a smooth optimization problem, see NLPModels.jl, and a regularizer h such as those defined in ProximalOperators.jl.
The objective and gradient of model will be accessed. The Hessian of model may or may not be accessed, depending on the subsolver. If it is, the Hessian is accessed as an abstract operator and need not be the exact Hessian.
Keyword arguments
- x::AbstractVector: a primal initial guess (default: reg_nlp.model.meta.x0);
- y::AbstractVector: a dual initial guess (default: reg_nlp.model.meta.y0);
- atol::T = √eps(T): absolute optimality tolerance;
- ctol::T = atol: absolute feasibility tolerance;
- verbose::Int = 0: if > 0, display iteration details every verbose iteration;
- max_iter::Int = 10000: maximum number of iterations;
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_eval::Int = -1: maximum number of evaluations of the objective function (a negative number means unlimited);
- subsolver::AbstractOptimizationSolver = has_bounds(nlp) ? TR : R2: the procedure used to compute a step (e.g. PG, R2, TR or TRDH);
- subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver;
- init_penalty::T = T(10): initial penalty parameter;
- factor_penalty_up::T = T(2): multiplicative factor to increase the penalty parameter;
- factor_primal_linear_improvement::T = T(3/4): fraction to declare sufficient improvement of feasibility;
- init_subtol::T = T(0.1): initial subproblem tolerance;
- factor_decrease_subtol::T = T(1/4): multiplicative factor to decrease the subproblem tolerance;
- dual_safeguard = (nlp::AugLagModel) -> nothing: in-place function to modify, as needed, the dual estimate.
Output
stats::GenericExecutionStats: solution and other info, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in reg_nlp and solver. Notably, you can access, and modify, the following:
- solver.x: current iterate;
- solver.y: current dual estimate;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds;
  - stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
  - stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function.
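A minimal end-to-end usage sketch for AL on a small bound- and equality-constrained problem. It assumes ADNLPModels.jl for the smooth model, NormL1 from ProximalOperators.jl as the regularizer, and a RegularizedNLPModel(model, h) constructor from RegularizedProblems.jl; the data is illustrative.

using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization

# Smooth model: min (x₁ - 1)² + (x₂ - 2)²  s.t.  0 ≤ x ≤ 2  and  x₁ + x₂ = 2.
f(x) = (x[1] - 1)^2 + (x[2] - 2)^2
c(x) = [x[1] + x[2]]
model = ADNLPModel(f, [0.5, 0.5], zeros(2), 2 * ones(2), c, [2.0], [2.0])

h = NormL1(0.1)                          # nonsmooth regularizer
reg_nlp = RegularizedNLPModel(model, h)  # assumed constructor from RegularizedProblems.jl

stats = AL(reg_nlp; verbose = 1)
stats.solution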
RegularizedOptimization.LM — Method

LM(reg_nls; kwargs...)

A Levenberg-Marquardt method for the problem
min ½ ‖F(x)‖² + h(x)

where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
At each iteration, a step s is computed as an approximate solution of
min ½ ‖J(x) s + F(x)‖² + ½ σ ‖s‖² + ψ(s; x)

where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; xₖ) is either h(xₖ + s) or an approximation of h(xₖ + s), ‖⋅‖ is the ℓ₂ norm and σₖ > 0 is the regularization parameter.
For advanced usage, first define a solver "LMSolver" to preallocate the memory used in the algorithm, and then call solve!:
solver = LMSolver(reg_nls; subsolver = R2Solver, m_monotone = 1)
solve!(solver, reg_nls)
stats = RegularizedExecutionStats(reg_nls)
solve!(solver, reg_nls, stats)

Arguments
reg_nls::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.
Keyword arguments
- x::V = nlp.meta.x0: the initial guess;
- nonlinear::Bool = true: whether the function F is nonlinear or not;
- atol::T = √eps(T): absolute tolerance;
- rtol::T = √eps(T): relative tolerance;
- neg_tol::T = zero(T): negative tolerance;
- max_eval::Int = -1: maximum number of evaluations of the objective function (a negative number means unlimited);
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_iter::Int = 10000: maximum number of iterations;
- verbose::Int = 0: if > 0, display iteration details every verbose iteration;
- σmin::T = eps(T): minimum value of the regularization parameter;
- σk::T = eps(T)^(1 / 5): initial value of the regularization parameter;
- η1::T = √√eps(T): successful iteration threshold;
- η2::T = T(0.9): very successful iteration threshold;
- γ::T = T(3): regularization parameter multiplier, σ := σ/γ when the iteration is very successful and σ := σγ when the iteration is unsuccessful;
- θ::T = 1/(1 + eps(T)^(1 / 5)): the model decrease fraction with respect to the decrease of the Cauchy model;
- m_monotone::Int = 1: monotonicity parameter. By default, LM is monotone but the non-monotone variant will be used if m_monotone > 1;
- subsolver = R2Solver: the solver used to solve the subproblems.
The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
- solver.xk: current iterate;
- solver.∇fk: current gradient;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
  - stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds.
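A minimal usage sketch for LM on a small nonlinear least-squares problem. It assumes ADNLSModel from ADNLPModels.jl for the residual model and a RegularizedNLSModel(nls, h) constructor from RegularizedProblems.jl; the residual is illustrative.

using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization

F(x) = [x[1] - 1.0, x[2] - 2.0, x[1] * x[2] - 2.0]   # residual F: ℝ² → ℝ³
nls = ADNLSModel(F, zeros(2), 3)                     # 3 residual equations

reg_nls = RegularizedNLSModel(nls, NormL1(1.0e-2))   # assumed constructor
stats = LM(reg_nls; verbose = 10)                    # log every 10 iterations
stats.solution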
RegularizedOptimization.LMTR — Method

LMTR(reg_nls; kwargs...)
LMTR(nls, h, χ, options; kwargs...)

A trust-region Levenberg-Marquardt method for the problem
min ½ ‖F(x)‖² + h(x)

where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous and proper.
At each iteration, a step s is computed as an approximate solution of
min ½ ‖J(x) s + F(x)‖₂² + ψ(s; x) subject to ‖s‖ ≤ Δ

where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), ‖⋅‖ is a user-defined norm and Δ > 0 is a trust-region radius.
For advanced usage, first define a solver "LMTRSolver" to preallocate the memory used in the algorithm, and then call solve!:
solver = LMTRSolver(reg_nls; χ = NormLinf(one(T)), subsolver = R2Solver)
solve!(solver, reg_nls)
stats = RegularizedExecutionStats(reg_nls)
solve!(solver, reg_nls, stats)

Arguments
reg_nls::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.
Keyword arguments
- x::V = nlp.meta.x0: the initial guess;
- atol::T = √eps(T): absolute tolerance;
- sub_atol::T = atol: subsolver absolute tolerance;
- rtol::T = √eps(T): relative tolerance;
- neg_tol::T = zero(T): negative tolerance;
- max_eval::Int = -1: maximum number of evaluations of the objective function (a negative number means unlimited);
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_iter::Int = 10000: maximum number of iterations;
- verbose::Int = 0: if > 0, display iteration details every verbose iteration;
- Δk::T = eps(T): initial value of the trust-region radius;
- η1::T = √√eps(T): successful iteration threshold;
- η2::T = T(0.9): very successful iteration threshold;
- γ::T = T(3): trust-region radius parameter multiplier, Δ := Δ*γ when the iteration is very successful and Δ := Δ/γ when the iteration is unsuccessful;
- α::T = 1/eps(T): TODO;
- β::T = 1/eps(T): TODO;
- χ = NormLinf(1): norm used to define the trust-region;
- subsolver::S = R2Solver: subsolver used to solve the subproblem that appears at each iteration.
The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
- solver.xk: current iterate;
- solver.∇fk: current gradient;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
  - stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds.
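The sketch below mirrors the advanced usage shown above: preallocate an LMTRSolver with an ℓ∞ trust-region norm, build the stats container with RegularizedExecutionStats, and call solve!. It relies on the same assumed problem constructors as the LM example, and on solve! from SolverCore.jl.

using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization
using SolverCore  # for solve!

F(x) = [x[1] - 1.0, 10.0 * (x[2] - x[1]^2)]          # Rosenbrock-type residual
nls = ADNLSModel(F, [-1.2, 1.0], 2)
reg_nls = RegularizedNLSModel(nls, NormL0(1.0e-2))   # assumed constructor

solver = LMTRSolver(reg_nls; χ = NormLinf(1.0))      # ℓ∞ trust-region, as in the docstring
stats  = RegularizedExecutionStats(reg_nls)
solve!(solver, reg_nls, stats)
stats.status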
RegularizedOptimization.R2 — Method

R2(reg_nlp; kwargs…)

A first-order quadratic regularization method for the problem
min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
About each iterate xₖ, a step sₖ is computed as a solution of
min φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)

where φ(s; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs is the Taylor linear approximation of f about xₖ, ψ(s; xₖ) is either h(xₖ + s) or an approximation of h(xₖ + s), ‖⋅‖ is a user-defined norm and σₖ > 0 is the regularization parameter.
For advanced usage, first define a solver "R2Solver" to preallocate the memory used in the algorithm, and then call solve!:
solver = R2Solver(reg_nlp)
solve!(solver, reg_nlp)
stats = RegularizedExecutionStats(reg_nlp)
solver = R2Solver(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments
reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.
Keyword arguments
- x::V = nlp.meta.x0: the initial guess;
- atol::T = √eps(T): absolute tolerance;
- rtol::T = √eps(T): relative tolerance;
- neg_tol::T = eps(T)^(1 / 4): negative tolerance;
- max_eval::Int = -1: maximum number of evaluations of the objective function (a negative number means unlimited);
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_iter::Int = 10000: maximum number of iterations;
- verbose::Int = 0: if > 0, display iteration details every verbose iteration;
- σmin::T = eps(T): minimum value of the regularization parameter;
- η1::T = √√eps(T): successful iteration threshold;
- η2::T = T(0.9): very successful iteration threshold;
- ν::T = eps(T)^(1 / 5): multiplicative inverse of the regularization parameter: ν = 1/σ;
- γ::T = T(3): regularization parameter multiplier, σ := σ/γ when the iteration is very successful and σ := σγ when the iteration is unsuccessful.
The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
- solver.xk: current iterate;
- solver.∇fk: current gradient;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
  - stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds.
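A minimal usage sketch for R2 on an unconstrained problem, also showing how to read the solver-specific entries listed above. It assumes ADNLPModels.jl for the smooth model and a RegularizedNLPModel(model, h) constructor from RegularizedProblems.jl.

using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization

f(x) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2        # Rosenbrock
model = ADNLPModel(f, [-1.2, 1.0])
reg_nlp = RegularizedNLPModel(model, NormL1(0.1))    # assumed constructor

stats = R2(reg_nlp; atol = 1.0e-5, rtol = 1.0e-5, verbose = 100)
stats.solver_specific[:smooth_obj], stats.solver_specific[:nonsmooth_obj]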
RegularizedOptimization.R2DH — Method

R2DH(reg_nlp; kwargs…)

A second-order quadratic regularization method for the problem
min f(x) + h(x)

where f: ℝⁿ → ℝ is C¹, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
About each iterate xₖ, a step sₖ is computed as a solution of
min φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)

where φ(s; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀDₖs is a diagonal quadratic approximation of f about xₖ, ψ(s; xₖ) is either h(xₖ + s) or an approximation of h(xₖ + s), ‖⋅‖ is the ℓ₂ norm and σₖ > 0 is the regularization parameter.
For advanced usage, first define a solver R2DHSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = R2DHSolver(reg_nlp; m_monotone = 6)
solve!(solver, reg_nlp)

or
stats = RegularizedExecutionStats(reg_nlp)
solver = R2DHSolver(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments
reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.
Keyword arguments
- x::V = nlp.meta.x0: the initial guess;
- atol::T = √eps(T): absolute tolerance;
- rtol::T = √eps(T): relative tolerance;
- neg_tol::T = eps(T)^(1 / 4): negative tolerance;
- max_eval::Int = -1: maximum number of evaluations of the objective function (a negative number means unlimited);
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_iter::Int = 10000: maximum number of iterations;
- verbose::Int = 0: if > 0, display iteration details every verbose iteration;
- σmin::T = eps(T): minimum value of the regularization parameter;
- σk::T = eps(T)^(1 / 5): initial value of the regularization parameter;
- η1::T = √√eps(T): successful iteration threshold;
- η2::T = T(0.9): very successful iteration threshold;
- γ::T = T(3): regularization parameter multiplier, σ := σ/γ when the iteration is very successful and σ := σγ when the iteration is unsuccessful;
- θ::T = 1/(1 + eps(T)^(1 / 5)): the model decrease fraction with respect to the decrease of the Cauchy model;
- m_monotone::Int = 6: monotonicity parameter. By default, R2DH is non-monotone but the monotone variant can be used with m_monotone = 1.
The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
- solver.xk: current iterate;
- solver.∇fk: current gradient;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
  - stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds.
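The sketch below contrasts the default non-monotone R2DH with the monotone variant selected through m_monotone, reusing a preallocated stats container as in the advanced usage above. It assumes a diagonal quasi-Newton wrapper such as SpectralGradientModel from NLPModelsModifiers.jl for the model Dₖ, plus the same assumed constructors as the previous examples.

using ADNLPModels, NLPModelsModifiers, ProximalOperators, RegularizedProblems, RegularizedOptimization
using SolverCore  # for solve!

f(x) = sum((x .- 1) .^ 2) + 0.5 * sum(x .^ 4)
model = SpectralGradientModel(ADNLPModel(f, zeros(5)))  # assumed diagonal quasi-Newton wrapper
reg_nlp = RegularizedNLPModel(model, NormL0(0.1))       # assumed constructor

stats = RegularizedExecutionStats(reg_nlp)
solver = R2DHSolver(reg_nlp)                        # default m_monotone = 6: non-monotone
solve!(solver, reg_nlp, stats)

solver_mono = R2DHSolver(reg_nlp; m_monotone = 1)   # monotone variant
solve!(solver_mono, reg_nlp, stats)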
RegularizedOptimization.R2N — Method

R2N(reg_nlp; kwargs…)

A second-order quadratic regularization method for the problem
min f(x) + h(x)

where f: ℝⁿ → ℝ is C¹, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
About each iterate xₖ, a step sₖ is computed as a solution of
min φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)

where φ(s; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀBₖs is a quadratic approximation of f about xₖ, ψ(s; xₖ) is either h(xₖ + s) or an approximation of h(xₖ + s), ‖⋅‖ is the ℓ₂ norm and σₖ > 0 is the regularization parameter.
For advanced usage, first define a solver "R2NSolver" to preallocate the memory used in the algorithm, and then call solve!:
solver = R2NSolver(reg_nlp; m_monotone = 1)
solve!(solver, reg_nlp)
stats = RegularizedExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments
reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.
Keyword arguments
- x::V = nlp.meta.x0: the initial guess;
- atol::T = √eps(T): absolute tolerance;
- rtol::T = √eps(T): relative tolerance;
- neg_tol::T = eps(T)^(1 / 4): negative tolerance;
- max_eval::Int = -1: maximum number of evaluations of the objective function (a negative number means unlimited);
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_iter::Int = 10000: maximum number of iterations;
- verbose::Int = 0: if > 0, display iteration details every verbose iteration;
- σmin::T = eps(T): minimum value of the regularization parameter;
- σk::T = eps(T)^(1 / 5): initial value of the regularization parameter;
- η1::T = √√eps(T): successful iteration threshold;
- η2::T = T(0.9): very successful iteration threshold;
- γ::T = T(3): regularization parameter multiplier, σ := σ/γ when the iteration is very successful and σ := σγ when the iteration is unsuccessful;
- θ::T = 1/(1 + eps(T)^(1 / 5)): the model decrease fraction with respect to the decrease of the Cauchy model;
- opnorm_maxiter::Int = 5: how many iterations of the power method to use to compute the operator norm of Bₖ. If a negative number is provided, then Arpack is used instead;
- m_monotone::Int = 1: monotonicity parameter. By default, R2N is monotone but the non-monotone variant will be used if m_monotone > 1;
- sub_kwargs::NamedTuple = NamedTuple(): a named tuple containing the keyword arguments to be sent to the subsolver. The solver will fail if invalid keyword arguments are provided to the subsolver. For example, if the subsolver is R2Solver, you can pass sub_kwargs = (max_iter = 100, σmin = 1e-6,).
The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
- solver.xk: current iterate;
- solver.∇fk: current gradient;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
  - stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds.
Similarly to the callback, when using a quasi-Newton approximation, two functions, qn_update_y!(nlp, solver, stats) and qn_copy!(nlp, solver, stats), are called at each update of the approximation. The former computes the vector y for which the pair (s, y) is pushed into the approximation; by default, y := ∇fk⁻ - ∇fk. The latter lets the user specify which values should be copied for the next iteration; by default, only the gradient is copied: ∇fk⁻ .= ∇fk. This can be useful when using R2N in a constrained optimization context, where the gradient of the Lagrangian, rather than the gradient of the objective, is pushed at each iteration.
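A sketch of R2N with a quasi-Newton Hessian approximation and subsolver options forwarded through sub_kwargs, as described above. LBFGSModel comes from NLPModelsModifiers.jl; the problem setup uses the same assumed constructors as the earlier examples.

using ADNLPModels, NLPModelsModifiers, ProximalOperators, RegularizedProblems, RegularizedOptimization

f(x) = sum(100 * (x[i + 1] - x[i]^2)^2 + (x[i] - 1)^2 for i = 1:(length(x) - 1))  # extended Rosenbrock
model = LBFGSModel(ADNLPModel(f, zeros(10)))        # L-BFGS approximation used as Bₖ
reg_nlp = RegularizedNLPModel(model, NormL1(1.0))   # assumed constructor

# Forward keyword arguments to the R2 subsolver via sub_kwargs.
stats = R2N(reg_nlp; verbose = 1, sub_kwargs = (max_iter = 200,))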
RegularizedOptimization.RegularizedExecutionStats — Method

RegularizedExecutionStats(reg_nlp :: AbstractRegularizedNLPModel{T, V})

Construct a GenericExecutionStats object from an AbstractRegularizedNLPModel. More specifically, construct a GenericExecutionStats on the NLPModel of reg_nlp and add three solver_specific entries, namely :smooth_obj, :nonsmooth_obj and :xi. This is useful for reducing the number of allocations when calling solve!(..., reg_nlp, stats) and should be used by default. Warning: this should not be used when adding other solver_specific entries that do not have the current scalar type.
RegularizedOptimization.TR — Method

TR(reg_nlp; kwargs…)
TR(nlp, h, χ, options; kwargs...)

A trust-region method for the problem
min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous and proper.
About each iterate xₖ, a step sₖ is computed as an approximate solution of
min φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ

where φ(s; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Bₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and Δₖ > 0 is the trust-region radius.
For advanced usage, first define a solver "TRSolver" to preallocate the memory used in the algorithm, and then call solve!:
solver = TRSolver(reg_nlp; χ = NormLinf(1), subsolver = R2Solver, m_monotone = 1)
solve!(solver, reg_nlp)
stats = RegularizedExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments
reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.
Keyword arguments
- x::V = nlp.meta.x0: the initial guess;
- atol::T = √eps(T): absolute tolerance;
- rtol::T = √eps(T): relative tolerance;
- neg_tol::T = eps(T)^(1 / 4): negative tolerance (see stopping conditions below);
- max_eval::Int = -1: maximum number of evaluations of the objective function (a negative number means unlimited);
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_iter::Int = 10000: maximum number of iterations;
- verbose::Int = 0: if > 0, display iteration details every verbose iteration;
- Δk::T = T(1): initial value of the trust-region radius;
- η1::T = √√eps(T): successful iteration threshold;
- η2::T = T(0.9): very successful iteration threshold;
- γ::T = T(3): trust-region radius parameter multiplier. Must satisfy γ > 1. The trust-region radius is updated as Δ := Δ*γ when the iteration is very successful and Δ := Δ/γ when the iteration is unsuccessful;
- m_monotone::Int = 1: monotonicity parameter. By default, TR is monotone but the non-monotone variant will be used if m_monotone > 1;
- opnorm_maxiter::Int = 5: how many iterations of the power method to use to compute the operator norm of Bₖ. If a negative number is provided, then Arpack is used instead;
- χ::F = NormLinf(1): norm used to define the trust-region;
- subsolver::S = R2Solver: subsolver used to solve the subproblem that appears at each iteration;
- sub_kwargs::NamedTuple = NamedTuple(): a named tuple containing the keyword arguments to be sent to the subsolver. The solver will fail if invalid keyword arguments are provided to the subsolver. For example, if the subsolver is R2Solver, you can pass sub_kwargs = (max_iter = 100, σmin = 1e-6,).
The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
- solver.xk: current iterate;
- solver.∇fk: current gradient;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
  - stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds.
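The sketch below mirrors the advanced usage above for TR: preallocate the solver with an ℓ∞ trust-region norm, build the in-place stats container, and call solve!. The quasi-Newton wrapper and problem constructors are the same assumptions as in the earlier examples.

using ADNLPModels, NLPModelsModifiers, ProximalOperators, RegularizedProblems, RegularizedOptimization
using SolverCore  # for solve!

f(x) = (x[1] - 1)^2 + 4 * (x[2] + 1)^2 + x[1] * x[2]
model = LBFGSModel(ADNLPModel(f, zeros(2)))         # quasi-Newton Bₖ for the model φ
reg_nlp = RegularizedNLPModel(model, NormL1(0.5))   # assumed constructor

solver = TRSolver(reg_nlp; χ = NormLinf(1.0))       # ℓ∞ trust-region, as in the docstring
stats  = RegularizedExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats)
stats.objective, stats.iter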
RegularizedOptimization.TRDH — Method

TRDH(reg_nlp; kwargs…)
TRDH(nlp, h, χ, options; kwargs...)
TRDH(f, ∇f!, h, options, x0)

A trust-region method with diagonal Hessian approximation for the problem
min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
About each iterate xₖ, a step sₖ is computed as an approximate solution of
min φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ

where φ(s; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Dₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm, Dₖ is a diagonal Hessian approximation and Δₖ > 0 is the trust-region radius.
For advanced usage, first define a solver "TRDHSolver" to preallocate the memory used in the algorithm, and then call solve!:
solver = TRDHSolver(reg_nlp; D = nothing, χ = NormLinf(1))
solve!(solver, reg_nlp)
stats = RegularizedExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats)

Arguments
reg_nlp::AbstractRegularizedNLPModel{T, V}: the problem to solve, see RegularizedProblems.jl, NLPModels.jl.
Keyword arguments
- x::V = nlp.meta.x0: the initial guess;
- atol::T = √eps(T): absolute tolerance;
- rtol::T = √eps(T): relative tolerance;
- neg_tol::T = eps(T)^(1 / 4): negative tolerance;
- max_eval::Int = -1: maximum number of evaluations of the objective function (a negative number means unlimited);
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_iter::Int = 10000: maximum number of iterations;
- verbose::Int = 0: if > 0, display iteration details every verbose iteration;
- Δk::T = T(1): initial value of the trust-region radius;
- η1::T = √√eps(T): successful iteration threshold;
- η2::T = T(0.9): very successful iteration threshold;
- γ::T = T(3): trust-region radius parameter multiplier. Must satisfy γ > 1. The trust-region radius is updated as Δ := Δ*γ when the iteration is very successful and Δ := Δ/γ when the iteration is unsuccessful;
- reduce_TR::Bool = true: see the explanation of the stopping criterion below;
- χ::F = NormLinf(1): norm used to define the trust-region;
- D::L = nothing: diagonal quasi-Newton approximation used for the model φ. If nothing is provided and reg_nlp.model is not a diagonal quasi-Newton approximation, a spectral gradient approximation is used.
The algorithm stops either when √(ξₖ/νₖ) < atol + rtol*√(ξ₀/ν₀) or ξₖ < 0 and √(-ξₖ/νₖ) < neg_tol where ξₖ := f(xₖ) + h(xₖ) - φ(sₖ; xₖ) - ψ(sₖ; xₖ), and √(ξₖ/νₖ) is a stationarity measure. Alternatively, if reduce_TR = true, then ξₖ₁ := f(xₖ) + h(xₖ) - φ(sₖ₁; xₖ) - ψ(sₖ₁; xₖ) is used instead of ξₖ, where sₖ₁ is the Cauchy point.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
- solver.xk: current iterate;
- solver.∇fk: current gradient;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.solver_specific[:smooth_obj]: current value of the smooth part of the objective function;
  - stats.solver_specific[:nonsmooth_obj]: current value of the nonsmooth part of the objective function;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything other than :unknown will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds.
RegularizedOptimization.project_y! — Method

project_y!(nlp)

Given an AugLagModel, projects nlp.y into [ymin, ymax] and updates nlp.μc_y accordingly.