Reference
Contents
Index
Percival.AugLagModel
Percival.PercivalSolver
Percival.percival
Percival.reset_subproblem!
Percival.update_cx!
Percival.update_fxcx!
Percival.update_y!
Percival.update_μ!
Percival.AugLagModel — Type

AugLagModel(model, y, μ, x, fx, cx)

Given a model
\[\min \ f(x) \quad s.t. \quad c(x) = 0, \quad l ≤ x ≤ u,\]
this new model represents the subproblem of the augmented Lagrangian method
\[\min \ f(x) - yᵀc(x) + \tfrac{1}{2} μ \|c(x)\|^2 \quad s.t. \quad l ≤ x ≤ u,\]
where y is an estimate of the Lagrange multiplier vector and μ is the penalty parameter.
In addition to keeping meta and counters as any NLPModel does, an AugLagModel also stores:

- model: the internal model defining $f$, $c$ and the bounds;
- y: the estimate of the Lagrange multiplier vector;
- μ: the penalty parameter;
- x: reference to the last point at which the function c(x) was computed;
- fx: reference to f(x);
- cx: reference to c(x);
- μc_y: storage for y - μ * cx;
- store_Jv and store_JtJv: storage used in hprod!.
Use the functions update_cx!, update_y! and update_μ! to update these values.
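As a minimal sketch of how these pieces fit together, assuming an equality-constrained ADNLPModel (objcons and obj come from NLPModels.jl, and the constructor call mirrors the signature above):

using Percival, ADNLPModels, NLPModels

# Sketch: build the augmented Lagrangian subproblem by hand.
nlp = ADNLPModel(x -> sum(x .^ 2), ones(3), x -> [x[1]], zeros(1), zeros(1))
x = copy(nlp.meta.x0)
fx, cx = objcons(nlp, x)                  # f(x) and c(x) at the initial point
al = Percival.AugLagModel(nlp, nlp.meta.y0, 10.0, x, fx, cx)

obj(al, x)                                # augmented Lagrangian value at x
Percival.update_y!(al)                    # first-order multiplier update
Percival.update_μ!(al, 100.0)             # increase the penalty parameter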
Percival.PercivalSolver — Type

percival(nlp)

A factorization-free augmented Lagrangian for nonlinear optimization.
For advanced usage, first define a PercivalSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = PercivalSolver(nlp)
solve!(solver, nlp)

stats = GenericExecutionStats(nlp)
solver = PercivalSolver(nlp)
solve!(solver, nlp, stats)

Arguments
- nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.
Keyword arguments
- x::V = nlp.meta.x0: the initial guess;
- atol::T = T(1e-8): absolute tolerance;
- rtol::T = T(1e-8): relative tolerance;
- ctol::T = atol > 0 ? atol : rtol: absolute tolerance on the feasibility;
- max_eval::Int = 100000: maximum number of evaluations of the objective function;
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_iter::Int = 2000: maximum number of iterations;
- verbose::Int = 0: if > 0, display iteration details every verbose iterations;
- μ::Real = T(10.0): starting value of the penalty parameter;
- η₀::T = T(0.5): starting value for the constraint tolerance of the subproblem;
- ω₀::T = T(1): starting value for the relative tolerance of the subproblem;
- ω_min::T = sqrt(eps(T)): smallest value for the relative tolerance of the subproblem;
- α₁::T = T(9 // 10): $η = max(η / al_nlp.μ^α₁, ϵp)$ if $‖c(xᵏ)‖ ≤ η$;
- β₀::T = T(1): see β₁;
- β₁::T = T(1 // 10): $η = max(β₀ / al_nlp.μ^β₁, ϵp)$ if $‖c(xᵏ)‖ > η$;
- μ_up::T = T(10): multiplicative factor applied to μ if $‖c(xᵏ)‖ > η$;
- subsolver_logger::AbstractLogger = NullLogger(): logger passed to tron;
- subsolver_max_iter = typemax(Int): maximum number of iterations for the subsolver;
- subsolver_max_cgiter = nlp.meta.nvar: maximum number of CG iterations in the subsolver;
- cgls_verbose::Int = 0: verbosity level in Krylov.cgls;
- inity::Bool = false: if true, the algorithm uses Krylov.cgls to compute an initial approximation of the multipliers; otherwise it uses nlp.meta.y0;
other kwargs are passed to the subproblem solver.
The algorithm stops when $‖c(xᵏ)‖ ≤ ctol$ and $‖P∇L(xᵏ,λᵏ)‖ ≤ atol + rtol * ‖P∇L(x⁰,λ⁰)‖$ where $P∇L(x,λ) := Proj_{l,u}(x - ∇f(x) + ∇c(x)ᵀλ) - x$.
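To illustrate this measure: on a problem without bounds the projection is the identity, so $P∇L(x,λ)$ reduces to $-∇f(x) + ∇c(x)ᵀλ$. A sketch of checking it a posteriori with the NLPModels.jl API (grad and jtprod), following the sign convention of the formula above:

using Percival, ADNLPModels, NLPModels, LinearAlgebra

nlp = ADNLPModel(x -> sum(x .^ 2), ones(3), x -> [x[1]], zeros(1), zeros(1))
stats = percival(nlp)

x, λ = stats.solution, stats.multipliers
PgL = -grad(nlp, x) + jtprod(nlp, x, λ)   # P∇L with Proj = identity (no bounds)
norm(PgL)                                  # small at a first-order stationary point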
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
- solver.x: current iterate;
- solver.gx: current gradient;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.dual_feas: norm of the current projected gradient of the Lagrangian;
  - stats.primal_feas: norm of the feasibility residual;
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.multipliers: current estimate of the Lagrange multipliers associated with the equality constraints;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything else will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds.
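For example, the following callback stops the method once the feasibility residual drops below a loose threshold. This is a sketch; it assumes the callback is passed through the callback keyword argument, as in other JSO solvers:

using Percival, ADNLPModels

nlp = ADNLPModel(x -> sum(x .^ 2), ones(3), x -> [x[1]], zeros(1), zeros(1))

# Stop as soon as the feasibility residual is below 1e-4, regardless of stationarity.
function my_callback(nlp, solver, stats)
  if stats.primal_feas < 1e-4
    stats.status = :user
  end
end

stats = percival(nlp, callback = my_callback)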
Examples
using Percival, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
stats = percival(nlp)
# output
"Execution stats: first-order stationary"using Percival, ADNLPModels, SolverCore
nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
stats = GenericExecutionStats(nlp)
solver = PercivalSolver(nlp)
stats = solve!(solver, nlp, stats)
# output
"Execution stats: first-order stationary"Percival.percival — Methodpercival(nlp)A factorization-free augmented Lagrangian for nonlinear optimization.
For advanced usage, first define a PercivalSolver to preallocate the memory used in the algorithm, and then call solve!:
solver = PercivalSolver(nlp)
solve!(solver, nlp)

stats = GenericExecutionStats(nlp)
solver = PercivalSolver(nlp)
solve!(solver, nlp, stats)

Arguments
- nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.
Keyword arguments
- x::V = nlp.meta.x0: the initial guess;
- atol::T = T(1e-8): absolute tolerance;
- rtol::T = T(1e-8): relative tolerance;
- ctol::T = atol > 0 ? atol : rtol: absolute tolerance on the feasibility;
- max_eval::Int = 100000: maximum number of evaluations of the objective function;
- max_time::Float64 = 30.0: maximum time limit in seconds;
- max_iter::Int = 2000: maximum number of iterations;
- verbose::Int = 0: if > 0, display iteration details every verbose iterations;
- μ::Real = T(10.0): starting value of the penalty parameter;
- η₀::T = T(0.5): starting value for the constraint tolerance of the subproblem;
- ω₀::T = T(1): starting value for the relative tolerance of the subproblem;
- ω_min::T = sqrt(eps(T)): smallest value for the relative tolerance of the subproblem;
- α₁::T = T(9 // 10): $η = max(η / al_nlp.μ^α₁, ϵp)$ if $‖c(xᵏ)‖ ≤ η$;
- β₀::T = T(1): see β₁;
- β₁::T = T(1 // 10): $η = max(β₀ / al_nlp.μ^β₁, ϵp)$ if $‖c(xᵏ)‖ > η$;
- μ_up::T = T(10): multiplicative factor applied to μ if $‖c(xᵏ)‖ > η$;
- subsolver_logger::AbstractLogger = NullLogger(): logger passed to tron;
- subsolver_max_iter = typemax(Int): maximum number of iterations for the subsolver;
- subsolver_max_cgiter = nlp.meta.nvar: maximum number of CG iterations in the subsolver;
- cgls_verbose::Int = 0: verbosity level in Krylov.cgls;
- inity::Bool = false: if true, the algorithm uses Krylov.cgls to compute an initial approximation of the multipliers; otherwise it uses nlp.meta.y0;
other kwargs are passed to the subproblem solver.
The algorithm stops when $‖c(xᵏ)‖ ≤ ctol$ and $‖P∇L(xᵏ,λᵏ)‖ ≤ atol + rtol * ‖P∇L(x⁰,λ⁰)‖$ where $P∇L(x,λ) := Proj_{l,u}(x - ∇f(x) + ∇c(x)ᵀλ) - x$.
Output
The value returned is a GenericExecutionStats, see SolverCore.jl.
Callback
The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:
- solver.x: current iterate;
- solver.gx: current gradient;
- stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
  - stats.dual_feas: norm of the current projected gradient of the Lagrangian;
  - stats.primal_feas: norm of the feasibility residual;
  - stats.iter: current iteration counter;
  - stats.objective: current objective function value;
  - stats.multipliers: current estimate of the Lagrange multipliers associated with the equality constraints;
  - stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything else will stop the algorithm, but you should use :user to properly indicate the intention;
  - stats.elapsed_time: elapsed time in seconds.
Examples
using Percival, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
stats = percival(nlp)
# output
"Execution stats: first-order stationary"using Percival, ADNLPModels, SolverCore
nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
stats = GenericExecutionStats(nlp)
solver = PercivalSolver(nlp)
stats = solve!(solver, nlp, stats)
# output
"Execution stats: first-order stationary"Percival.reset_subproblem! — Methodreset_subproblem!(solver::PercivalSolver{T, V}, model::AbstractNLPModel{T, V})Specialize SolverCore.reset! function to percival's context.
Percival.update_cx! — Method

update_cx!(nlp, x)

Given an AugLagModel, if x != nlp.x, updates the internal value nlp.cx by calling cons on nlp.model, and resets nlp.fx to NaN. Also updates nlp.μc_y.
Percival.update_fxcx! — Method

update_fxcx!(nlp, x)

Given an AugLagModel, if x != nlp.x, updates the internal values nlp.fx and nlp.cx by calling objcons on nlp.model. Also updates nlp.μc_y. Returns fx only.
Percival.update_y! — Method

update_y!(nlp)

Given an AugLagModel, sets nlp.y = -nlp.μc_y and updates nlp.μc_y accordingly.
Percival.update_μ! — Method

update_μ!(nlp, μ)

Given an AugLagModel, updates nlp.μ = μ and nlp.μc_y accordingly.
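Taken together, these helpers support the classical augmented Lagrangian outer update. The following schematic assembles it from the η and μ rules listed in the percival docstring above; outer_update!, normcx, η, and ϵp are hypothetical names introduced for illustration, not Percival's source code:

using Percival

# Schematic outer-loop update (a sketch, not the exact implementation):
# on sufficient feasibility, take a multiplier step; otherwise increase μ.
function outer_update!(al::Percival.AugLagModel, normcx, η;
                       α₁ = 0.9, β₀ = 1.0, β₁ = 0.1, μ_up = 10.0, ϵp = eps())
  if normcx ≤ η
    Percival.update_y!(al)               # multiplier step, penalty unchanged
    η = max(η / al.μ^α₁, ϵp)             # tighten the feasibility target
  else
    Percival.update_μ!(al, μ_up * al.μ)  # increase the penalty parameter
    η = max(β₀ / al.μ^β₁, ϵp)            # reset the feasibility target
  end
  return η
end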