Reference
Contents
Index
- DCISolver.TR_solvers
- DCISolver.comp_λ_solvers
- DCISolver.solver_correspondence
- DCISolver.DCIWorkspace
- DCISolver.MetaDCI
- DCISolver.SymCOOSolver
- DCISolver.TR_dogleg_struct
- DCISolver.TR_lsmr_struct
- DCISolver.comp_λ_cgls
- Base.success
- DCISolver.TR_dogleg
- DCISolver.TR_lsmr
- DCISolver._compute_gradient_step!
- DCISolver._compute_newton_step!
- DCISolver._compute_step_length
- DCISolver.compute_descent_direction!
- DCISolver.compute_gBg
- DCISolver.compute_lx!
- DCISolver.compute_ρ
- DCISolver.dci
- DCISolver.factorize!
- DCISolver.feasibility_step
- DCISolver.normal_step!
- DCISolver.num_neg_eig
- DCISolver.regularized_coo_saddle_system!
- DCISolver.solve!
- DCISolver.tangent_step!
DCISolver.TR_solvers — Constant
TR_solvers = Dict(:TR_lsmr => TR_lsmr_struct, :TR_dogleg => TR_dogleg_struct)
Dictionary of the possible structures for the trust-region step.
DCISolver.comp_λ_solvers — Constant
comp_λ_solvers = Dict(:cgls => comp_λ_cgls)
Dictionary of the possible structures for the computation of the Lagrange multipliers.
DCISolver.solver_correspondence — Constant
solver_correspondence = Dict(:ma57 => MA57Struct, :ldlfact => LDLFactorizationStruct)
Dictionary of the possible structures for the factorization.
DCISolver.DCIWorkspace — Type
DCIWorkspace(nlp, meta, x)
Pre-allocate the memory used during the dci call. Returns a DCIWorkspace structure.
DCISolver.MetaDCI — Type
MetaDCI(x, y; kwargs...)
Structure containing all the parameters used in the dci call. x is an initial guess, and y is an initial guess for the Lagrange multiplier. Returns a MetaDCI structure.
Arguments
The keyword arguments may include:
- atol::T=T(1e-5): absolute tolerance.
- rtol::T=T(1e-5): relative tolerance.
- ctol::T=T(1e-5): feasibility tolerance.
- unbounded_threshold::T=T(-1e5): below this threshold the problem is unbounded.
- max_eval::Integer=50000: maximum number of cons + obj evaluations.
- max_time::Float64=120.0: maximum number of seconds.
- max_iter::Integer=500: maximum number of iterations.
- max_iter_normal_step::Integer=typemax(Int): maximum number of iterations in normal step.
- λ_struct::comp_λ_cgls=comp_λ_cgls(length(x0), length(y0), S): structure used to compute the Lagrange multipliers (see comp_λ_cgls).
- linear_solver::Symbol=:ldlfact: Solver for the factorization. Options: :ma57.
- decrease_γ::T=T(0.1): Regularization for the factorization: reduce γ, if possible (γ > √eps(T)), between tangent steps.
- increase_γ::T=T(100.0): Regularization for the factorization: increase γ, if possible (γ < 1/√eps(T)), during the factorization.
- δmin::T=√eps(T): Regularization for the factorization: smallest value of δ used for the regularization.
- feas_step::Symbol=:feasibility_step: Normal step.
- feas_η₁::T=T(1e-3): Feasibility step: decrease the trust-region radius when Ared/Pred < η₁.
- feas_η₂::T=T(0.66): Feasibility step: increase the trust-region radius when Ared/Pred > η₂.
- feas_σ₁::T=T(0.25): Feasibility step: decrease coefficient of the trust-region radius.
- feas_σ₂::T=T(2.0): Feasibility step: increase coefficient of the trust-region radius.
- feas_Δ₀::T=one(T): Feasibility step: initial radius.
- bad_steps_lim::Integer=3: Feasibility step: consecutive bad steps before using a second order step.
- feas_expected_decrease::T=T(0.95): Feasibility step: bad steps are when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
- TR_compute_step::Symbol=:TR_lsmr: Compute the direction in the feasibility step. Options: :TR_dogleg.
- TR_struct::Union{TR_lsmr_struct, TR_dogleg_struct}=TR_lsmr_struct(length(x0), length(y0), S): structure for the computation of the trust-region step (see TR_lsmr_struct and TR_dogleg_struct).
- compρ_p1::T=T(0.75): update ρ as ρ = max(min(ngp, p1) * ρmax, ϵ).
- compρ_p2::T=T(0.90): update ρ as ρ = primalnorm * p2 if not sufficiently feasible.
- ρbar::T=T(2.0): radius of the larger cylinder is ρbar * ρ.
- tan_Δ::T=one(T): Tangent step trust-region parameters: initial trust-region radius.
- tan_η₁::T=T(1e-2): Tangent step trust-region parameters: decrease the trust-region radius when Ared/Pred < η₁.
- tan_η₂::T=T(0.75): Tangent step trust-region parameters: increase the trust-region radius when Ared/Pred > η₂.
- tan_σ₁::T=T(0.25): Tangent step trust-region parameters: decrease coefficient of the trust-region radius.
- tan_σ₂::T=T(2.0): Tangent step trust-region parameters: increase coefficient of the trust-region radius.
- tan_small_d::T=eps(T): Tangent step trust-region parameters: tolerance under which ‖d‖ is considered too small.
- increase_Δtg::T=10: Tangent step trust-region parameters: increase Δtg, if possible (Δtg < 1/√eps(T)), between tangent steps.
For more details, we refer to the fine-tuneDCI.md page of the package documentation; a usage sketch follows.
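The sketch below illustrates how these keyword arguments can be passed directly to dci, which forwards them to MetaDCI. The problem, constraint, and keyword values are illustrative assumptions, not recommendations.

```julia
using ADNLPModels, DCISolver

# Hypothetical equality-constrained problem; the keyword values below are
# illustrative only and simply tighten some of the tolerances listed above.
nlp = ADNLPModel(
  x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2,  # objective
  [-1.2; 1.0],                                  # initial guess
  x -> [x[1]^2 + x[2]^2 - 2],                   # equality constraint c(x) = 0
  [0.0],                                        # lcon
  [0.0],                                        # ucon
)

stats = dci(nlp; atol = 1e-7, rtol = 1e-7, max_iter = 200, linear_solver = :ldlfact)
```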
DCISolver.SymCOOSolver — Type
A SymCOOSolver is an interface that allows simple usage of different solvers. Ideally, given the rows, cols, vals and the dimension ndim of a symmetric matrix, the user should be able to call
    M = LinearSolver(ndim, rows, cols, vals)
    factorize!(M)
    solve!(x, M, b) # also x = M \ b
Only the lower triangle of the matrix should be passed.
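A minimal sketch of this workflow, assuming the :ldlfact backend (LDLFactorizationStruct, see solver_correspondence) follows the LinearSolver(ndim, rows, cols, vals) constructor pattern described above; the matrix and right-hand side are arbitrary.

```julia
using DCISolver
using DCISolver: factorize!, solve!

# Lower triangle of the symmetric matrix [4 1; 1 3] in COO format.
rows = [1, 2, 2]
cols = [1, 1, 2]
vals = [4.0, 1.0, 3.0]

# Assumption: the :ldlfact backend accepts the documented constructor pattern.
M = DCISolver.solver_correspondence[:ldlfact](2, rows, cols, vals)
factorize!(M)
if success(M)
  x = zeros(2)
  b = [1.0, 2.0]
  solve!(x, M, b)   # x ≈ M \ b
end
```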
DCISolver.TR_dogleg_struct — Type
TR_dogleg_struct(m, n, ::DataType; kwargs...)
Keyword arguments correspond to input parameters of lsmr from Krylov.jl used in the computation of the dogleg for the trust-region step. Returns a TR_dogleg_struct structure.
DCISolver.TR_lsmr_struct — Type
TR_lsmr_struct(m, n, ::DataType; kwargs...)
Keyword arguments correspond to input parameters of lsmr from Krylov.jl used in the computation of the trust-region step. Returns a TR_lsmr_struct structure.
DCISolver.comp_λ_cgls — Type
comp_λ_cgls(m, n, ::DataType; kwargs...)
Keyword arguments correspond to input parameters of cgls from Krylov.jl used in the computation of the Lagrange multipliers. Returns a comp_λ_cgls structure.
Base.success — Function
success(M :: SymCOOSolver)
Returns whether factorize!(M) was successful.
DCISolver.TR_dogleg — Method
TR_dogleg(cz, Jz, ctol, Δ, normcz, Jd, meta)
Compute a direction d such that
\[\begin{aligned} \min_{d} \quad & \|c + Jz' d \| \\ \text{s.t.} \quad & \|d\| \leq \Delta, \end{aligned}\]
using a dogleg.
Output
- d: solution
- Jd: product of the solution with J.
- infeasible: true if the problem is infeasible.
- solved: true if the problem has been successfully solved.
DCISolver.TR_lsmr — Method
TR_lsmr(cz, Jz, ctol, Δ, normcz, Jd, meta)
Compute a direction d such that
\[\begin{aligned} \min_{d} \quad & \|c + Jz' d \| \\ \text{s.t.} \quad & \|d\| \leq \Delta, \end{aligned}\]
using lsmr method from Krylov.jl.
Output
- d: solution
- Jd: product of the solution with J.
- infeasible: true if the problem is infeasible.
- solved: true if the problem has been successfully solved.
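As a rough illustration, the trust-region least-squares subproblem above can be posed to Krylov.jl's lsmr through its radius keyword. The sign convention below (right-hand side -cz, for a residual of the form cz + Jz d) is an assumption about how the problem is set up, not the package's exact call.

```julia
using Krylov

# Sketch: min ‖Jz d + cz‖ subject to ‖d‖ ≤ Δ via lsmr's trust-region mode.
function tr_lsmr_sketch(Jz, cz, Δ)
  d, stats = lsmr(Jz, -cz, radius = Δ)
  return d, stats.solved
end
```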
DCISolver._compute_gradient_step! — Method
_compute_gradient_step!(nlp, gBg, g, Δ, dcp)
Solve the following problem
\[\begin{aligned} \min_{α} \quad & q(-α g) \\ \text{s.t.} \quad & \|αg\| \leq \Delta. \end{aligned}\]
Output
The returned arguments are:
- dcp_on_boundary: true if ‖αg‖ = Δ,
- dcp = -αg,
- dcpBdcp = α² gBg,
- α: the solution.
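For reference, the standard closed form of this Cauchy-step length is sketched below; it follows from minimizing q(-αg) = -α‖g‖² + ½α² gBg subject to α‖g‖ ≤ Δ, and is not necessarily identical to DCISolver's internal code.

```julia
using LinearAlgebra

# Standard trust-region Cauchy-step length (a sketch, not DCISolver's exact code).
function cauchy_alpha_sketch(g, gBg, Δ)
  α_bd = Δ / norm(g)                       # step length putting -αg on the boundary
  gBg ≤ 0 && return (α_bd, true)           # non-positive curvature: go to the boundary
  α_unc = dot(g, g) / gBg                  # unconstrained minimizer of q(-αg)
  return (min(α_unc, α_bd), α_unc ≥ α_bd)  # (α, dcp_on_boundary)
end
```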
DCISolver._compute_newton_step! — Method
_compute_newton_step!(nlp, LDL, g, γ, δ, dcp, vals, meta, workspace)
Compute a Newton direction dn via a factorization of an SQD matrix.
Output
- dn: solution
- dnBdn and dcpBdn: updated scalar products.
- γ_too_large: true if the regularization is too large.
- γ: updated value of the regularization γ.
- δ: updated value of the regularization δ.
- vals: updated values of the SQD matrix.
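A schematic of how such a Newton direction can be extracted from the factorized SQD system using the SymCOOSolver interface documented above; the right-hand side layout [-g; 0] and the inertia test are assumptions about the standard approach, not the exact DCISolver logic.

```julia
using DCISolver: factorize!, solve!, num_neg_eig

# LDL is assumed to hold the (n + m) × (n + m) regularized saddle (SQD) matrix.
function newton_direction_sketch(LDL, g, n, m)
  rhs = vcat(-g, zeros(eltype(g), m))      # assumed right-hand side [-∇ℓ; 0]
  factorize!(LDL)
  success(LDL) || return nothing           # caller should increase γ/δ and retry
  num_neg_eig(LDL) == m || return nothing  # wrong inertia: regularization too small
  sol = similar(rhs)
  solve!(sol, LDL, rhs)
  return sol[1:n]                          # primal block = Newton direction dn
end
```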
DCISolver._compute_step_length — Method
_compute_step_length(norm2dn, dotdndcp, norm2dcp, Δ)
Given two directions dcp and dn, compute the largest 0 ≤ τ ≤ 1 such that ‖dn + τ (dcp - dn)‖ = Δ. Returns τ.
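Since ‖dn + τ (dcp - dn)‖² is a quadratic in τ, this reduces to finding a root of a scalar quadratic; a sketch under that reading follows (the handling of degenerate cases is an assumption).

```julia
# Sketch: largest 0 ≤ τ ≤ 1 with ‖dn + τ (dcp - dn)‖ = Δ, from the cached products.
function step_length_sketch(norm2dn, dotdndcp, norm2dcp, Δ)
  a = norm2dcp - 2 * dotdndcp + norm2dn    # ‖dcp - dn‖²
  b = 2 * (dotdndcp - norm2dn)             # 2 dnᵀ(dcp - dn)
  c = norm2dn - Δ^2                        # ‖dn‖² - Δ²
  a == 0 && return one(Δ)                  # dcp == dn: any τ works, take τ = 1
  disc = b^2 - 4 * a * c
  disc < 0 && return one(Δ)                # no boundary crossing (assumption)
  τ = (-b + sqrt(disc)) / (2 * a)          # larger root of a τ² + b τ + c = 0
  return clamp(τ, zero(Δ), one(Δ))
end
```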
DCISolver.compute_descent_direction! — Method
(d, dBd, status, γ, δ, vals) = compute_descent_direction!(nlp::AbstractNLPModel, gBg, g, Δ, LDL, γ, δ, vals, d, meta, workspace)
Compute a direction d solution of
\[\begin{aligned} \min_{d} \quad & q(d) \\ \text{s.t.} \quad & \|d\| \leq \Delta. \end{aligned}\]
Output
The returned arguments are:
- d: the computed direction.
- dBd: updated scalar product.
- status: computation status.
- γ,- δ: updated values for the regularization parameters.
- vals: updated values for the SQD system.
Return status has four possible outcomes:
- :cauchy_step
- :newton
- :dogleg
- :interior_cauchy_step when γ is too large.
DCISolver.compute_gBg — Method
compute_gBg(nlp, rows, cols, vals, ∇ℓzλ)
Compute gBg = ∇ℓxλ' * B * ∇ℓxλ, where B is a symmetric sparse matrix whose lower triangle is given in COO format.
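A generic sketch of this quadratic form when only the lower triangle of B is stored in COO format: diagonal entries contribute once and off-diagonal entries twice. This is an illustration of the idea, not the package's exact loop.

```julia
# gᵀ B g from the lower triangle of B stored as (rows, cols, vals).
function gBg_sketch(rows, cols, vals, g)
  gBg = zero(eltype(vals))
  for (i, j, v) in zip(rows, cols, vals)
    gBg += (i == j ? v : 2 * v) * g[i] * g[j]   # off-diagonal terms count twice
  end
  return gBg
end
```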
DCISolver.compute_lx! — Method
compute_lx!(Jx, ∇fx, λ, meta)
Compute the least-squares solution λ of min ‖Jx' λ - ∇fx‖ using a solver from Krylov.jl, as specified by meta.λ_struct.comp_λ_solver. Returns the solution λ.
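With the default comp_λ_cgls structure this amounts to a least-squares solve with cgls from Krylov.jl; a minimal sketch (with the tolerances stored in meta omitted) follows.

```julia
using Krylov

# Least-squares multipliers: λ minimizing ‖Jxᵀ λ - ∇fx‖ (tolerances omitted).
function multipliers_sketch(Jx, ∇fx)
  λ, _ = cgls(Jx', ∇fx)
  return λ
end
```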
DCISolver.compute_ρ — Method
compute_ρ(dualnorm, primalnorm, norm∇fx, ρmax, ϵ, iter, meta::MetaDCI)
compute_ρ(dualnorm, primalnorm, norm∇fx, ρmax, ϵ, iter, p1, p2)
Update and return the trust-cylinder radius ρ.
The theory requires ngp ρmax 10⁻⁴ < ρ ≤ ngp ρmax. There are no evaluations of the problem functions here.
ρ = O(‖g_p(z)‖), and in the paper ρ = ν n_p(z) ρ_max with n_p(z) = ‖g_p(z)‖ / (‖g(z)‖ + 1).
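A schematic reconstruction of this update from the descriptions of compρ_p1 and compρ_p2 above; the exact branching in DCISolver may differ, so treat the feasibility test below as an assumption.

```julia
# Schematic ρ update (not the exact implementation).
function compute_ρ_sketch(dualnorm, primalnorm, norm∇fx, ρmax, ϵ, p1, p2)
  ngp = dualnorm / (norm∇fx + 1)     # n_p(z) = ‖g_p(z)‖ / (‖g(z)‖ + 1)
  ρ = max(min(ngp, p1) * ρmax, ϵ)    # ρ = max(min(ngp, p1) ρmax, ϵ)
  if primalnorm > ρ                  # assumed "not sufficiently feasible" test
    ρ = primalnorm * p2
  end
  return ρ
end
```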
DCISolver.dci — Methoddci(nlp; kwargs...)
dci(nlp, x; kwargs...)
dci(nlp, meta, x)
Compute a local minimum of an equality-constrained optimization problem using the DCI algorithm from Bielschowsky & Gomes (2008).
Arguments
- nlp::AbstractNLPModel: the model solved; see NLPModels.jl.
- x: initial guess. If x is not specified, then nlp.meta.x0 is used.
- meta: the keyword arguments are used to initialize a MetaDCI.
For advanced usage, the principal call to the solver uses a DCIWorkspace.
dci(nlp, meta, workspace)
Output
The returned value is a GenericExecutionStats, see SolverCore.jl.
References
This method implements the Dynamic Control of Infeasibility for equality-constrained problems described in
Dynamic Control of Infeasibility in Equality Constrained Optimization
Roberto H. Bielschowsky and Francisco A. M. Gomes
SIAM J. Optim., 19(3), 2008, 1299–1325.
https://doi.org/10.1137/070679557
Examples
julia> using DCISolver, ADNLPModels
julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0]);
julia> stats = dci(nlp)
"Execution stats: first-order stationary"DCISolver.factorize! — Functionfactorize!(M :: SymCOOSolver)Calls the factorization of the symmetric solver given by M. Use success(M) to check whether the factorization was successful.
DCISolver.feasibility_step — Method
feasibility_step(nls, x, cx, normcx, Jx, ρ, ctol, meta, workspace; kwargs...)
Approximately solves min ‖c(x)‖ using a trust-region Levenberg-Marquardt method, i.e., given xₖ, finds min ‖cₖ + Jₖd‖.
Arguments
- η₁::AbstractFloat = meta.feas_η₁: decrease the trust-region radius when Ared/Pred < η₁.
- η₂::AbstractFloat = meta.feas_η₂: increase the trust-region radius when Ared/Pred > η₂.
- σ₁::AbstractFloat = meta.feas_σ₁: decrease coefficient of the trust-region radius.
- σ₂::AbstractFloat = meta.feas_σ₂: increase coefficient of the trust-region radius.
- Δ₀::T = meta.feas_Δ₀: initial trust-region radius.
- bad_steps_lim::Integer = meta.bad_steps_lim: consecutive bad steps before using a second order step.
- expected_decrease::T = meta.feas_expected_decrease: bad steps are when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
- max_eval::Int = 1_000: maximum evaluations.
- max_time::AbstractFloat = 60.0: maximum time.
- max_feas_iter::Int = typemax(Int64): maximum number of iterations.
Output
- z, cz, normcz, Jz: the new iterate, and updated evaluations.
- status: Computation status. Possible outcomes are: :success, :max_eval, :max_time, :max_iter, :unknown_tired, :infeasible, :unknown.
DCISolver.normal_step! — Method
normal_step!(nlp, x, cx, Jx, fx, ∇fx, λ, ℓxλ, ∇ℓxλ, dualnorm, primalnorm, ρmax, ϵp, max_eval, max_time, max_iter, meta, workspace)
Normal step: find z such that ‖h(z)‖ ≤ ρ, where ρ is the trust-cylinder radius.
Output
- z, cz, fz, ℓzλ, ∇ℓzλ: the new iterate, and updated evaluations.
- ρ: updated trust-cylinder radius.
- primalnorm, dualnorm: updated primal and dual feasibility norms.
- status: Computation status. The possible outcomes are: :init_success, :success, :max_eval, :max_time, :max_iter, :unknown_tired, :infeasible, :unknown.
DCISolver.num_neg_eig — Function
num_neg_eig(M :: SymCOOSolver)
Returns the number of negative eigenvalues of M.
DCISolver.regularized_coo_saddle_system! — Method
regularized_coo_saddle_system!(nlp, rows, cols, vals, γ = γ, δ = δ)
Compute the structure for the saddle system [H + γI  Jᵀ; J  -δI] in COO format (rows, cols, vals), in the following order: H, J, γ, -δ.
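The assembly order can be illustrated with a small sketch that concatenates the COO entries in the stated order (H, J, γ, -δ). DCISolver pre-allocates and updates this structure in place rather than rebuilding it, so this is an illustration only.

```julia
# Lower triangle of [H + γI  Jᵀ; J  -δI] in COO order (H, J, γ, -δ).
function saddle_coo_sketch(Hrows, Hcols, Hvals, Jrows, Jcols, Jvals, n, m, γ, δ)
  rows = vcat(Hrows, Jrows .+ n, 1:n, (n + 1):(n + m))
  cols = vcat(Hcols, Jcols, 1:n, (n + 1):(n + m))
  vals = vcat(Hvals, Jvals, fill(γ, n), fill(-δ, m))
  return rows, cols, vals
end
```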
DCISolver.solve! — Function
solve!(x, M :: SymCOOSolver, b)
Solve the system $M x = b$. factorize!(M) should be called first.
DCISolver.tangent_step! — Method(z, cz, fz, status, Δ, Δℓ, γ, δ) = tangent_step!(nlp, z, λ, cz, normcz, fz, LDL, vals, g, ℓzλ, gBg, ρ, γ, δ, meta, workspace; kwargs...)Solve the following problem
\[\begin{aligned} \min_{d} \quad & q(d):=\frac{1}{2} d^T B d + d^T g \\ \text{s.t.} \quad & Ad = 0, & \|d\| \leq \Delta, \end{aligned}\]
where B is an approximation of the Hessian of the Lagrangian, A is the Jacobian matrix, and g is the projected gradient.
Output
- z, cz, and fz: the new iterate, and updated evaluations.
- status: computation status.
- Δ: trust-region parameter.
- Δℓ: differential in Lagrangian computation.
- γ, δ: updated values for the regularization parameters.
Return status with possible outcomes:
- :cauchy_step, :newton, or :dogleg if the step is successful.
- :unknown if we didn't enter the loop.
- :small_horizontal_step.
- :tired if we stop due to max_eval or max_time.
- :success if we computed z such that ‖c(z)‖ ≤ meta.ρbar * ρ and Δℓ ≥ η₁ q(d).
See SolverTools.jl for SolverTools.aredpred.