Reference

NLPModels.CountersType
Counters

Struct for storing the number of function evaluations.


Counters()

Creates an empty Counters struct.

source
NLPModels.DimensionErrorType
DimensionError <: Exception
DimensionError(name, dim_expected, dim_found)

Error for unexpected dimension. Output: "DimensionError: Input name should have length dim_expected not dim_found"

source
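
For illustration, a hedged sketch of raising this error (the names and dimensions below are hypothetical; the printed message follows the format above):

using NLPModels

err = DimensionError("x", 3, 2)   # hypothetical: x expected to have length 3, found 2
# Printing err yields: "DimensionError: Input x should have length 3 not 2"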
NLPModels.NLPModelMetaType
NLPModelMeta <: AbstractNLPModelMeta

A composite type that represents the main features of the optimization problem

optimize    obj(x)
subject to  lvar ≤    x    ≤ uvar
            lcon ≤ cons(x) ≤ ucon

where x is an nvar-dimensional vector, obj is the real-valued objective function, cons is the vector-valued constraint function, and optimize is either "minimize" or "maximize".

Here, lvar, uvar, lcon and ucon are vectors. Some of their components may be infinite to indicate that the corresponding bound or general constraint is not present.


NLPModelMeta(nvar::Integer; kwargs...)
NLPModelMeta(meta::AbstractNLPModelMeta; kwargs...)

Create an NLPModelMeta with nvar variables. Alternatively, create an NLPModelMeta copy from another AbstractNLPModelMeta. The following keyword arguments are accepted:

  • x0: initial guess
  • lvar: vector of lower bounds
  • uvar: vector of upper bounds
  • nlvb: number of nonlinear variables in both objectives and constraints
  • nlvo: number of nonlinear variables in objectives (includes nlvb)
  • nlvc: number of nonlinear variables in constraints (includes nlvb)
  • ncon: number of general constraints
  • y0: initial Lagrange multipliers
  • lcon: vector of constraint lower bounds
  • ucon: vector of constraint upper bounds
  • nnzo: number of nonzeros in the gradient
  • nnzj: number of elements needed to store the nonzeros in the sparse Jacobian
  • lin_nnzj: number of elements needed to store the nonzeros in the sparse Jacobian of linear constraints
  • nln_nnzj: number of elements needed to store the nonzeros in the sparse Jacobian of nonlinear constraints
  • nnzh: number of elements needed to store the nonzeros in the sparse Hessian
  • lin: indices of linear constraints
  • minimize: true if optimize == minimize
  • islp: true if the problem is a linear program
  • name: problem name

NLPModelMeta also contains the following attributes, which are computed from the variables above:

  • nvar: number of variables
  • ifix: indices of fixed variables
  • ilow: indices of variables with lower bound only
  • iupp: indices of variables with upper bound only
  • irng: indices of variables with lower and upper bound (range)
  • ifree: indices of free variables
  • iinf: indices of visibly infeasible bounds
  • jfix: indices of equality constraints
  • jlow: indices of constraints of the form c(x) ≥ cl
  • jupp: indices of constraints of the form c(x) ≤ cu
  • jrng: indices of constraints of the form cl ≤ c(x) ≤ cu
  • jfree: indices of "free" constraints (there shouldn't be any)
  • jinf: indices of the visibly infeasible constraints
  • nlin: number of linear constraints
  • nnln: number of nonlinear general constraints
  • nln: indices of nonlinear constraints
source
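
As a minimal sketch, the metadata for a hypothetical two-variable problem with one linear equality constraint could be built as follows (all values are illustrative):

using NLPModels

meta = NLPModelMeta(
  2;                    # nvar: two variables
  x0 = [0.5; 0.5],      # initial guess
  lvar = zeros(2),      # lower bounds on x
  uvar = ones(2),       # upper bounds on x
  ncon = 1,             # one general constraint
  lcon = [1.0],         # lcon == ucon, so the constraint is an equality
  ucon = [1.0],
  lin = [1],            # declare constraint 1 as linear
  name = "toy-problem",
)
meta.irng               # derived attribute: [1, 2], both variables are ranged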
NLPModels.NLSCountersType
NLSCounters

Struct for storing the number of function evaluations for nonlinear least-squares models. NLSCounters also stores a Counters instance named counters.


NLSCounters()

Creates an empty NLSCounters struct.

source
NLPModels.NLSMetaType
NLSMeta

Base type for metadata related to a nonlinear least-squares model.


NLSMeta(nequ, nvar; kwargs...)

Create an NLSMeta with nequ equations and nvar variables. The following keyword arguments are accepted:

  • x0: initial guess
  • nnzj: number of elements needed to store the nonzeros of the Jacobian of the residual
  • nnzh: number of elements needed to store the nonzeros of the sum of Hessians of the residuals
  • lin: indices of linear residuals

NLSMeta also contains the following attributes, which are computed from the variables above:

  • nequ: size of the residual
  • nvar: number of variables
  • nln: indices of nonlinear residuals
  • nnln: number of nonlinear general residuals
  • nlin: number of linear residuals
source
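
A hedged sketch, assuming the separate package ADNLPModels.jl, whose models fill in nls_meta automatically (the residual below is illustrative):

using ADNLPModels, NLPModels

nls = ADNLSModel(x -> [x[1] - 1; 10 * (x[2] - x[1]^2)], [-1.2; 1.0], 2)
nls_meta(nls).nequ   # 2: size of the residual
nls_meta(nls).nvar   # 2: number of variables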
NLPModels.bound_constrainedMethod
bound_constrained(nlp)
bound_constrained(meta)

Returns whether the problem has bounds on the variables and no other constraints.

source
NLPModels.coo_prod!Method
coo_prod!(rows, cols, vals, v, Av)

Compute the product of a matrix A given by (rows, cols, vals) and the vector v. The result is stored in Av, which should have length equal to the number of rows of A.

source
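
A minimal sketch: multiply a 2×2 matrix stored in triplet form by a vector (values are illustrative):

using NLPModels

rows = [1, 2, 2]
cols = [1, 1, 2]
vals = [4.0, 1.0, 3.0]   # A = [4 0; 1 3]
v = [1.0, 2.0]
Av = zeros(2)            # length equal to the number of rows of A
coo_prod!(rows, cols, vals, v, Av)   # Av becomes [4.0, 7.0]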
NLPModels.coo_sym_prod!Method
coo_sym_prod!(rows, cols, vals, v, Av)

Compute the product of a symmetric matrix A given by (rows, cols, vals) and the vector v. The result is stored in Av, which should have length equal to the number of rows of A. Only one triangle of A should be passed.

source
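
The symmetric variant takes one triangle only; a sketch with the lower triangle of a 2×2 symmetric matrix:

using NLPModels

rows = [1, 2, 2]
cols = [1, 1, 2]
vals = [4.0, 1.0, 3.0]   # lower triangle of A = [4 1; 1 3]
v = [1.0, 2.0]
Av = zeros(2)
coo_sym_prod!(rows, cols, vals, v, Av)   # Av becomes [6.0, 7.0]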
NLPModels.equality_constrainedMethod
equality_constrained(nlp)
equality_constrained(meta)

Returns whether the problem's constraints are all equalities. Unconstrained problems return false.

source
NLPModels.get_linMethod
get_lin(nls)
get_lin(nls_meta)

Return the value lin from nls_meta or nls.nls_meta.

source
NLPModels.get_nlnMethod
get_nln(nls)
get_nln(nls_meta)

Return the value nln from nls_meta or nls.nls_meta.

source
NLPModels.get_x0Method
get_x0(nls)
get_x0(nls_meta)

Return the value x0 from nls_meta or nls.nls_meta.

source
NLPModels.ghjvprod!Function
ghjvprod!(nlp, x, g, v, gHv)

Compute in place the vector gHv whose i-th component is gᵀ ∇²cᵢ(x) v.

source
NLPModels.ghjvprodMethod
gHv = ghjvprod(nlp, x, g, v)

Return the vector whose i-th component is gᵀ ∇²cᵢ(x) v.

source
NLPModels.grad!Function
g = grad!(nlp, x, g)

Evaluate $∇f(x)$, the gradient of the objective function at x in place.

source
NLPModels.grad!Method
g = grad!(nls, x, g)
g = grad!(nls, x, g, Fx; recompute::Bool=true)

Evaluate ∇f(x), the gradient of the objective function of nls::AbstractNLSModel at x in place. Fx is overwritten with the value of the residual F(x). If recompute is true, then Fx is updated with the residual at x.

source
NLPModels.gradMethod
g = grad(nlp, x)

Evaluate $∇f(x)$, the gradient of the objective function at x.

source
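
A hedged usage sketch, assuming ADNLPModels.jl for a concrete model (the objective below is the classic Rosenbrock function, chosen for illustration):

using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
gx = grad(nlp, nlp.meta.x0)   # allocating version
g = similar(nlp.meta.x0)
grad!(nlp, nlp.meta.x0, g)    # in-place version overwrites g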
NLPModels.has_equalitiesMethod
has_equalities(nlp)

Returns whether the problem has constraints and at least one of them is an equality. Unconstrained problems return false.

source
NLPModels.has_inequalitiesMethod
has_inequalities(nlp)

Returns whether the problem has constraints and at least one of them is an inequality. Unconstrained problems return false.

source
NLPModels.hessMethod
Hx = hess(nlp, x, y; obj_weight=1.0)

Evaluate the Lagrangian Hessian at (x,y) as a sparse matrix, with objective function scaled by obj_weight, i.e.,

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight. A Symmetric object wrapping the lower triangle is returned.

source
NLPModels.hessMethod
Hx = hess(nlp, x; obj_weight=1.0)

Evaluate the objective Hessian at x as a sparse matrix, with objective function scaled by obj_weight, i.e.,

\[σ ∇²f(x),\]

with σ = obj_weight. A Symmetric object wrapping the lower triangle is returned.

source
NLPModels.hess_coord!Function
vals = hess_coord!(nlp, x, y, vals; obj_weight=1.0)

Evaluate the Lagrangian Hessian at (x,y) in sparse coordinate format, with objective function scaled by obj_weight, i.e.,

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight, overwriting vals. Only the lower triangle is returned.

source
NLPModels.hess_coord!Method
vals = hess_coord!(nlp, x, vals; obj_weight=1.0)

Evaluate the objective Hessian at x in sparse coordinate format, with objective function scaled by obj_weight, i.e.,

\[σ ∇²f(x),\]

with σ = obj_weight, overwriting vals. Only the lower triangle is returned.

source
NLPModels.hess_coordMethod
vals = hess_coord(nlp, x, y; obj_weight=1.0)

Evaluate the Lagrangian Hessian at (x,y) in sparse coordinate format, with objective function scaled by obj_weight, i.e.,

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight. Only the lower triangle is returned.

source
NLPModels.hess_coordMethod
vals = hess_coord(nlp, x; obj_weight=1.0)

Evaluate the objective Hessian at x in sparse coordinate format, with objective function scaled by obj_weight, i.e.,

\[σ ∇²f(x),\]

with σ = obj_weight. Only the lower triangle is returned.

source
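
Putting the Hessian interface together, a sketch with the Rosenbrock model from the gradient example (hess_structure is the allocating companion of hess_structure!, documented below):

using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
Hx = hess(nlp, nlp.meta.x0)           # Symmetric wrapper of the lower triangle
vals = hess_coord(nlp, nlp.meta.x0)   # nonzero values of the lower triangle
rows, cols = hess_structure(nlp)      # sparsity pattern; independent of x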
NLPModels.hess_coord_residual!Function
vals = hess_coord_residual!(nls, x, v, vals)

Computes the linear combination of the Hessians of the residuals at x with coefficients v in sparse coordinate format, rewriting vals.

source
NLPModels.hess_coord_residualMethod
vals = hess_coord_residual(nls, x, v)

Computes the linear combination of the Hessians of the residuals at x with coefficients v in sparse coordinate format.

source
NLPModels.hess_op!Method
H = hess_op!(nlp, x, y, Hv; obj_weight=1.0)

Return the Lagrangian Hessian at (x,y) with objective function scaled by obj_weight as a linear operator, storing the result in Hv. The resulting object may be used as if it were a matrix, e.g., w = H * v. The vector Hv is used as preallocated storage for the operation. The linear operator H represents

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight.

source
NLPModels.hess_op!Method
H = hess_op!(nlp, x, Hv; obj_weight=1.0)

Return the objective Hessian at x with objective function scaled by obj_weight as a linear operator, storing the result in Hv. The resulting object may be used as if it were a matrix, e.g., w = H * v. The vector Hv is used as preallocated storage for the operation. The linear operator H represents

\[σ ∇²f(x),\]

with σ = obj_weight.

source
NLPModels.hess_op!Method
H = hess_op!(nlp, rows, cols, vals, Hv)

Return the Hessian given by (rows, cols, vals) as a linear operator, storing the result in Hv. The resulting object may be used as if it were a matrix, e.g., w = H * v. The vector Hv is used as preallocated storage for the operation.

source
NLPModels.hess_opMethod
H = hess_op(nlp, x, y; obj_weight=1.0)

Return the Lagrangian Hessian at (x,y) with objective function scaled by obj_weight as a linear operator. The resulting object may be used as if it were a matrix, e.g., H * v. The linear operator H represents

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight.

source
NLPModels.hess_opMethod
H = hess_op(nlp, x; obj_weight=1.0)

Return the objective Hessian at x with objective function scaled by obj_weight as a linear operator. The resulting object may be used as if it were a matrix, e.g., H * v. The linear operator H represents

\[σ ∇²f(x),\]

with σ = obj_weight.

source
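
A sketch of the operator form: H behaves like a matrix, but each product is computed by a Hessian-vector product rather than by storing the matrix.

using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
H = hess_op(nlp, nlp.meta.x0; obj_weight = 1.0)
w = H * ones(nlp.meta.nvar)   # same result as a Hessian-vector product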
NLPModels.hess_op_residual!Method
Hop = hess_op_residual!(nls, x, i, Hiv)

Computes the Hessian of the i-th residual at x, in linear operator form. The vector Hiv is used as preallocated storage for the operation.

source
NLPModels.hess_residualMethod
H = hess_residual(nls, x, v)

Computes the linear combination of the Hessians of the residuals at x with coefficients v. A Symmetric object wrapping the lower triangle is returned.

source
NLPModels.hess_structure!Function
hess_structure!(nlp, rows, cols)

Return the structure of the Lagrangian Hessian in sparse coordinate format in place.

source
NLPModels.histlineMethod
histline(s, v, maxv)

Return a string of the form

______NAME______: ████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 5

where:

  • ______NAME______ is s, left-padded to length 16.
  • The symbols █ and ⋅ fill 20 characters; the fraction v / maxv of them are █ and the rest are ⋅.
  • The trailing number (here 5) is v.
source
NLPModels.hprod!Function
Hv = hprod!(nlp, x, y, v, Hv; obj_weight=1.0)

Evaluate the product of the Lagrangian Hessian at (x,y) with the vector v in place, with objective function scaled by obj_weight, where the Lagrangian Hessian is

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight.

source
NLPModels.hprod!Method
Hv = hprod!(nlp, rows, cols, vals, v, Hv)

Evaluate the product of the objective or Lagrangian Hessian given by (rows, cols, vals) in triplet format with the vector v in place. Only one triangle of the Hessian should be given.

source
NLPModels.hprod!Method
Hv = hprod!(nlp, x, v, Hv; obj_weight=1.0)

Evaluate the product of the objective Hessian at x with the vector v in place, with objective function scaled by obj_weight, where the objective Hessian is

\[σ ∇²f(x),\]

with σ = obj_weight.

source
NLPModels.hprodMethod
Hv = hprod(nlp, x, y, v; obj_weight=1.0)

Evaluate the product of the Lagrangian Hessian at (x,y) with the vector v, with objective function scaled by obj_weight, where the Lagrangian Hessian is

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight.

source
NLPModels.hprodMethod
Hv = hprod(nlp, x, v; obj_weight=1.0)

Evaluate the product of the objective Hessian at x with the vector v, with objective function scaled by obj_weight, where the objective Hessian is

\[σ ∇²f(x),\]

with σ = obj_weight.

source
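
A sketch of a Hessian-vector product without forming the matrix; the in-place form reuses Hv:

using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
v = ones(nlp.meta.nvar)
Hv = similar(v)
hprod!(nlp, nlp.meta.x0, v, Hv; obj_weight = 2.0)   # Hv = 2 ∇²f(x0) v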
NLPModels.hprod_residual!Function
Hiv = hprod_residual!(nls, x, i, v, Hiv)

Computes the product of the Hessian of the i-th residual at x, times the vector v, and stores it in vector Hiv.

source
NLPModels.hprod_residualMethod
Hiv = hprod_residual(nls, x, i, v)

Computes the product of the Hessian of the i-th residual at x, times the vector v.

source
NLPModels.inequality_constrainedMethod
inequality_constrained(nlp)
inequality_constrained(meta)

Returns whether the problem's constraints are all inequalities. Unconstrained problems return true.

source
NLPModels.jacMethod
Jx = jac(nlp, x)

Evaluate $J(x)$, the constraints Jacobian at x as a sparse matrix.

source
NLPModels.jac_coord!Method
vals = jac_coord!(nlp, x, vals)

Evaluate $J(x)$, the constraints Jacobian at x in sparse coordinate format, rewriting vals.

source
NLPModels.jac_coordMethod
vals = jac_coord(nlp, x)

Evaluate $J(x)$, the constraints Jacobian at x in sparse coordinate format.

source
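
A hedged sketch with a constrained model (constructor from ADNLPModels.jl; the constraint is illustrative):

using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> sum(x .^ 2), ones(2), x -> [x[1] + x[2]], [1.0], [1.0])
Jx = jac(nlp, nlp.meta.x0)           # sparse matrix
vals = jac_coord(nlp, nlp.meta.x0)   # coordinate values only
rows, cols = jac_structure(nlp)      # sparsity pattern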
NLPModels.jac_coord_residual!Function
vals = jac_coord_residual!(nls, x, vals)

Computes the Jacobian of the residual at x in sparse coordinate format, rewriting vals. rows and cols are not rewritten.

source
NLPModels.jac_linMethod
Jx = jac_lin(nlp, x)

Evaluate $J(x)$, the linear constraints Jacobian at x as a sparse matrix.

source
NLPModels.jac_lin_coord!Function
vals = jac_lin_coord!(nlp, x, vals)

Evaluate $J(x)$, the linear constraints Jacobian at x in sparse coordinate format, overwriting vals.

source
NLPModels.jac_lin_coordMethod
vals = jac_lin_coord(nlp, x)

Evaluate $J(x)$, the linear constraints Jacobian at x in sparse coordinate format.

source
NLPModels.jac_lin_op!Method
J = jac_lin_op!(nlp, rows, cols, vals, Jv, Jtv)

Return the linear Jacobian given by (rows, cols, vals) as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v. The values Jv and Jtv are used as preallocated storage for the operations.

source
NLPModels.jac_lin_op!Method
J = jac_lin_op!(nlp, x, Jv, Jtv)

Return the linear Jacobian at x as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v. The values Jv and Jtv are used as preallocated storage for the operations.

source
NLPModels.jac_lin_opMethod
J = jac_lin_op(nlp, x)

Return the linear Jacobian at x as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v.

source
NLPModels.jac_lin_structure!Function
jac_lin_structure!(nlp, rows, cols)

Return the structure of the linear constraints Jacobian in sparse coordinate format in place.

source
NLPModels.jac_nlnMethod
Jx = jac_nln(nlp, x)

Evaluate $J(x)$, the nonlinear constraints Jacobian at x as a sparse matrix.

source
NLPModels.jac_nln_coord!Function
vals = jac_nln_coord!(nlp, x, vals)

Evaluate $J(x)$, the nonlinear constraints Jacobian at x in sparse coordinate format, overwriting vals.

source
NLPModels.jac_nln_coordMethod
vals = jac_nln_coord(nlp, x)

Evaluate $J(x)$, the nonlinear constraints Jacobian at x in sparse coordinate format.

source
NLPModels.jac_nln_op!Method
J = jac_nln_op!(nlp, rows, cols, vals, Jv, Jtv)

Return the nonlinear Jacobian given by (rows, cols, vals) as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v. The values Jv and Jtv are used as preallocated storage for the operations.

source
NLPModels.jac_nln_op!Method
J = jac_nln_op!(nlp, x, Jv, Jtv)

Return the nonlinear Jacobian at x as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v. The values Jv and Jtv are used as preallocated storage for the operations.

source
NLPModels.jac_nln_opMethod
J = jac_nln_op(nlp, x)

Return the nonlinear Jacobian at x as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v.

source
NLPModels.jac_nln_structure!Function
jac_nln_structure!(nlp, rows, cols)

Return the structure of the nonlinear constraints Jacobian in sparse coordinate format in place.

source
NLPModels.jac_nln_structureMethod
(rows,cols) = jac_nln_structure(nlp)

Return the structure of the nonlinear constraints Jacobian in sparse coordinate format.

source
NLPModels.jac_op!Method
J = jac_op!(nlp, rows, cols, vals, Jv, Jtv)

Return the Jacobian given by (rows, cols, vals) as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v. The values Jv and Jtv are used as preallocated storage for the operations.

source
NLPModels.jac_op!Method
J = jac_op!(nlp, x, Jv, Jtv)

Return the Jacobian at x as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v. The values Jv and Jtv are used as preallocated storage for the operations.

source
NLPModels.jac_opMethod
J = jac_op(nlp, x)

Return the Jacobian at x as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v.

source
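
A sketch of the Jacobian operator: J and J' act like matrices without J ever being stored.

using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> sum(x .^ 2), ones(2), x -> [x[1] + x[2]], [1.0], [1.0])
J = jac_op(nlp, nlp.meta.x0)
Jv = J * ones(nlp.meta.nvar)     # forwards to a Jacobian-vector product
Jtw = J' * ones(nlp.meta.ncon)   # forwards to a transposed product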
NLPModels.jac_op_residual!Method
Jx = jac_op_residual!(nls, x, Jv, Jtv)

Computes $J(x)$, the Jacobian of the residual at x, in linear operator form. The vectors Jv and Jtv are used as preallocated storage for the operations.

source
NLPModels.jac_op_residual!Method
Jx = jac_op_residual!(nls, rows, cols, vals, Jv, Jtv)

Computes $J(x)$, the Jacobian of the residual given by (rows, cols, vals), in linear operator form. The vectors Jv and Jtv are used as preallocated storage for the operations.

source
NLPModels.jac_structure!Method
jac_structure!(nlp, rows, cols)

Return the structure of the constraints Jacobian in sparse coordinate format in place.

source
NLPModels.jac_structureMethod
(rows,cols) = jac_structure(nlp)

Return the structure of the constraints Jacobian in sparse coordinate format.

source
NLPModels.jprod!Method
Jv = jprod!(nlp, x, v, Jv)

Evaluate $J(x)v$, the Jacobian-vector product at x in place.

source
NLPModels.jprod!Method
Jv = jprod!(nlp, rows, cols, vals, v, Jv)

Evaluate $J(x)v$, the Jacobian-vector product, where the Jacobian is given by (rows, cols, vals) in triplet format.

source
NLPModels.jprod_lin!Function
Jv = jprod_lin!(nlp, x, v, Jv)

Evaluate $J(x)v$, the linear Jacobian-vector product at x in place.

source
NLPModels.jprod_lin!Method
Jv = jprod_lin!(nlp, rows, cols, vals, v, Jv)

Evaluate $J(x)v$, the linear Jacobian-vector product, where the Jacobian is given by (rows, cols, vals) in triplet format.

source
NLPModels.jprod_nln!Function
Jv = jprod_nln!(nlp, x, v, Jv)

Evaluate $J(x)v$, the nonlinear Jacobian-vector product at x in place.

source
NLPModels.jprod_nln!Method
Jv = jprod_nln!(nlp, rows, cols, vals, v, Jv)

Evaluate $J(x)v$, the nonlinear Jacobian-vector product, where the Jacobian is given by (rows, cols, vals) in triplet format.

source
NLPModels.jprod_residual!Function
Jv = jprod_residual!(nls, x, v, Jv)

Computes the product of the Jacobian of the residual at x and a vector, i.e., $J(x)v$, storing it in Jv.

source
NLPModels.jprod_residual!Method
Jv = jprod_residual!(nls, rows, cols, vals, v, Jv)

Computes the product of the Jacobian of the residual given by (rows, cols, vals) and a vector, i.e., $J(x)v$, storing it in Jv.

source
NLPModels.jprod_residualMethod
Jv = jprod_residual(nls, x, v)

Computes the product of the Jacobian of the residual at x and a vector, i.e., $J(x)v$.

source
NLPModels.jth_hessMethod
Hx = jth_hess(nlp, x, j)

Evaluate the Hessian of the j-th constraint at x as a sparse matrix with the same sparsity pattern as the Lagrangian Hessian. A Symmetric object wrapping the lower triangle is returned.

source
NLPModels.jth_hess_coord!Function
vals = jth_hess_coord!(nlp, x, j, vals)

Evaluate the Hessian of the j-th constraint at x in sparse coordinate format, with vals of length nlp.meta.nnzh, in place. Only the lower triangle is returned.

source
NLPModels.jth_hess_coordMethod
vals = jth_hess_coord(nlp, x, j)

Evaluate the Hessian of the j-th constraint at x in sparse coordinate format. Only the lower triangle is returned.

source
NLPModels.jth_hprod!Function
Hv = jth_hprod!(nlp, x, v, j, Hv)

Evaluate the product of the Hessian of the j-th constraint at x with the vector v in place.

source
NLPModels.jth_hprodMethod
Hv = jth_hprod(nlp, x, v, j)

Evaluate the product of the Hessian of the j-th constraint at x with the vector v.

source
NLPModels.jtprod!Method
Jtv = jtprod!(nlp, x, v, Jtv)

Evaluate $J(x)^Tv$, the transposed-Jacobian-vector product at x in place. If the problem has linear and nonlinear constraints, this function allocates.

source
NLPModels.jtprod!Method
Jtv = jtprod!(nlp, rows, cols, vals, v, Jtv)

Evaluate $J(x)^Tv$, the transposed-Jacobian-vector product, where the Jacobian is given by (rows, cols, vals) in triplet format.

source
NLPModels.jtprodMethod
Jtv = jtprod(nlp, x, v)

Evaluate $J(x)^Tv$, the transposed-Jacobian-vector product at x.

source
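
A sketch combining both products on the constrained model above (jprod is the allocating companion of jprod!):

using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> sum(x .^ 2), ones(2), x -> [x[1] + x[2]], [1.0], [1.0])
Jv = jprod(nlp, nlp.meta.x0, ones(nlp.meta.nvar))    # J(x0) v
Jtw = jtprod(nlp, nlp.meta.x0, ones(nlp.meta.ncon))  # J(x0)' w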
NLPModels.jtprod_lin!Function
Jtv = jtprod_lin!(nlp, x, v, Jtv)

Evaluate $J(x)^Tv$, the linear transposed-Jacobian-vector product at x in place.

source
NLPModels.jtprod_lin!Method
Jtv = jtprod_lin!(nlp, rows, cols, vals, v, Jtv)

Evaluate $J(x)^Tv$, the linear transposed-Jacobian-vector product, where the Jacobian is given by (rows, cols, vals) in triplet format.

source
NLPModels.jtprod_linMethod
Jtv = jtprod_lin(nlp, x, v)

Evaluate $J(x)^Tv$, the linear transposed-Jacobian-vector product at x.

source
NLPModels.jtprod_nln!Function
Jtv = jtprod_nln!(nlp, x, v, Jtv)

Evaluate $J(x)^Tv$, the nonlinear transposed-Jacobian-vector product at x in place.

source
NLPModels.jtprod_nln!Method
Jtv = jtprod_nln!(nlp, rows, cols, vals, v, Jtv)

Evaluate $J(x)^Tv$, the nonlinear transposed-Jacobian-vector product, where the Jacobian is given by (rows, cols, vals) in triplet format.

source
NLPModels.jtprod_nlnMethod
Jtv = jtprod_nln(nlp, x, v)

Evaluate $J(x)^Tv$, the nonlinear transposed-Jacobian-vector product at x.

source
NLPModels.jtprod_residual!Function
Jtv = jtprod_residual!(nls, x, v, Jtv)

Computes the product of the transpose of the Jacobian of the residual at x and a vector, i.e., $J(x)^Tv$, storing it in Jtv.

source
NLPModels.jtprod_residual!Method
Jtv = jtprod_residual!(nls, rows, cols, vals, v, Jtv)

Computes the product of the transpose of the Jacobian of the residual given by (rows, cols, vals) and a vector, i.e., $J(x)^Tv$, storing it in Jtv.

source
NLPModels.jtprod_residualMethod
Jtv = jtprod_residual(nls, x, v)

Computes the product of the transpose of the Jacobian of the residual at x and a vector, i.e., $J(x)^Tv$.

source
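
For least-squares models, the residual Jacobian products follow the same pattern; a sketch with the ADNLSModel from the NLSMeta example:

using ADNLPModels, NLPModels

nls = ADNLSModel(x -> [x[1] - 1; 10 * (x[2] - x[1]^2)], [-1.2; 1.0], 2)
Jv = jprod_residual(nls, nls.meta.x0, ones(nls.meta.nvar))
Jtv = jtprod_residual(nls, nls.meta.x0, ones(nls_meta(nls).nequ))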
NLPModels.lines_of_histMethod
lines_of_hist(S, V)

Return a vector of strings, one histline(s, v, maxv) for each pair (s, v) of S and V, where maxv is the maximum of V.

source
NLPModels.nls_metaMethod
nls_meta(nls)

Returns the nls_meta structure of nls. Use this instead of nls.nls_meta to handle models that have internal models.

For basic models nls_meta(nls) is defined as nls.nls_meta, but composite models might not keep nls_meta themselves, so they might specialize it to something like nls.internal.nls_meta.

source
NLPModels.objFunction
f = obj(nlp, x)

Evaluate $f(x)$, the objective function of nlp at x.

source
NLPModels.objMethod
f = obj(nls, x)
f = obj(nls, x, Fx; recompute::Bool=true)

Evaluate f(x), the objective function of nls::AbstractNLSModel at x. Fx is overwritten with the value of the residual F(x). If recompute is true, then Fx is updated with the residual at x.

source
NLPModels.objcons!Method
f, c = objcons!(nlp, x, c)

Evaluate $f(x)$ and $c(x)$ at x. c is overwritten with the value of $c(x)$.

source
NLPModels.objgrad!Method
f, g = objgrad!(nlp, x, g)

Evaluate $f(x)$ and $∇f(x)$ at x. g is overwritten with the value of $∇f(x)$.

source
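
A sketch of the combined evaluations, which return two quantities in one pass over the model:

using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> sum(x .^ 2), ones(2), x -> [x[1] + x[2]], [1.0], [1.0])
g = similar(nlp.meta.x0)
f, g = objgrad!(nlp, nlp.meta.x0, g)   # objective and gradient together
c = zeros(nlp.meta.ncon)
f, c = objcons!(nlp, nlp.meta.x0, c)   # objective and constraints together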
NLPModels.objgrad!Method
f, g = objgrad!(nls, x, g)
f, g = objgrad!(nls, x, g, Fx; recompute::Bool=true)

Evaluate f(x) and ∇f(x) of nls::AbstractNLSModel at x. Fx is overwritten with the value of the residual F(x). If recompute is true, then Fx is updated with the residual at x.

source
NLPModels.reset_data!Method
reset_data!(nlp)

Reset model data if appropriate. This method should be overloaded if a subtype of AbstractNLPModel contains data that should be reset, such as a quasi-Newton linear operator.

source
NLPModels.show_headerMethod
show_header(io, nlp)

Show a header for the specific nlp type. Should be imported and defined for every model implementing the NLPModels API.

source
NLPModels.sparsitylineMethod
sparsityline(s, v, maxv)

Return a string of the form

______NAME______: ( 80.00% sparsity)   5

where:

  • ______NAME______ is s with padding to the left and length 16.
  • The sparsity value is given by v / maxv.
  • The number 5 is v.
source