API

As stated on the Home page, we consider the nonlinear optimization problem in the following format:

\[\begin{aligned} \min \quad & f(x) \\ & c_L \leq c(x) \leq c_U \\ & \ell \leq x \leq u. \end{aligned}\]

To develop an optimization algorithm, we are usually interested not only in $f(x)$ and $c(x)$, but also in their derivatives. Namely,

  • $\nabla f(x)$, the gradient of $f$ at the point $x$;
  • $\nabla^2 f(x)$, the Hessian of $f$ at the point $x$;
  • $J(x) = \nabla c(x)^T$, the Jacobian of $c$ at the point $x$;
  • $\nabla^2 f(x) + \sum_{i=1}^m \lambda_i \nabla^2 c_i(x)$, the Hessian of the Lagrangian function at the point $(x,\lambda)$.

There are many ways to access some of these values, so here is a little reference guide.
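
As a minimal sketch of how these quantities are accessed, the snippet below builds a small constrained model. It assumes the separate ADNLPModels.jl package, which constructs NLPModels-compliant models with derivatives from automatic differentiation; the problem itself is illustrative.

```julia
using ADNLPModels, NLPModels

# min (x₁ - 1)² + 100 (x₂ - x₁²)²   s.t.   x₁² + x₂² ≤ 1
nlp = ADNLPModel(
  x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2,  # objective f
  [-1.2; 1.0],                                  # starting point x₀
  x -> [x[1]^2 + x[2]^2],                       # constraints c
  [-Inf],                                       # c_L
  [1.0],                                        # c_U
)

x  = nlp.meta.x0
fx = obj(nlp, x)             # f(x)
gx = grad(nlp, x)            # ∇f(x)
cx = cons(nlp, x)            # c(x)
Jx = jac(nlp, x)             # J(x) as a sparse matrix
y  = ones(nlp.meta.ncon)     # Lagrange multipliers
Hx = hess(nlp, x, y)         # lower triangle of ∇²L(x,y)
```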

Reference guide

The following naming should be easy enough to follow. If not, click on the link and go to the description.

  • ! means inplace;
  • _coord means coordinate format;
  • prod means matrix-vector product;
  • _op means operator (as in LinearOperators.jl).

Feel free to open an issue to suggest other methods that should apply to all NLPModels instances.

Each quantity below is provided by the following NLPModels functions:

  • $f(x)$: obj, objgrad, objgrad!, objcons, objcons!
  • $\nabla f(x)$: grad, grad!, objgrad, objgrad!
  • $\nabla^2 f(x)$: hess, hess_op, hess_op!, hess_coord, hess_coord!, hess_structure, hess_structure!, hprod, hprod!
  • $c(x)$: cons, cons!, objcons, objcons!
  • $J(x)$: jac, jac_op, jac_op!, jac_coord, jac_coord!, jac_structure, jac_structure!, jprod, jprod!, jtprod, jtprod!
  • $\nabla^2 L(x,y)$: hess, hess_op, hess_coord, hess_coord!, hess_structure, hess_structure!, hprod, hprod!, jth_hprod, jth_hprod!, jth_hess, jth_hess_coord, jth_hess_coord!, ghjvprod, ghjvprod!

API for NLSModels

For nonlinear least-squares models, $f(x) = \tfrac{1}{2} \Vert F(x)\Vert^2$, and these models have additional functions to access the residual value and its derivatives. Namely,

  • $J_F(x) = \nabla F(x)^T$
  • $\nabla^2 F_i(x)$
These quantities are provided by the following functions:

  • $F(x)$: residual, residual!
  • $J_F(x)$: jac_residual, jac_coord_residual, jac_coord_residual!, jac_structure_residual, jac_structure_residual!, jprod_residual, jprod_residual!, jtprod_residual, jtprod_residual!, jac_op_residual, jac_op_residual!
  • $\nabla^2 F_i(x)$: hess_residual, hess_coord_residual, hess_coord_residual!, hess_structure_residual, hess_structure_residual!, jth_hess_residual, hprod_residual, hprod_residual!, hess_op_residual, hess_op_residual!
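
A minimal sketch of the residual API, again assuming the separate ADNLPModels.jl package, whose ADNLSModel constructor takes the residual, a starting point, and the number of residual equations:

```julia
using ADNLPModels, NLPModels

# F(x) = [x₁ - 1; 10 (x₂ - x₁²)], so f(x) = ½‖F(x)‖² is the Rosenbrock objective
nls = ADNLSModel(x -> [x[1] - 1; 10 * (x[2] - x[1]^2)], [-1.2; 1.0], 2)

x  = nls.meta.x0
Fx = residual(nls, x)                 # F(x)
Jx = jac_residual(nls, x)             # J_F(x) as a sparse matrix
Jv = jprod_residual(nls, x, ones(2))  # J_F(x) v without forming J_F
```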

AbstractNLPModel functions

NLPModels.objFunction
f = obj(nlp, x)

Evaluate $f(x)$, the objective function of nlp at x.

source
NLPModels.gradFunction
g = grad(nlp, x)

Evaluate $∇f(x)$, the gradient of the objective function at x.

source
NLPModels.grad!Function
g = grad!(nlp, x, g)

Evaluate $∇f(x)$, the gradient of the objective function at x in place.

source
NLPModels.objgrad!Function
f, g = objgrad!(nlp, x, g)

Evaluate $f(x)$ and $∇f(x)$ at x. g is overwritten with the value of $∇f(x)$.

source
NLPModels.objcons!Function
f, c = objcons!(nlp, x, c)

Evaluate $f(x)$ and $c(x)$ at x. c is overwritten with the value of $c(x)$.

source
NLPModels.jac_coordFunction
vals = jac_coord(nlp, x)

Evaluate $J(x)$, the constraints' Jacobian at x in sparse coordinate format.

source
NLPModels.jac_coord!Function
vals = jac_coord!(nlp, x, vals)

Evaluate $J(x)$, the constraints' Jacobian at x in sparse coordinate format, rewriting vals.

source
NLPModels.jac_structureFunction
(rows,cols) = jac_structure(nlp)

Return the structure of the constraints' Jacobian in sparse coordinate format.

source
NLPModels.jac_structure!Function
jac_structure!(nlp, rows, cols)

Return the structure of the constraints' Jacobian in sparse coordinate format in place.

source
NLPModels.jacFunction
Jx = jac(nlp, x)

Evaluate $J(x)$, the constraints' Jacobian at x as a sparse matrix.

source
NLPModels.jac_opFunction
J = jac_op(nlp, x)

Return the Jacobian at x as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v.

source
NLPModels.jac_op!Function
J = jac_op!(nlp, x, Jv, Jtv)

Return the Jacobian at x as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v. The values Jv and Jtv are used as preallocated storage for the operations.

source
J = jac_op!(nlp, rows, cols, vals, Jv, Jtv)

Return the Jacobian given by (rows, cols, vals) as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v. The values Jv and Jtv are used as preallocated storage for the operations.

source
J = jac_op!(nlp, x, rows, cols, Jv, Jtv)

Return the Jacobian at x as a linear operator. The resulting object may be used as if it were a matrix, e.g., J * v or J' * v. (rows, cols) should be the sparsity structure of the Jacobian. The values Jv and Jtv are used as preallocated storage for the operations.

source
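
To illustrate the preallocated-operator pattern, here is a sketch using the first jac_op! method; the model is an assumed example built with the separate ADNLPModels.jl package:

```julia
using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> sum(x .^ 2), ones(3), x -> [sum(x)], [1.0], [1.0])
x = nlp.meta.x0

Jv  = zeros(nlp.meta.ncon)  # preallocated storage for J * v
Jtv = zeros(nlp.meta.nvar)  # preallocated storage for J' * w
J = jac_op!(nlp, x, Jv, Jtv)

w = J * ones(nlp.meta.nvar)   # computed via jprod!; no matrix is ever formed
u = J' * ones(nlp.meta.ncon)  # computed via jtprod!
```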
NLPModels.jprodFunction
Jv = jprod(nlp, x, v)

Evaluate $J(x)v$, the Jacobian-vector product at x.

source
NLPModels.jprod!Function
Jv = jprod!(nlp, x, v, Jv)

Evaluate $J(x)v$, the Jacobian-vector product at x in place.

source
NLPModels.jtprodFunction
Jtv = jtprod(nlp, x, v)

Evaluate $J(x)^Tv$, the transposed-Jacobian-vector product at x.

source
NLPModels.jtprod!Function
Jtv = jtprod!(nlp, x, v, Jtv)

Evaluate $J(x)^Tv$, the transposed-Jacobian-vector product at x in place.

source
NLPModels.jth_hprodFunction
Hv = jth_hprod(nlp, x, v, j)

Evaluate the product of the Hessian of the j-th constraint at x with the vector v.

source
NLPModels.jth_hprod!Function
Hv = jth_hprod!(nlp, x, v, j, Hv)

Evaluate the product of the Hessian of the j-th constraint at x with the vector v in place.

source
NLPModels.jth_hessFunction

Hx = jth_hess(nlp, x, j)

Evaluate the Hessian of the j-th constraint at x as a sparse matrix with the same sparsity pattern as the Lagrangian Hessian. Only the lower triangle is returned.

source
NLPModels.jth_hess_coordFunction
vals = jth_hess_coord(nlp, x, j)

Evaluate the Hessian of the j-th constraint at x in sparse coordinate format. Only the lower triangle is returned.

source
NLPModels.jth_hess_coord!Function
vals = jth_hess_coord!(nlp, x, j, vals)

Evaluate the Hessian of the j-th constraint at x in sparse coordinate format, with vals of length nlp.meta.nnzh, in place. Only the lower triangle is returned.

source
NLPModels.ghjvprodFunction

gHv = ghjvprod(nlp, x, g, v)

Return the vector whose i-th component is gᵀ ∇²cᵢ(x) v.

source
NLPModels.ghjvprod!Function

ghjvprod!(nlp, x, g, v, gHv)

Compute the vector whose i-th component is gᵀ ∇²cᵢ(x) v, storing the result in gHv.

source
NLPModels.hess_coordFunction
vals = hess_coord(nlp, x; obj_weight=1.0)

Evaluate the objective Hessian at x in sparse coordinate format, with objective function scaled by obj_weight, i.e.,

\[σ ∇²f(x),\]

with σ = obj_weight. Only the lower triangle is returned.

source
vals = hess_coord(nlp, x, y; obj_weight=1.0)

Evaluate the Lagrangian Hessian at (x,y) in sparse coordinate format, with objective function scaled by obj_weight, i.e.,

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight. Only the lower triangle is returned.

source
NLPModels.hess_coord!Function
vals = hess_coord!(nlp, x, y, vals; obj_weight=1.0)

Evaluate the Lagrangian Hessian at (x,y) in sparse coordinate format, with objective function scaled by obj_weight, i.e.,

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight, rewriting vals. Only the lower triangle is returned.

source
NLPModels.hess_structureFunction
(rows,cols) = hess_structure(nlp)

Return the structure of the Lagrangian Hessian in sparse coordinate format.

source
NLPModels.hess_structure!Function
hess_structure!(nlp, rows, cols)

Return the structure of the Lagrangian Hessian in sparse coordinate format in place.

source
NLPModels.hessFunction
Hx = hess(nlp, x; obj_weight=1.0)

Evaluate the objective Hessian at x as a sparse matrix, with objective function scaled by obj_weight, i.e.,

\[σ ∇²f(x),\]

with σ = obj_weight. Only the lower triangle is returned.

source
Hx = hess(nlp, x, y; obj_weight=1.0)

Evaluate the Lagrangian Hessian at (x,y) as a sparse matrix, with objective function scaled by obj_weight, i.e.,

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight. Only the lower triangle is returned.

source
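
A sketch of the obj_weight scaling and of recovering the full symmetric matrix from the lower triangle; the model is an assumed example built with the separate ADNLPModels.jl package:

```julia
using ADNLPModels, NLPModels, LinearAlgebra

nlp = ADNLPModel(x -> sum(x .^ 2), ones(3), x -> [sum(x)], [1.0], [1.0])
x = nlp.meta.x0
y = ones(nlp.meta.ncon)

# Lower triangle of 2 ∇²f(x) + Σᵢ yᵢ ∇²cᵢ(x), i.e., σ = obj_weight = 2
H = hess(nlp, x, y; obj_weight = 2.0)

# Wrap the lower triangle to use H as a full symmetric matrix
Hfull = Symmetric(H, :L)
```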
NLPModels.hess_opFunction
H = hess_op(nlp, x; obj_weight=1.0)

Return the objective Hessian at x with objective function scaled by obj_weight as a linear operator. The resulting object may be used as if it were a matrix, e.g., H * v. The linear operator H represents

\[σ ∇²f(x),\]

with σ = obj_weight.

source
H = hess_op(nlp, x, y; obj_weight=1.0)

Return the Lagrangian Hessian at (x,y) with objective function scaled by obj_weight as a linear operator. The resulting object may be used as if it were a matrix, e.g., H * v. The linear operator H represents

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight.

source
NLPModels.hess_op!Function
H = hess_op!(nlp, x, Hv; obj_weight=1.0)

Return the objective Hessian at x with objective function scaled by obj_weight as a linear operator, storing the result in Hv. The resulting object may be used as if it were a matrix, e.g., w = H * v. The vector Hv is used as preallocated storage for the operation. The linear operator H represents

\[σ ∇²f(x),\]

with σ = obj_weight.

source
H = hess_op!(nlp, rows, cols, vals, Hv)

Return the Hessian given by (rows, cols, vals) as a linear operator, storing the result in Hv. The resulting object may be used as if it were a matrix, e.g., w = H * v. The vector Hv is used as preallocated storage for the operation.

source
H = hess_op!(nlp, x, rows, cols, Hv; obj_weight=1.0)

Return the objective Hessian at x with objective function scaled by obj_weight as a linear operator, storing the result in Hv. The resulting object may be used as if it were a matrix, e.g., w = H * v. (rows, cols) should be the sparsity structure of the Hessian. The vector Hv is used as preallocated storage for the operation. The linear operator H represents

\[σ ∇²f(x),\]

with σ = obj_weight.

source
H = hess_op!(nlp, x, y, Hv; obj_weight=1.0)

Return the Lagrangian Hessian at (x,y) with objective function scaled by obj_weight as a linear operator, storing the result in Hv. The resulting object may be used as if it were a matrix, e.g., w = H * v. The vector Hv is used as preallocated storage for the operation. The linear operator H represents

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight.

source
H = hess_op!(nlp, x, y, rows, cols, Hv; obj_weight=1.0)

Return the Lagrangian Hessian at (x,y) with objective function scaled by obj_weight as a linear operator, storing the result in Hv. The resulting object may be used as if it were a matrix, e.g., w = H * v. (rows, cols) should be the sparsity structure of the Hessian. The vector Hv is used as preallocated storage for the operation. The linear operator H represents

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight.

source
NLPModels.hprodFunction
Hv = hprod(nlp, x, v; obj_weight=1.0)

Evaluate the product of the objective Hessian at x with the vector v, with objective function scaled by obj_weight, where the objective Hessian is

\[σ ∇²f(x),\]

with σ = obj_weight.

source
Hv = hprod(nlp, x, y, v; obj_weight=1.0)

Evaluate the product of the Lagrangian Hessian at (x,y) with the vector v, with objective function scaled by obj_weight, where the Lagrangian Hessian is

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight.

source
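
Hessian-vector products are the matrix-free counterpart of hess: the full matrix is never formed, which is the typical usage inside a truncated-CG loop. A sketch, with the model again an assumed example built with the separate ADNLPModels.jl package:

```julia
using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> sum(x .^ 2), ones(3), x -> [sum(x)], [1.0], [1.0])
x = nlp.meta.x0
y = ones(nlp.meta.ncon)

v  = ones(nlp.meta.nvar)
Hv = similar(v)
hprod!(nlp, x, y, v, Hv; obj_weight = 1.0)  # Hv ← ∇²L(x,y) v, reusing Hv
```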
NLPModels.hprod!Function
Hv = hprod!(nlp, x, y, v, Hv; obj_weight=1.0)

Evaluate the product of the Lagrangian Hessian at (x,y) with the vector v in place, with objective function scaled by obj_weight, where the Lagrangian Hessian is

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = obj_weight.

source
NLPModels.reset_data!Function
reset_data!(nlp)

Reset model data if appropriate. This method should be overloaded if a subtype of AbstractNLPModel contains data that should be reset, such as a quasi-Newton linear operator.

source

AbstractNLSModel

NLPModels.jac_coord_residual!Function
vals = jac_coord_residual!(nls, x, vals)

Computes the Jacobian of the residual at x in sparse coordinate format, rewriting vals. rows and cols are not rewritten.

source
NLPModels.jprod_residualFunction
Jv = jprod_residual(nls, x, v)

Computes the product of the Jacobian of the residual at x and a vector, i.e., $J(x)v$.

source
NLPModels.jprod_residual!Function
Jv = jprod_residual!(nls, x, v, Jv)

Computes the product of the Jacobian of the residual at x and a vector, i.e., $J(x)v$, storing it in Jv.

source
NLPModels.jtprod_residualFunction
Jtv = jtprod_residual(nls, x, v)

Computes the product of the transpose of the Jacobian of the residual at x and a vector, i.e., $J(x)^Tv$.

source
NLPModels.jtprod_residual!Function
Jtv = jtprod_residual!(nls, x, v, Jtv)

Computes the product of the transpose of the Jacobian of the residual at x and a vector, i.e., $J(x)^Tv$, storing it in Jtv.

source
NLPModels.jac_op_residual!Function
Jx = jac_op_residual!(nls, x, Jv, Jtv)

Computes $J(x)$, the Jacobian of the residual at x, in linear operator form. The vectors Jv and Jtv are used as preallocated storage for the operations.

source
Jx = jac_op_residual!(nls, rows, cols, vals, Jv, Jtv)

Computes $J(x)$, the Jacobian of the residual given by (rows, cols, vals), in linear operator form. The vectors Jv and Jtv are used as preallocated storage for the operations.

source
Jx = jac_op_residual!(nls, x, rows, cols, Jv, Jtv)

Computes $J(x)$, the Jacobian of the residual at x, in linear operator form. The vectors Jv and Jtv are used as preallocated storage for the operations. The structure of the Jacobian should be given by (rows, cols).

source
NLPModels.hess_residualFunction
H = hess_residual(nls, x, v)

Computes the linear combination of the Hessians of the residuals at x with coefficients v.

source
NLPModels.hess_coord_residualFunction
vals = hess_coord_residual(nls, x, v)

Computes the linear combination of the Hessians of the residuals at x with coefficients v in sparse coordinate format.

source
NLPModels.hess_coord_residual!Function
vals = hess_coord_residual!(nls, x, v, vals)

Computes the linear combination of the Hessians of the residuals at x with coefficients v in sparse coordinate format, rewriting vals.

source
NLPModels.hprod_residualFunction
Hiv = hprod_residual(nls, x, i, v)

Computes the product of the Hessian of the i-th residual at x, times the vector v.

source
NLPModels.hprod_residual!Function
Hiv = hprod_residual!(nls, x, i, v, Hiv)

Computes the product of the Hessian of the i-th residual at x, times the vector v, and stores it in vector Hiv.

source
NLPModels.hess_op_residual!Function
Hop = hess_op_residual!(nls, x, i, Hiv)

Computes the Hessian of the i-th residual at x, in linear operator form. The vector Hiv is used as preallocated storage for the operation.

source

Internal

NLPModels.coo_prod!Function
coo_prod!(rows, cols, vals, v, Av)

Compute the product of a matrix A given by (rows, cols, vals) and the vector v. The result is stored in Av, which should have length equal to the number of rows of A.

source
NLPModels.coo_sym_prod!Function
coo_sym_prod!(rows, cols, vals, v, Av)

Compute the product of a symmetric matrix A given by (rows, cols, vals) and the vector v. The result is stored in Av, which should have length equal to the number of rows of A. Only one triangle of A should be passed.

source
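
As a sketch of what coo_prod! computes (the actual implementation lives in NLPModels.jl; the function name here is hypothetical), a COO matrix-vector product just accumulates one term per stored entry:

```julia
# Multiply a matrix stored in coordinate (COO) format by a vector, in place.
function my_coo_prod!(rows, cols, vals, v, Av)
  fill!(Av, zero(eltype(Av)))
  for k in eachindex(rows)
    Av[rows[k]] += vals[k] * v[cols[k]]  # accumulate A[i,j] * v[j]
  end
  Av
end

# A = [1 2; 0 3] in COO format
rows, cols, vals = [1, 1, 2], [1, 2, 2], [1.0, 2.0, 3.0]
Av = zeros(2)
my_coo_prod!(rows, cols, vals, [1.0, 1.0], Av)  # Av == [3.0, 3.0]
```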