API
As stated on the Home page, we consider the nonlinear optimization problem in the following format:
\[\begin{aligned} \min \quad & f(x) \\ & c_L \leq c(x) \leq c_U \\ & \ell \leq x \leq u. \end{aligned}\]
To develop an optimization algorithm, we usually need not only $f(x)$ and $c(x)$, but also their derivatives. Namely,
- $\nabla f(x)$, the gradient of $f$ at the point $x$;
- $\nabla^2 f(x)$, the Hessian of $f$ at the point $x$;
- $J(x) = \nabla c(x)^T$, the Jacobian of $c$ at the point $x$;
- $\nabla^2 f(x) + \sum_{i=1}^m \lambda_i \nabla^2 c_i(x)$, the Hessian of the Lagrangian function at the point $(x,\lambda)$.
There are many ways to access some of these values, so here is a little reference guide.
Reference guide
The following naming should be easy enough to follow. If not, click on the link and go to the description.
- `!` means in place;
- `_coord` means coordinate format;
- `prod` means matrix-vector product;
- `_op` means operator (as in LinearOperators.jl).
Feel free to open an issue to suggest other methods that should apply to all NLPModels instances.
| Function | NLPModels function |
|---|---|
| $f(x)$ | `obj`, `objgrad`, `objgrad!`, `objcons`, `objcons!` |
| $\nabla f(x)$ | `grad`, `grad!`, `objgrad`, `objgrad!` |
| $\nabla^2 f(x)$ | `hess`, `hess_op`, `hess_op!`, `hess_coord`, `hess_coord!`, `hess_structure`, `hess_structure!`, `hprod`, `hprod!` |
| $c(x)$ | `cons`, `cons!`, `objcons`, `objcons!` |
| $J(x)$ | `jac`, `jac_op`, `jac_op!`, `jac_coord`, `jac_coord!`, `jac_structure`, `jac_structure!`, `jprod`, `jprod!`, `jtprod`, `jtprod!` |
| $\nabla^2 L(x,y)$ | `hess`, `hess_op`, `hess_coord`, `hess_coord!`, `hess_structure`, `hess_structure!`, `hprod`, `hprod!`, `jth_hprod`, `jth_hprod!`, `jth_hess`, `jth_hess_coord`, `jth_hess_coord!`, `ghjvprod`, `ghjvprod!` |
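The functions in the table can be exercised end to end. The sketch below is an illustration, not part of NLPModels itself: it assumes ADNLPModels.jl (a companion package) is available to build a model by automatic differentiation, and the problem data are arbitrary choices for the example.

```julia
using ADNLPModels, NLPModels

# Arbitrary example: Rosenbrock objective with one equality constraint.
nlp = ADNLPModel(
  x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2,  # f(x)
  [-1.2; 1.0],                                   # starting point x0
  x -> [x[1] + x[2]],                            # c(x)
  [1.0], [1.0],                                  # c_L and c_U (an equality)
)

x = nlp.meta.x0
fx = obj(nlp, x)   # f(x)
gx = grad(nlp, x)  # ∇f(x)
cx = cons(nlp, x)  # c(x)
```

The remaining functions in the table (`hess`, `jprod`, `hprod`, ...) accept the same model object.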
API for NLSModels
For the Nonlinear Least Squares models, $f(x) = \tfrac{1}{2} \Vert F(x)\Vert^2$, and these models have additional functions to access the residual and its derivatives. Namely,
- $J_F(x) = \nabla F(x)^T$, the Jacobian of the residual at the point $x$;
- $\nabla^2 F_i(x)$, the Hessian of the $i$-th component of the residual at the point $x$.
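As a sketch (again assuming ADNLPModels.jl for the model construction; the residual below is an arbitrary example), the residual API is used as follows:

```julia
using ADNLPModels, NLPModels

F(x) = [x[1] - 1; 10 * (x[2] - x[1]^2)]  # example residual F : R² → R²
nls = ADNLSModel(F, [-1.2; 1.0], 2)      # 2 is the number of equations

x = nls.meta.x0
Fx = residual(nls, x)       # F(x)
JFx = jac_residual(nls, x)  # J_F(x)
```

Calling `obj(nls, x)` then returns $\tfrac{1}{2}\Vert F(x)\Vert^2$, consistent with the definition above.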
AbstractNLPModel functions
NLPModels.obj — Function

`f = obj(nlp, x)`

Evaluate $f(x)$, the objective function of `nlp` at `x`.
NLPModels.grad — Function

`g = grad(nlp, x)`

Evaluate $∇f(x)$, the gradient of the objective function at `x`.

NLPModels.grad! — Function

`g = grad!(nlp, x, g)`

Evaluate $∇f(x)$, the gradient of the objective function at `x` in place.

NLPModels.objgrad — Function

`f, g = objgrad(nlp, x)`

Evaluate $f(x)$ and $∇f(x)$ at `x`.

NLPModels.objgrad! — Function

`f, g = objgrad!(nlp, x, g)`

Evaluate $f(x)$ and $∇f(x)$ at `x`. `g` is overwritten with the value of $∇f(x)$.
NLPModels.cons — Function

`c = cons(nlp, x)`

Evaluate $c(x)$, the constraints at `x`.

NLPModels.cons! — Function

`c = cons!(nlp, x, c)`

Evaluate $c(x)$, the constraints at `x` in place.

NLPModels.objcons — Function

`f, c = objcons(nlp, x)`

Evaluate $f(x)$ and $c(x)$ at `x`.

NLPModels.objcons! — Function

`f, c = objcons!(nlp, x, c)`

Evaluate $f(x)$ and $c(x)$ at `x`. `c` is overwritten with the value of $c(x)$.
NLPModels.jac_coord — Function

`vals = jac_coord(nlp, x)`

Evaluate $J(x)$, the constraints' Jacobian at `x`, in sparse coordinate format.

NLPModels.jac_coord! — Function

`vals = jac_coord!(nlp, x, vals)`

Evaluate $J(x)$, the constraints' Jacobian at `x`, in sparse coordinate format, rewriting `vals`.

NLPModels.jac_structure — Function

`(rows, cols) = jac_structure(nlp)`

Return the structure of the constraints' Jacobian in sparse coordinate format.

NLPModels.jac_structure! — Function

`jac_structure!(nlp, rows, cols)`

Return the structure of the constraints' Jacobian in sparse coordinate format in place.

NLPModels.jac — Function

`Jx = jac(nlp, x)`

Evaluate $J(x)$, the constraints' Jacobian at `x`, as a sparse matrix.

NLPModels.jac_op — Function

`J = jac_op(nlp, x)`

Return the Jacobian at `x` as a linear operator. The resulting object may be used as if it were a matrix, e.g., `J * v` or `J' * v`.

NLPModels.jac_op! — Function

`J = jac_op!(nlp, x, Jv, Jtv)`

Return the Jacobian at `x` as a linear operator. The resulting object may be used as if it were a matrix, e.g., `J * v` or `J' * v`. The vectors `Jv` and `Jtv` are used as preallocated storage for the operations.

`J = jac_op!(nlp, rows, cols, vals, Jv, Jtv)`

Return the Jacobian given by `(rows, cols, vals)` as a linear operator. The resulting object may be used as if it were a matrix, e.g., `J * v` or `J' * v`. The vectors `Jv` and `Jtv` are used as preallocated storage for the operations.

`J = jac_op!(nlp, x, rows, cols, Jv, Jtv)`

Return the Jacobian at `x` as a linear operator. The resulting object may be used as if it were a matrix, e.g., `J * v` or `J' * v`. `(rows, cols)` should be the sparsity structure of the Jacobian. The vectors `Jv` and `Jtv` are used as preallocated storage for the operations.
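The operator variants avoid forming the Jacobian explicitly, which is useful in Krylov methods. A hedged sketch, again assuming ADNLPModels.jl and an arbitrary model:

```julia
using ADNLPModels, NLPModels

# Arbitrary constrained model for illustration.
nlp = ADNLPModel(x -> x[1]^2 + x[2]^2, [1.0; 2.0],
                 x -> [x[1] * x[2]], [0.0], [0.0])
x = nlp.meta.x0

J = jac_op(nlp, x)             # no matrix is stored
Jv  = J * ones(nlp.meta.nvar)  # same result as jprod(nlp, x, v)
Jtw = J' * ones(nlp.meta.ncon) # same result as jtprod(nlp, x, w)
```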
NLPModels.jprod — Function

`Jv = jprod(nlp, x, v)`

Evaluate $J(x)v$, the Jacobian-vector product at `x`.

NLPModels.jprod! — Function

`Jv = jprod!(nlp, x, v, Jv)`

Evaluate $J(x)v$, the Jacobian-vector product at `x` in place.

NLPModels.jtprod — Function

`Jtv = jtprod(nlp, x, v)`

Evaluate $J(x)^Tv$, the transposed-Jacobian-vector product at `x`.

NLPModels.jtprod! — Function

`Jtv = jtprod!(nlp, x, v, Jtv)`

Evaluate $J(x)^Tv$, the transposed-Jacobian-vector product at `x` in place.
NLPModels.jth_hprod — Function

`Hv = jth_hprod(nlp, x, v, j)`

Evaluate the product of the Hessian of the j-th constraint at `x` with the vector `v`.

NLPModels.jth_hprod! — Function

`Hv = jth_hprod!(nlp, x, v, j, Hv)`

Evaluate the product of the Hessian of the j-th constraint at `x` with the vector `v` in place.

NLPModels.jth_hess — Function

`Hx = jth_hess(nlp, x, j)`

Evaluate the Hessian of the j-th constraint at `x` as a sparse matrix with the same sparsity pattern as the Lagrangian Hessian. Only the lower triangle is returned.

NLPModels.jth_hess_coord — Function

`vals = jth_hess_coord(nlp, x, j)`

Evaluate the Hessian of the j-th constraint at `x` in sparse coordinate format. Only the lower triangle is returned.

NLPModels.jth_hess_coord! — Function

`vals = jth_hess_coord!(nlp, x, j, vals)`

Evaluate the Hessian of the j-th constraint at `x` in sparse coordinate format, with `vals` of length `nlp.meta.nnzh`, in place. Only the lower triangle is returned.

NLPModels.ghjvprod — Function

`gHv = ghjvprod(nlp, x, g, v)`

Return the vector whose i-th component is gᵀ ∇²cᵢ(x) v.

NLPModels.ghjvprod! — Function

`ghjvprod!(nlp, x, g, v, gHv)`

Return the vector whose i-th component is gᵀ ∇²cᵢ(x) v, in place.
NLPModels.hess_coord — Function

`vals = hess_coord(nlp, x; obj_weight=1.0)`

Evaluate the objective Hessian at `x` in sparse coordinate format, with the objective function scaled by `obj_weight`, i.e.,

\[σ ∇²f(x),\]

with σ = `obj_weight`. Only the lower triangle is returned.

`vals = hess_coord(nlp, x, y; obj_weight=1.0)`

Evaluate the Lagrangian Hessian at `(x, y)` in sparse coordinate format, with the objective function scaled by `obj_weight`, i.e.,

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = `obj_weight`. Only the lower triangle is returned.

NLPModels.hess_coord! — Function

`vals = hess_coord!(nlp, x, y, vals; obj_weight=1.0)`

Evaluate the Lagrangian Hessian at `(x, y)` in sparse coordinate format, with the objective function scaled by `obj_weight`, i.e.,

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = `obj_weight`, rewriting `vals`. Only the lower triangle is returned.

NLPModels.hess_structure — Function

`(rows, cols) = hess_structure(nlp)`

Return the structure of the Lagrangian Hessian in sparse coordinate format.

NLPModels.hess_structure! — Function

`hess_structure!(nlp, rows, cols)`

Return the structure of the Lagrangian Hessian in sparse coordinate format in place.

NLPModels.hess — Function

`Hx = hess(nlp, x; obj_weight=1.0)`

Evaluate the objective Hessian at `x` as a sparse matrix, with the objective function scaled by `obj_weight`, i.e.,

\[σ ∇²f(x),\]

with σ = `obj_weight`. Only the lower triangle is returned.

`Hx = hess(nlp, x, y; obj_weight=1.0)`

Evaluate the Lagrangian Hessian at `(x, y)` as a sparse matrix, with the objective function scaled by `obj_weight`, i.e.,

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = `obj_weight`. Only the lower triangle is returned.
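To make the lower-triangle and `obj_weight` conventions concrete, here is a sketch that assembles the full Lagrangian Hessian from its coordinate form; the model and multiplier are arbitrary illustrations, and ADNLPModels.jl is assumed:

```julia
using ADNLPModels, NLPModels, SparseArrays

nlp = ADNLPModel(x -> x[1]^2 * x[2], [1.0; 1.0],
                 x -> [x[1]^2 + x[2]^2], [1.0], [1.0])
x, y = nlp.meta.x0, [0.5]

rows, cols = hess_structure(nlp)
vals = hess_coord(nlp, x, y; obj_weight = 1.0)  # lower triangle of ∇²L(x, y)
L = sparse(rows, cols, vals, nlp.meta.nvar, nlp.meta.nvar)
H = L + tril(L, -1)'  # recover the full symmetric matrix
```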
NLPModels.hess_op — Function

`H = hess_op(nlp, x; obj_weight=1.0)`

Return the objective Hessian at `x`, with the objective function scaled by `obj_weight`, as a linear operator. The resulting object may be used as if it were a matrix, e.g., `H * v`. The linear operator H represents

\[σ ∇²f(x),\]

with σ = `obj_weight`.

`H = hess_op(nlp, x, y; obj_weight=1.0)`

Return the Lagrangian Hessian at `(x, y)`, with the objective function scaled by `obj_weight`, as a linear operator. The resulting object may be used as if it were a matrix, e.g., `H * v`. The linear operator H represents

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = `obj_weight`.
NLPModels.hess_op! — Function

`H = hess_op!(nlp, x, Hv; obj_weight=1.0)`

Return the objective Hessian at `x`, with the objective function scaled by `obj_weight`, as a linear operator. The resulting object may be used as if it were a matrix, e.g., `w = H * v`. The vector `Hv` is used as preallocated storage for the operation. The linear operator H represents

\[σ ∇²f(x),\]

with σ = `obj_weight`.

`H = hess_op!(nlp, rows, cols, vals, Hv)`

Return the Hessian given by `(rows, cols, vals)` as a linear operator. The resulting object may be used as if it were a matrix, e.g., `w = H * v`. The vector `Hv` is used as preallocated storage for the operation. The linear operator H represents the symmetric matrix determined by `(rows, cols, vals)`.

`H = hess_op!(nlp, x, rows, cols, Hv; obj_weight=1.0)`

Return the objective Hessian at `x`, with the objective function scaled by `obj_weight`, as a linear operator. The resulting object may be used as if it were a matrix, e.g., `w = H * v`. `(rows, cols)` should be the sparsity structure of the Hessian. The vector `Hv` is used as preallocated storage for the operation. The linear operator H represents

\[σ ∇²f(x),\]

with σ = `obj_weight`.

`H = hess_op!(nlp, x, y, Hv; obj_weight=1.0)`

Return the Lagrangian Hessian at `(x, y)`, with the objective function scaled by `obj_weight`, as a linear operator. The resulting object may be used as if it were a matrix, e.g., `w = H * v`. The vector `Hv` is used as preallocated storage for the operation. The linear operator H represents

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = `obj_weight`.

`H = hess_op!(nlp, x, y, rows, cols, Hv; obj_weight=1.0)`

Return the Lagrangian Hessian at `(x, y)`, with the objective function scaled by `obj_weight`, as a linear operator. The resulting object may be used as if it were a matrix, e.g., `w = H * v`. `(rows, cols)` should be the sparsity structure of the Hessian. The vector `Hv` is used as preallocated storage for the operation. The linear operator H represents

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = `obj_weight`.
NLPModels.hprod — Function

`Hv = hprod(nlp, x, v; obj_weight=1.0)`

Evaluate the product of the objective Hessian at `x` with the vector `v`, with the objective function scaled by `obj_weight`, where the objective Hessian is

\[σ ∇²f(x),\]

with σ = `obj_weight`.

`Hv = hprod(nlp, x, y, v; obj_weight=1.0)`

Evaluate the product of the Lagrangian Hessian at `(x, y)` with the vector `v`, with the objective function scaled by `obj_weight`, where the Lagrangian Hessian is

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = `obj_weight`.

NLPModels.hprod! — Function

`Hv = hprod!(nlp, x, y, v, Hv; obj_weight=1.0)`

Evaluate the product of the Lagrangian Hessian at `(x, y)` with the vector `v` in place, with the objective function scaled by `obj_weight`, where the Lagrangian Hessian is

\[∇²L(x,y) = σ ∇²f(x) + \sum_i yᵢ ∇²cᵢ(x),\]

with σ = `obj_weight`.
LinearOperators.reset! — Function

`reset!(counters)`

Reset evaluation counters.

`reset!(nlp)`

Reset evaluation counters in `nlp`.

NLPModels.reset_data! — Function

`reset_data!(nlp)`

Reset model data if appropriate. This method should be overloaded if a subtype of `AbstractNLPModel` contains data that should be reset, such as a quasi-Newton linear operator.
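Each evaluation updates a counter, and `reset!` clears them all. A small sketch (the model is an arbitrary example built with ADNLPModels.jl; `neval_obj` and friends are the counter accessors provided by NLPModels):

```julia
using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> sum(x .^ 2), ones(3))
obj(nlp, nlp.meta.x0)
grad(nlp, nlp.meta.x0)

neval_obj(nlp)   # objective evaluations so far: 1
neval_grad(nlp)  # gradient evaluations so far: 1

reset!(nlp)      # all counters back to zero
```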
AbstractNLSModel functions
NLPModels.residual — Function

`Fx = residual(nls, x)`

Computes $F(x)$, the residual at `x`.

NLPModels.residual! — Function

`Fx = residual!(nls, x, Fx)`

Computes $F(x)$, the residual at `x`, in place.

NLPModels.jac_residual — Function

`Jx = jac_residual(nls, x)`

Computes $J(x)$, the Jacobian of the residual at `x`.

NLPModels.jac_coord_residual — Function

`(rows, cols, vals) = jac_coord_residual(nls, x)`

Computes the Jacobian of the residual at `x` in sparse coordinate format.

NLPModels.jac_coord_residual! — Function

`vals = jac_coord_residual!(nls, x, vals)`

Computes the Jacobian of the residual at `x` in sparse coordinate format, rewriting `vals`. `rows` and `cols` are not rewritten.
NLPModels.jac_structure_residual — Function

`(rows, cols) = jac_structure_residual(nls)`

Returns the structure of the residual's Jacobian in sparse coordinate format.

NLPModels.jac_structure_residual! — Function

`(rows, cols) = jac_structure_residual!(nls, rows, cols)`

Returns the structure of the residual's Jacobian in sparse coordinate format in place.
NLPModels.jprod_residual — Function

`Jv = jprod_residual(nls, x, v)`

Computes the product of the Jacobian of the residual at `x` and a vector, i.e., $J(x)v$.

NLPModels.jprod_residual! — Function

`Jv = jprod_residual!(nls, x, v, Jv)`

Computes the product of the Jacobian of the residual at `x` and a vector, i.e., $J(x)v$, storing it in `Jv`.

NLPModels.jtprod_residual — Function

`Jtv = jtprod_residual(nls, x, v)`

Computes the product of the transpose of the Jacobian of the residual at `x` and a vector, i.e., $J(x)^Tv$.

NLPModels.jtprod_residual! — Function

`Jtv = jtprod_residual!(nls, x, v, Jtv)`

Computes the product of the transpose of the Jacobian of the residual at `x` and a vector, i.e., $J(x)^Tv$, storing it in `Jtv`.

NLPModels.jac_op_residual — Function

`Jx = jac_op_residual(nls, x)`

Computes $J(x)$, the Jacobian of the residual at `x`, in linear operator form.

NLPModels.jac_op_residual! — Function

`Jx = jac_op_residual!(nls, x, Jv, Jtv)`

Computes $J(x)$, the Jacobian of the residual at `x`, in linear operator form. The vectors `Jv` and `Jtv` are used as preallocated storage for the operations.

`Jx = jac_op_residual!(nls, rows, cols, vals, Jv, Jtv)`

Computes $J(x)$, the Jacobian of the residual given by `(rows, cols, vals)`, in linear operator form. The vectors `Jv` and `Jtv` are used as preallocated storage for the operations.

`Jx = jac_op_residual!(nls, x, rows, cols, Jv, Jtv)`

Computes $J(x)$, the Jacobian of the residual at `x`, in linear operator form. The vectors `Jv` and `Jtv` are used as preallocated storage for the operations. The structure of the Jacobian should be given by `(rows, cols)`.
NLPModels.hess_residual — Function

`H = hess_residual(nls, x, v)`

Computes the linear combination of the Hessians of the residuals at `x` with coefficients `v`.

NLPModels.hess_coord_residual — Function

`vals = hess_coord_residual(nls, x, v)`

Computes the linear combination of the Hessians of the residuals at `x` with coefficients `v` in sparse coordinate format.

NLPModels.hess_coord_residual! — Function

`vals = hess_coord_residual!(nls, x, v, vals)`

Computes the linear combination of the Hessians of the residuals at `x` with coefficients `v` in sparse coordinate format, rewriting `vals`.

NLPModels.hess_structure_residual — Function

`(rows, cols) = hess_structure_residual(nls)`

Returns the structure of the residual Hessian.

NLPModels.hess_structure_residual! — Function

`hess_structure_residual!(nls, rows, cols)`

Returns the structure of the residual Hessian in place.

NLPModels.jth_hess_residual — Function

`Hj = jth_hess_residual(nls, x, j)`

Computes the Hessian of the j-th residual at `x`.

NLPModels.hprod_residual — Function

`Hiv = hprod_residual(nls, x, i, v)`

Computes the product of the Hessian of the i-th residual at `x`, times the vector `v`.

NLPModels.hprod_residual! — Function

`Hiv = hprod_residual!(nls, x, i, v, Hiv)`

Computes the product of the Hessian of the i-th residual at `x`, times the vector `v`, and stores it in the vector `Hiv`.

NLPModels.hess_op_residual — Function

`Hop = hess_op_residual(nls, x, i)`

Computes the Hessian of the i-th residual at `x`, in linear operator form.

NLPModels.hess_op_residual! — Function

`Hop = hess_op_residual!(nls, x, i, Hiv)`

Computes the Hessian of the i-th residual at `x`, in linear operator form. The vector `Hiv` is used as preallocated storage for the operation.
Internal
NLPModels.coo_prod! — Function

`coo_prod!(rows, cols, vals, v, Av)`

Compute the product of a matrix `A` given by `(rows, cols, vals)` and the vector `v`. The result is stored in `Av`, whose length should equal the number of rows of `A`.

NLPModels.coo_sym_prod! — Function

`coo_sym_prod!(rows, cols, vals, v, Av)`

Compute the product of a symmetric matrix `A` given by `(rows, cols, vals)` and the vector `v`. The result is stored in `Av`, whose length should equal the number of rows of `A`. Only one triangle of `A` should be passed.
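The coordinate-format product can be sketched in a few lines of plain Julia; `my_coo_prod!` below is a hypothetical re-implementation for illustration, not the package's internal code:

```julia
# Hypothetical re-implementation of a COO matrix-vector product.
function my_coo_prod!(rows, cols, vals, v, Av)
  fill!(Av, zero(eltype(Av)))
  for k in eachindex(vals)
    Av[rows[k]] += vals[k] * v[cols[k]]
  end
  return Av
end

# A = [2 0; 3 4] in coordinate form
rows, cols, vals = [1, 2, 2], [1, 1, 2], [2.0, 3.0, 4.0]
Av = my_coo_prod!(rows, cols, vals, [1.0, 1.0], zeros(2))  # [2.0, 7.0]
```

The symmetric variant additionally scatters each off-diagonal entry into both `Av[rows[k]]` and `Av[cols[k]]`, which is why only one triangle is passed.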
NLPModels.@default_counters — Macro

`@default_counters Model inner`

Define functions relating counters of `Model` to counters of `Model.inner`.

NLPModels.@default_nlscounters — Macro

`@default_nlscounters Model inner`

Define functions relating NLS counters of `Model` to NLS counters of `Model.inner`.

NLPModels.increment! — Function

`increment!(nlp, s)`

Increment counter `s` of problem `nlp`.

NLPModels.decrement! — Function

`decrement!(nlp, s)`

Decrement counter `s` of problem `nlp`.