Converts a nonlinear least-squares problem with residual $F(x)$ to a nonlinear optimization problem with constraints $F(x) = r$ and objective $\tfrac{1}{2}\|r\|^2$. In other words, converts

\[\begin{aligned} \min_x \quad & \tfrac{1}{2}\|F(x)\|^2 \\ \mathrm{s.t.} \quad & c_L ≤ c(x) ≤ c_U \\ & ℓ ≤ x ≤ u \end{aligned}\]

to

\[\begin{aligned} \min_{x,r} \quad & \tfrac{1}{2}\|r\|^2 \\ \mathrm{s.t.} \quad & F(x) - r = 0 \\ & c_L ≤ c(x) ≤ c_U \\ & ℓ ≤ x ≤ u \end{aligned}\]

If you would rather work with the first problem, the nls model already works as an NLPModel of that format.
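As a quick sanity check, the equivalence of the two formulations can be illustrated in plain Julia: at any point satisfying the constraint $F(x) - r = 0$, the two objectives coincide. The residual below is a toy example chosen for illustration, not part of any API:

```julia
using LinearAlgebra

# Toy residual for min_x ½‖F(x)‖² (an assumed example)
F(x) = [x[1] - 1.0; x[2]^2 - x[1]]

x = [0.5, 2.0]
r = F(x)                      # the constraint F(x) - r = 0 holds by construction

obj_nls  = 0.5 * norm(F(x))^2 # objective of the original NLS problem
obj_form = 0.5 * norm(r)^2    # objective of the reformulated problem in (x, r)

obj_nls ≈ obj_form            # true: the objectives agree on the feasible set
```

The reformulation trades a composite objective for a simple quadratic one, moving the nonlinearity into the constraints.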




A feasibility residual model is created from an NLPModel of the form

\[\begin{aligned} \min_x \quad & f(x) \\ \mathrm{s.t.} \quad & c_L ≤ c(x) ≤ c_U \\ & \ell ≤ x ≤ u, \end{aligned}\]

by creating slack variables $s = c(x)$ and defining an NLS problem from the equality constraints. The resulting problem is a bound-constrained nonlinear least-squares problem with residual function $F(x,s) = c(x) - s$:

\[\begin{aligned} \min_{x,s} \quad & \tfrac{1}{2} \|c(x) - s\|^2 \\ \mathrm{s.t.} \quad & \ell ≤ x ≤ u \\ & c_L ≤ s ≤ c_U. \end{aligned}\]

Notice that this problem is an AbstractNLSModel, so the residual value, Jacobian and Hessian are explicitly defined through the NLS API. The slack variables are created using SlackModel. If $\ell_i = u_i$, no slack variable is created. In particular, if there are only equality constraints of the form $c(x) = 0$, the resulting NLS is simply $\min_x \tfrac{1}{2}\|c(x)\|^2$.
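For a fixed $x$, the optimal slack in this problem is the projection of $c(x)$ onto $[c_L, c_U]$, so the optimal residual measures the constraint violation at $x$. A minimal sketch with toy data (the constraint function and bounds below are assumptions made for illustration):

```julia
using LinearAlgebra

# Toy constraint data (assumed example): cL ≤ c(x) ≤ cU
c(x) = [x[1] + x[2]; x[1] * x[2]]
cL, cU = [0.0, 1.0], [1.0, 2.0]

x = [2.0, 3.0]              # c(x) = [5.0, 6.0], infeasible
s = clamp.(c(x), cL, cU)    # optimal slack for fixed x: projection onto [cL, cU]

res = c(x) - s              # residual F(x, s) = c(x) - s
0.5 * norm(res)^2           # measures the constraint violation at x
```

When $x$ is feasible, the projection recovers $c(x)$ itself and the residual vanishes.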


A model whose only inequality constraints are bounds.

Given a model, this type represents a second model in which slack variables are introduced so as to convert linear and nonlinear inequality constraints to equality constraints and bounds. More precisely, if the original model has the form

\[\begin{aligned} \min_x \quad & f(x)\\ \mathrm{s.t.} \quad & c_L ≤ c(x) ≤ c_U,\\ & ℓ ≤ x ≤ u, \end{aligned}\]

the new model appears to the user as

\[\begin{aligned} \min_X \quad & f(X)\\ \mathrm{s.t.} \quad & g(X) = 0,\\ & L ≤ X ≤ U. \end{aligned}\]

The unknowns $X = (x, s)$ contain the original variables and slack variables $s$. The latter are such that the new model has the general form

\[\begin{aligned} \min_x \quad & f(x)\\ \mathrm{s.t.} \quad & c(x) - s = 0,\\ & c_L ≤ s ≤ c_U,\\ & ℓ ≤ x ≤ u. \end{aligned}\]

although no slack variables are introduced for equality constraints.

The slack variables are implicitly ordered as linear and then nonlinear, and as [s(low), s(upp), s(rng)], where low, upp and rng represent the indices of the constraints of the form $c_L ≤ c(x) < ∞$, $-∞ < c(x) ≤ c_U$ and $c_L ≤ c(x) ≤ c_U$, respectively.
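The classification into low, upp and rng can be illustrated with a small helper; slack_order below is a hypothetical function written for this example, not part of the package:

```julia
# Hypothetical helper illustrating the slack ordering [s(low); s(upp); s(rng)]
# from constraint bounds cL, cU. Equality constraints (cL[i] == cU[i]) get no
# slack variable, matching the behavior described above.
function slack_order(cL, cU)
  low = filter(i -> cL[i] > -Inf && cU[i] == Inf, eachindex(cL))  # cL ≤ c(x) < ∞
  upp = filter(i -> cL[i] == -Inf && cU[i] < Inf, eachindex(cL))  # -∞ < c(x) ≤ cU
  rng = filter(i -> -Inf < cL[i] < cU[i] < Inf, eachindex(cL))    # cL ≤ c(x) ≤ cU
  vcat(low, upp, rng)
end

cL = [0.0, -Inf, 1.0, 2.0]
cU = [Inf,  3.0, 2.0, 2.0]   # constraint 4 is an equality: no slack variable
slack_order(cL, cU)          # → [1, 2, 3]
```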

DiagonalPSBModel(nlp; d0 = fill!(S(undef, nlp.meta.nvar), 1.0))

Construct a DiagonalPSBModel from another NLPModel nlp, in which the Hessian is approximated by a diagonal PSB quasi-Newton operator. d0 is the initial approximation of the diagonal of the Hessian; by default it is a vector of ones. See the DiagonalPSB operator documentation.
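A minimal sketch of a diagonal PSB-style update, assuming the usual weak-secant form: choose the diagonal closest to the current one, in the Frobenius norm, such that $s^T D_+ s = s^T y$. The formula below is a sketch under that assumption, not necessarily the exact implementation behind DiagonalPSB:

```julia
using LinearAlgebra

# Diagonal PSB-style quasi-Newton update (assumed weak-secant formula):
# D₊ = D + ((sᵀy - sᵀDs) / ‖s.^2‖²) Diagonal(s.^2)
function diagonal_psb_update(d, s, y)
  s2 = s .^ 2
  d .+ ((dot(s, y) - dot(s2, d)) / dot(s2, s2)) .* s2
end

d = ones(3)              # initial diagonal approximation (cf. d0)
s = [1.0, 2.0, 0.5]      # step
y = [2.0, 1.0, 0.25]     # gradient difference

dnew = diagonal_psb_update(d, s, y)
dot(s .^ 2, dnew) ≈ dot(s, y)   # true: the weak secant equation holds
```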

relative_columns_indices!(cols, nlp, nj, n, jlow, model_jlow)

The input jlow should contain indices relative to nlp.model.meta.nln. Store in cols[nj .+ (1:length(jlow))] the values n .+ jind, where jind are the indices of the entries of jlow relative to setdiff(nlp.model.meta.nln, nlp.model.meta.jfix).

This is the 0-allocation version of:

ind = setdiff(nlp.model.meta.nln, nlp.model.meta.jfix)  # nonlinear, non-fixed constraints
jlow_nln = get_slack_ind(nlp.model.meta.jlow, ind)      # indices of jlow relative to ind
lj = length(jlow_nln)
cols[(nj + 1):(nj + lj)] .= (n .+ jlow_nln)
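The index arithmetic can be illustrated on toy data; get_slack_ind is given a hypothetical stand-in definition here (its exact semantics in the package are an assumption made for this illustration):

```julia
# Hypothetical stand-in for get_slack_ind: positions of the entries of x
# within the index set ind.
get_slack_ind(x, ind) = [findfirst(==(i), ind) for i in x]

nln, jfix = [2, 3, 5, 7], [3]        # toy nonlinear and fixed-constraint indices
jlow = [2, 7]                        # lower-bounded constraints among nln
n, nj = 10, 0                        # offset and current fill position in cols
cols = zeros(Int, 4)

ind = setdiff(nln, jfix)             # → [2, 5, 7]
jlow_nln = get_slack_ind(jlow, ind)  # → [1, 3]: positions of jlow within ind
lj = length(jlow_nln)
cols[(nj + 1):(nj + lj)] .= n .+ jlow_nln
cols                                 # → [11, 13, 0, 0]
```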