Reference
Contents
Index
- RegularizedProblems.RegularizedNLPModel
- RegularizedProblems.MIT_matrix_completion_model
- RegularizedProblems.bpdn_model
- RegularizedProblems.fh_model
- RegularizedProblems.group_lasso_model
- RegularizedProblems.nnmf_model
- RegularizedProblems.qp_rand_model
- RegularizedProblems.random_matrix_completion_model
RegularizedProblems.RegularizedNLPModel — Type

rmodel = RegularizedNLPModel(model, regularizer)
rmodel = RegularizedNLSModel(model, regularizer)

An aggregate type to represent a regularized optimization model, i.e., of the form

minimize f(x) + h(x),

where f is smooth (and is usually assumed to have Lipschitz-continuous gradient), and h is lower semi-continuous (and may have to be prox-bounded).
The regularized model is made of
- model <: AbstractNLPModel: the smooth part of the model, for example a FirstOrderModel
- h: the nonsmooth part of the model; typically a regularizer defined in ProximalOperators.jl
- selected: the subset of variables to which the regularizer h should be applied (default: all).
This aggregate type can be used to call solvers with a single object representing the model. It is especially useful with SolverBenchmark.jl, which expects problems to be defined by a single object.
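As an illustration, here is a minimal sketch (not taken from the package documentation) that wraps a basis-pursuit denoise model in a RegularizedNLPModel with an ℓ₁ regularizer from ProximalOperators.jl; the regularization weight 1.0 is arbitrary:

```julia
using RegularizedProblems, ProximalOperators

# smooth part: basis-pursuit denoise models (see bpdn_model below)
model, nls_model, sol = bpdn_model()

# nonsmooth part: ℓ₁ regularizer with an arbitrary weight
h = NormL1(1.0)

# aggregate objects representing f + h that can be passed to a solver as a single argument
rmodel = RegularizedNLPModel(model, h)
rnls = RegularizedNLSModel(nls_model, h)
```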
RegularizedProblems.MIT_matrix_completion_model — Method

model, nls_model, sol = MIT_matrix_completion_model()

A special case of the matrix completion problem in which the exact image is a noisy MIT logo.
See the documentation of random_matrix_completion_model() for more information.
RegularizedProblems.bpdn_model — Method

model, nls_model, sol = bpdn_model(args...; kwargs...)
model, nls_model, sol = bpdn_model(compound = 1, args...; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same basis-pursuit denoise problem, i.e., the under-determined linear least-squares objective
½ ‖Ax - b‖₂²,
where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ.
Arguments
- m :: Int: the number of rows of A
- n :: Int: the number of columns of A (with n ≥ m)
- k :: Int: the number of nonzero elements in x̄
- noise :: Float64: noise standard deviation σ (default: 0.01).
The second form calls the first form with arguments
m = 200 * compound
n = 512 * compound
k = 10 * compound

Keyword arguments
- bounds :: Bool: whether or not to include nonnegativity bounds in the model (default: false).
Return Value
An instance of an NLPModel and of an NLSModel that represent the same basis-pursuit denoise problem, and the exact solution x̄.
If bounds == true, the positive part of x̄ is returned.
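For instance, a usage sketch relying only on the defaults documented above (m = 200, n = 512, k = 10) and the bounds keyword:

```julia
using RegularizedProblems

# basis-pursuit denoise instance with nonnegativity bounds;
# sol is then the positive part of the sparse ground truth x̄
model, nls_model, sol = bpdn_model(bounds = true)
```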
RegularizedProblems.fh_model — Method

fh_model(; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same Fitzhugh-Nagumo problem, i.e., the over-determined nonlinear least-squares objective
½ ‖F(x)‖₂²,
where F: ℝ⁵ → ℝ²⁰² represents the fitting error between a simulation of the Fitzhugh-Nagumo model with parameters x and a simulation of the Van der Pol oscillator with fixed, but unknown, parameters.
Keyword Arguments
All keyword arguments are passed directly to the ADNLPModel (or ADNLSModel) constructor, e.g., to set the automatic differentiation backend.
Return Value
An instance of an ADNLPModel that represents the Fitzhugh-Nagumo problem, an instance of an ADNLSModel that represents the same problem, and the exact solution.
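A minimal usage sketch follows; the call to obj uses only the standard NLPModels.jl API, and, depending on the installed version, companion packages (e.g., ADNLPModels.jl and an ODE solver) may have to be loaded for this constructor to be available (an assumption, not stated above):

```julia
using RegularizedProblems, NLPModels

model, nls_model, sol = fh_model()

# least-squares objective ½‖F(x)‖₂² evaluated at the exact parameters
fval = obj(nls_model, sol)
```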
RegularizedProblems.group_lasso_model — Method

model, nls_model, sol = group_lasso_model(; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the group-lasso problem, i.e., the under-determined linear least-squares objective
½ ‖Ax - b‖₂²,
where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ. Note that with this format, all groups have the same number of elements and the number of groups divides the total number of elements evenly.
Keyword Arguments
- m :: Int: the number of rows of A (default: 200)
- n :: Int: the number of columns of A, with n ≥ m (default: 512)
- g :: Int: the number of groups (default: 16)
- ag :: Int: the number of active groups (default: 5)
- noise :: Float64: noise amount (default: 0.01)
- compound :: Int: multiplier for m, n, g, and ag (default: 1).
Return Value
An instance of an NLPModel and an instance of an NLSModel that represent the group-lasso problem, together with the true solution x, the number of groups g, the indices of the active groups, and a matrix whose rows contain the indices of x belonging to each group.
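A usage sketch with only the keyword arguments documented above (the extra return values are omitted from the destructuring):

```julia
using RegularizedProblems

# smaller instance than the defaults: 8 groups, 2 of which are active
model, nls_model, sol = group_lasso_model(m = 100, n = 256, g = 8, ag = 2)
```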
RegularizedProblems.nnmf_model — Function

model, nls_model, Av, selected = nnmf_model(m = 100, n = 50, k = 10, T = Float64)

Return an instance of an NLPModel and an instance of an NLSModel representing the non-negative matrix factorization objective

f(W, H) = ½ ‖A - WH‖₂²,

where A ∈ ℝᵐˣⁿ has non-negative entries and can be separated into k clusters, and Av = A[:]. The vector of indices selected = k*m+1 : k*(m+n) indicates the components of W ∈ ℝᵐˣᵏ and H ∈ ℝᵏˣⁿ to which the regularizer should be applied (so that the regularizer only applies to the entries of H).
Arguments
- m :: Int: the number of rows of A
- n :: Int: the number of columns of A (with n ≥ m)
- k :: Int: the number of clusters
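A usage sketch with the default sizes shown in the signature; the calls below assume only the standard NLPModels.jl API:

```julia
using RegularizedProblems, NLPModels

model, nls_model, Av, selected = nnmf_model(100, 50, 10, Float64)

# `selected` indexes the entries of H within the concatenated variables [W[:]; H[:]],
# so a regularizer attached to this model would act on H only
x0 = model.meta.x0      # default starting point stored in the model
fx = obj(model, x0)     # value of ½‖A - WH‖₂² at x0
```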
RegularizedProblems.qp_rand_model — Method

model = qp_rand_model(n = 100_000; dens = 1.0e-4, convex = false)

Return an instance of a QuadraticModel representing
min cᵀx + ½ xᵀHx s.t. l ≤ x ≤ u,
with H = A + A' or H = A * A' (see the convex keyword argument) where A is a random square matrix with density dens, l = -e - tₗ and u = e + tᵤ where e is the vector of ones, and tₗ and tᵤ are sampled from a uniform distribution between 0 and 1.
Arguments
- n :: Int: size of the problem (default: 100_000).
Keyword arguments
- dens :: Real: density of A, with 0 < dens ≤ 1, used to generate the quadratic model (default: 1.0e-4)
- convex :: Bool: true to generate a positive definite H (default: false).
Return Value
An instance of a QuadraticModel.
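A usage sketch with a smaller, convex instance (the parameter values are arbitrary):

```julia
using RegularizedProblems

# convex = true yields H = A * A', so the quadratic objective is convex
model = qp_rand_model(10_000; dens = 1.0e-3, convex = true)
```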
RegularizedProblems.random_matrix_completion_model — Method

model, nls_model, sol = random_matrix_completion_model(; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same matrix completion problem, i.e., the square linear least-squares objective
½ ‖P(X - A)‖²
in the Frobenius norm, where X is the unknown image represented as an m x n matrix, A is a fixed image, and the operator P only retains a certain subset of pixels of X and A.
Keyword Arguments
- m :: Int: the number of rows of X and A (default: 100)
- n :: Int: the number of columns of X and A (default: 100)
- r :: Int: the desired rank of A (default: 5)
- sr :: AbstractFloat: a threshold between 0 and 1 used to determine the set of pixels retained by the operator P (default: 0.8)
- va :: AbstractFloat: the variance of a first Gaussian perturbation to be applied to A (default: 1.0e-4)
- vb :: AbstractFloat: the variance of a second Gaussian perturbation to be applied to A (default: 1.0e-2)
- c :: AbstractFloat: the coefficient of the convex combination of the two Gaussian perturbations (default: 0.2).
Return Value
An instance of an NLPModel and of an NLSModel that represent the same matrix completion problem, and the exact solution.
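A usage sketch relying only on the keyword arguments documented above:

```julia
using RegularizedProblems

# 50×50 completion problem with target rank 3, retaining roughly 70% of the pixels
model, nls_model, sol = random_matrix_completion_model(m = 50, n = 50, r = 3, sr = 0.7)
```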