CGLS
Krylov.cgls — Function

```julia
(x, stats) = cgls(A, b::AbstractVector{FC};
                  M=I, ldiv::Bool=false, radius::T=zero(T),
                  λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                  itmax::Int=0, timemax::Float64=Inf,
                  verbose::Int=0, history::Bool=false,
                  callback=workspace->false, iostream::IO=kstdout)
```

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.
Solve the regularized linear least-squares problem

    minimize ‖b - Ax‖₂² + λ‖x‖₂²

of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations

    (AᴴA + λI) x = Aᴴb

but is more stable.
CGLS produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSQR; it can be slightly less accurate, but it is simpler to implement.
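As a quick illustration (random data of our own, not from the package documentation), the sketch below solves a small regularized problem with cgls and checks the result against the dense solution of the normal equations:

```julia
using Krylov, LinearAlgebra

m, n = 10, 5
A = rand(m, n)
b = rand(m)
λ = 1.0e-2

(x, stats) = cgls(A, b, λ=λ)

# CGLS is mathematically equivalent to CG applied to (AᴴA + λI) x = Aᴴb,
# so the dense solution of the normal equations should agree with x.
x_ne = (A' * A + λ * I) \ (A' * b)
println(norm(x - x_ne) / norm(x_ne))  # small, up to the solver tolerances
```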
Interface
To easily switch between Krylov methods, use the generic interface krylov_solve with method = :cgls.
For an in-place variant that reuses memory across solves, see cgls!.
Input arguments
- A: a linear operator that models a matrix of dimension m × n;
- b: a vector of length m.
Keyword arguments
- M: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
- ldiv: define whether the preconditioner uses ldiv! or mul!;
- radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
- λ: regularization parameter;
- atol: absolute stopping tolerance based on the residual norm;
- rtol: relative stopping tolerance based on the residual norm;
- itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
- timemax: the time limit in seconds;
- verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
- history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
- callback: function or functor called as callback(workspace) that returns true if the Krylov method should terminate, and false otherwise;
- iostream: stream to which output is logged.
Output arguments
- x: a dense vector of length n;
- stats: statistics collected on the run in a SimpleStats structure.
References
- M. R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6), pp. 409–436, 1952.
- A. Björck, T. Elfving and Z. Strakos, Stability of Conjugate Gradient and Lanczos Methods for Linear Least Squares Problems, SIAM Journal on Matrix Analysis and Applications, 19(3), pp. 720–736, 1998.
Krylov.cgls! — Function

```julia
workspace = cgls!(workspace::CglsWorkspace, A, b; kwargs...)
```

In this call, kwargs are keyword arguments of cgls.
See CglsWorkspace for instructions on how to create the workspace.
For a more generic interface, you can use krylov_workspace with method = :cgls to allocate the workspace, and krylov_solve! to run the Krylov method in-place.
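A hedged sketch of memory reuse across consecutive solves; the CglsWorkspace constructor is assumed here to accept (A, b), and Krylov.solution is assumed to retrieve the current solution (check CglsWorkspace for the exact constructors and accessors):

```julia
using Krylov

m, n = 10, 5
A = rand(m, n)
workspace = CglsWorkspace(A, rand(m))  # allocate once (constructor form assumed)

for _ in 1:3
    b = rand(m)
    cgls!(workspace, A, b)             # solve in-place, reusing the workspace
    x = Krylov.solution(workspace)     # retrieve the solution (accessor assumed)
end
```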
CGLS-LANCZOS-SHIFT
Krylov.cgls_lanczos_shift — Function

```julia
(x, stats) = cgls_lanczos_shift(A, b::AbstractVector{FC}, shifts::AbstractVector{T};
                                M=I, λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                                radius::T=zero(T), itmax::Int=0, verbose::Int=0,
                                history::Bool=false, callback=workspace->false,
                                iostream::IO=kstdout)
```

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.
Solve the regularized linear least-squares problem

    minimize ‖b - Ax‖₂² + λ‖x‖₂²

using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations

    (AᴴA + λI) x = Aᴴb

but is more stable.
CGLS produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSQR; it can be slightly less accurate, but it is simpler to implement.
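For instance, the shifted variant returns one solution per shift in a single run; the sketch below uses illustrative data and shift values of our own:

```julia
using Krylov

m, n = 10, 5
A = rand(m, n)
b = rand(m)
shifts = [1.0e-4, 1.0e-2, 1.0]  # three regularization parameters

(x, stats) = cgls_lanczos_shift(A, b, shifts)
# x[i] approximately solves (AᴴA + shifts[i] * I) x = Aᴴb, for i = 1, 2, 3
```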
Interface
To easily switch between Krylov methods, use the generic interface krylov_solve with method = :cgls_lanczos_shift.
For an in-place variant that reuses memory across solves, see cgls_lanczos_shift!.
Input arguments
- A: a linear operator that models a matrix of dimension m × n;
- b: a vector of length m;
- shifts: a vector of length p.
Keyword arguments
- M: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
- ldiv: define whether the preconditioner uses ldiv! or mul!;
- atol: absolute stopping tolerance based on the residual norm;
- rtol: relative stopping tolerance based on the residual norm;
- itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
- timemax: the time limit in seconds;
- verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
- history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
- callback: function or functor called as callback(workspace) that returns true if the Krylov method should terminate, and false otherwise;
- iostream: stream to which output is logged.
Output arguments
- x: a vector of p dense vectors, each one of length n;
- stats: statistics collected on the run in a LanczosShiftStats structure.
References
- M. R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6), pp. 409–436, 1952.
- A. Björck, T. Elfving and Z. Strakos, Stability of Conjugate Gradient and Lanczos Methods for Linear Least Squares Problems, SIAM Journal on Matrix Analysis and Applications, 19(3), pp. 720–736, 1998.
Krylov.cgls_lanczos_shift! — Function

```julia
workspace = cgls_lanczos_shift!(workspace::CglsLanczosShiftWorkspace, A, b, shifts; kwargs...)
```

In this call, kwargs are keyword arguments of cgls_lanczos_shift.
See CglsLanczosShiftWorkspace for instructions on how to create the workspace.
For a more generic interface, you can use krylov_workspace with method = :cgls_lanczos_shift to allocate the workspace, and krylov_solve! to run the Krylov method in-place.
CRLS
Krylov.crls — Function

```julia
(x, stats) = crls(A, b::AbstractVector{FC};
                  M=I, ldiv::Bool=false, radius::T=zero(T),
                  λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                  itmax::Int=0, timemax::Float64=Inf,
                  verbose::Int=0, history::Bool=false,
                  callback=workspace->false, iostream::IO=kstdout)
```

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.
Solve the linear least-squares problem

    minimize ‖b - Ax‖₂² + λ‖x‖₂²

of size m × n using the Conjugate Residuals (CR) method. This method is equivalent to applying MINRES to the normal equations

    (AᴴA + λI) x = Aᴴb.

This implementation recursively updates the residual r := b - Ax.
CRLS produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSMR; it can be substantially less accurate, but it is simpler to implement.
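A minimal sketch of monitoring both residual histories; the data are random, and the field names stats.residuals and stats.Aresiduals are assumptions based on SimpleStats, not taken from this page:

```julia
using Krylov

m, n = 20, 10
A = rand(m, n)
b = rand(m)

(x, stats) = crls(A, b, history=true)
println(stats.residuals)   # ‖r‖₂ per iteration (assumed field name)
println(stats.Aresiduals)  # ‖Aᴴr‖₂ per iteration (assumed field name)
```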
Interface
To easily switch between Krylov methods, use the generic interface krylov_solve with method = :crls.
For an in-place variant that reuses memory across solves, see crls!.
Input arguments
- A: a linear operator that models a matrix of dimension m × n;
- b: a vector of length m.
Keyword arguments
- M: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
- ldiv: define whether the preconditioner uses ldiv! or mul!;
- radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
- λ: regularization parameter;
- atol: absolute stopping tolerance based on the residual norm;
- rtol: relative stopping tolerance based on the residual norm;
- itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
- timemax: the time limit in seconds;
- verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
- history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
- callback: function or functor called as callback(workspace) that returns true if the Krylov method should terminate, and false otherwise;
- iostream: stream to which output is logged.
Output arguments
- x: a dense vector of length n;
- stats: statistics collected on the run in a SimpleStats structure.
Reference
- D. C.-L. Fong, Minimum-Residual Methods for Sparse Least-Squares using Golub-Kahan Bidiagonalization, Ph.D. Thesis, Stanford University, 2011.
Krylov.crls! — Function

```julia
workspace = crls!(workspace::CrlsWorkspace, A, b; kwargs...)
```

In this call, kwargs are keyword arguments of crls.
See CrlsWorkspace for instructions on how to create the workspace.
For a more generic interface, you can use krylov_workspace with method = :crls to allocate the workspace, and krylov_solve! to run the Krylov method in-place.
LSLQ
Krylov.lslq — Function

```julia
(x, stats) = lslq(A, b::AbstractVector{FC};
                  M=I, N=I, ldiv::Bool=false,
                  window::Int=5, transfer_to_lsqr::Bool=false,
                  sqd::Bool=false, λ::T=zero(T),
                  σ::T=zero(T), etol::T=√eps(T),
                  utol::T=√eps(T), btol::T=√eps(T),
                  conlim::T=1/√eps(T), atol::T=√eps(T),
                  rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=workspace->false, iostream::IO=kstdout)
```

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.
Solve the regularized linear least-squares problem

    minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSLQ method, where λ ≥ 0 is a regularization parameter. LSLQ is formally equivalent to applying SYMMLQ to the normal equations

    (AᴴA + λ²I) x = Aᴴb

but is more stable.
If λ > 0, we solve the symmetric and quasi-definite system

    [ E      A  ] [ r ]   [ b ]
    [ Aᴴ   -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.
The system above represents the optimality conditions of

    minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSLQ is then equivalent to applying SYMMLQ to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).
If λ = 0, we solve the symmetric and indefinite system

    [ E    A ] [ r ]   [ b ]
    [ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

    minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).
Main features
- the solution estimate is updated along orthogonal directions
- the norm of the solution estimate ‖xᴸₖ‖₂ is increasing
- the error ‖eₖ‖₂ := ‖xᴸₖ - x*‖₂ is decreasing
- it is possible to transition cheaply from the LSLQ iterate to the LSQR iterate if there is an advantage (there always is in terms of error); see the sketch below
- if A is rank deficient, LSLQ identifies the minimum-norm least-squares solution
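A minimal sketch of that transfer on random data of our own: with transfer_to_lsqr=true, the returned point is the LSQR iterate whenever it exists.

```julia
using Krylov

m, n = 20, 10
A = rand(m, n)
b = rand(m)

(x_lq, _) = lslq(A, b)                          # LSLQ point
(x_qr, _) = lslq(A, b, transfer_to_lsqr=true)   # transferred to the LSQR point
```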
Interface
To easily switch between Krylov methods, use the generic interface krylov_solve with method = :lslq.
For an in-place variant that reuses memory across solves, see lslq!.
Input arguments
- A: a linear operator that models a matrix of dimension m × n;
- b: a vector of length m.
Keyword arguments
- M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
- N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
- ldiv: define whether the preconditioners use ldiv! or mul!;
- window: number of iterations used to accumulate a lower bound on the error;
- transfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;
- sqd: if true, set λ=1 for Hermitian quasi-definite systems;
- λ: regularization parameter;
- σ: strict lower bound on the smallest positive singular value σₘᵢₙ, such as σ = (1-10⁻⁷)σₘᵢₙ;
- etol: stopping tolerance based on the lower bound on the error;
- utol: stopping tolerance based on the upper bound on the error;
- btol: stopping tolerance used to detect zero-residual problems;
- conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
- atol: absolute stopping tolerance based on the residual norm;
- rtol: relative stopping tolerance based on the residual norm;
- itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
- timemax: the time limit in seconds;
- verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
- history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
- callback: function or functor called as callback(workspace) that returns true if the Krylov method should terminate, and false otherwise;
- iostream: stream to which output is logged.
Output arguments
- x: a dense vector of length n;
- stats: statistics collected on the run in a LSLQStats structure.
  - stats.err_lbnds is a vector of lower bounds on the LQ error; the vector is empty if window is set to zero
  - stats.err_ubnds_lq is a vector of upper bounds on the LQ error; the vector is empty if σ is left at zero
  - stats.err_ubnds_cg is a vector of upper bounds on the CG error; the vector is empty if σ is left at zero
  - stats.error_with_bnd is a boolean indicating whether there was an error in the computation of the upper bounds (cancellation errors, too large σ, ...)
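For example, supplying a strict lower bound σ on the smallest positive singular value activates the error upper bounds described above; the dense svdvals call below is only for this small illustrative example:

```julia
using Krylov, LinearAlgebra

m, n = 20, 10
A = rand(m, n)
b = rand(m)
σmin = svdvals(A)[end]       # smallest singular value (A has full column rank here)
σ = (1 - 1.0e-7) * σmin      # strict lower bound, as suggested above

(x, stats) = lslq(A, b, σ=σ)
println(stats.err_lbnds)     # lower bounds on the LQ error
println(stats.err_ubnds_lq)  # upper bounds on the LQ error
println(stats.err_ubnds_cg)  # upper bounds on the CG error
```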
Stopping conditions
The iterations stop as soon as one of the following conditions holds true:

- the optimality residual is sufficiently small (stats.status = "found approximate minimum least-squares solution") in the sense that either
  - ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ atol, or
  - 1 + ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ 1
- an approximate zero-residual solution has been found (stats.status = "found approximate zero-residual solution") in the sense that either
  - ‖r‖ / ‖b‖ ≤ btol + atol ‖A‖ * ‖xᴸ‖ / ‖b‖, or
  - 1 + ‖r‖ / ‖b‖ ≤ 1
- the estimated condition number of A is too large in the sense that either
  - 1/cond(A) ≤ 1/conlim (stats.status = "condition number exceeds tolerance"), or
  - 1 + 1/cond(A) ≤ 1 (stats.status = "condition number seems too large for this machine")
- the lower bound on the LQ forward error is less than etol * ‖xᴸ‖
- the upper bound on the CG forward error is less than utol * ‖xᶜ‖
References
- R. Estrin, D. Orban and M. A. Saunders, Euclidean-norm error bounds for SYMMLQ and CG, SIAM Journal on Matrix Analysis and Applications, 40(1), pp. 235–253, 2019.
- R. Estrin, D. Orban and M. A. Saunders, LSLQ: An Iterative Method for Linear Least-Squares with an Error Minimization Property, SIAM Journal on Matrix Analysis and Applications, 40(1), pp. 254–275, 2019.
Krylov.lslq! — Function

```julia
workspace = lslq!(workspace::LslqWorkspace, A, b; kwargs...)
```

In this call, kwargs are keyword arguments of lslq.
See LslqWorkspace for instructions on how to create the workspace.
For a more generic interface, you can use krylov_workspace with method = :lslq to allocate the workspace, and krylov_solve! to run the Krylov method in-place.
LSQR
Krylov.lsqr — Function

```julia
(x, stats) = lsqr(A, b::AbstractVector{FC};
                  M=I, N=I, ldiv::Bool=false,
                  window::Int=5, sqd::Bool=false, λ::T=zero(T),
                  radius::T=zero(T), etol::T=√eps(T),
                  axtol::T=√eps(T), btol::T=√eps(T),
                  conlim::T=1/√eps(T), atol::T=zero(T),
                  rtol::T=zero(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=workspace->false, iostream::IO=kstdout)
```

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.
Solve the regularized linear least-squares problem

    minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSQR method, where λ ≥ 0 is a regularization parameter. LSQR is formally equivalent to applying CG to the normal equations

    (AᴴA + λ²I) x = Aᴴb

(and therefore to CGLS) but is more stable.
LSQR produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CGLS, though it can be slightly more accurate.
If λ > 0, LSQR solves the symmetric and quasi-definite system

    [ E      A  ] [ r ]   [ b ]
    [ Aᴴ   -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.
The system above represents the optimality conditions of

    minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSQR is then equivalent to applying CG to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).
If λ = 0, we solve the symmetric and indefinite system

    [ E    A ] [ r ]   [ b ]
    [ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

    minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).
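A hedged sketch of the quasi-definite case on random data of our own: diagonal E and F play the roles of the weight matrices, and their inverses are passed as the preconditioners M and N (sqd=true sets λ = 1):

```julia
using Krylov, LinearAlgebra

m, n = 20, 10
A = rand(m, n)
b = rand(m)
E = Diagonal(1.0 .+ rand(m))  # symmetric positive definite
F = Diagonal(1.0 .+ rand(n))  # symmetric positive definite

(x, stats) = lsqr(A, b, M=inv(E), N=inv(F), sqd=true)
```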
Interface
To easily switch between Krylov methods, use the generic interface krylov_solve with method = :lsqr.
For an in-place variant that reuses memory across solves, see lsqr!.
Input arguments
- A: a linear operator that models a matrix of dimension m × n;
- b: a vector of length m.
Keyword arguments
- M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
- N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
- ldiv: define whether the preconditioners use ldiv! or mul!;
- window: number of iterations used to accumulate a lower bound on the error;
- sqd: if true, set λ=1 for Hermitian quasi-definite systems;
- λ: regularization parameter;
- radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
- etol: stopping tolerance based on the lower bound on the error;
- axtol: tolerance on the backward error;
- btol: stopping tolerance used to detect zero-residual problems;
- conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
- atol: absolute stopping tolerance based on the residual norm;
- rtol: relative stopping tolerance based on the residual norm;
- itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
- timemax: the time limit in seconds;
- verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
- history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
- callback: function or functor called as callback(workspace) that returns true if the Krylov method should terminate, and false otherwise;
- iostream: stream to which output is logged.
Output arguments
- x: a dense vector of length n;
- stats: statistics collected on the run in a SimpleStats structure.
Reference
- C. C. Paige and M. A. Saunders, LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares, ACM Transactions on Mathematical Software, 8(1), pp. 43–71, 1982.
Krylov.lsqr! — Function

```julia
workspace = lsqr!(workspace::LsqrWorkspace, A, b; kwargs...)
```

In this call, kwargs are keyword arguments of lsqr.
See LsqrWorkspace for instructions on how to create the workspace.
For a more generic interface, you can use krylov_workspace with method = :lsqr to allocate the workspace, and krylov_solve! to run the Krylov method in-place.
LSMR
Krylov.lsmr — Function

```julia
(x, stats) = lsmr(A, b::AbstractVector{FC};
                  M=I, N=I, ldiv::Bool=false,
                  window::Int=5, sqd::Bool=false, λ::T=zero(T),
                  radius::T=zero(T), etol::T=√eps(T),
                  axtol::T=√eps(T), btol::T=√eps(T),
                  conlim::T=1/√eps(T), atol::T=zero(T),
                  rtol::T=zero(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=workspace->false, iostream::IO=kstdout)
```

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.
Solve the regularized linear least-squares problem

    minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSMR method, where λ ≥ 0 is a regularization parameter. LSMR is formally equivalent to applying MINRES to the normal equations

    (AᴴA + λ²I) x = Aᴴb

(and therefore to CRLS) but is more stable.
LSMR produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CRLS, though it can be substantially more accurate.
LSMR can also be used to find a null vector of a singular matrix A by solving the problem min ‖Aᴴx - b‖ with any nonzero vector b. At a minimizer, the residual vector r = b - Aᴴx will satisfy Ar = 0.
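A minimal sketch of this null-vector trick on a deliberately singular matrix (the construction of A and the test data are ours):

```julia
using Krylov, LinearAlgebra

n = 10
A = rand(n, n)
A[:, end] = vec(sum(A[:, 1:end-1], dims=2))  # dependent columns: A is singular
b = rand(n)

(x, stats) = lsmr(A', b)   # minimize ‖Aᴴx - b‖
r = b - A' * x             # at a minimizer, Ar = 0
println(norm(A * r))       # ≈ 0: r is a null vector of A
```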
If λ > 0, we solve the symmetric and quasi-definite system

    [ E      A  ] [ r ]   [ b ]
    [ Aᴴ   -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.
The system above represents the optimality conditions of

    minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSMR is then equivalent to applying MINRES to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).
If λ = 0, we solve the symmetric and indefinite system

    [ E    A ] [ r ]   [ b ]
    [ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

    minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).
Interface
To easily switch between Krylov methods, use the generic interface krylov_solve with method = :lsmr.
For an in-place variant that reuses memory across solves, see lsmr!.
Input arguments
- A: a linear operator that models a matrix of dimension m × n;
- b: a vector of length m.
Keyword arguments
- M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
- N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
- ldiv: define whether the preconditioners use ldiv! or mul!;
- window: number of iterations used to accumulate a lower bound on the error;
- sqd: if true, set λ=1 for Hermitian quasi-definite systems;
- λ: regularization parameter;
- radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
- etol: stopping tolerance based on the lower bound on the error;
- axtol: tolerance on the backward error;
- btol: stopping tolerance used to detect zero-residual problems;
- conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
- atol: absolute stopping tolerance based on the residual norm;
- rtol: relative stopping tolerance based on the residual norm;
- itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
- timemax: the time limit in seconds;
- verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
- history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
- callback: function or functor called as callback(workspace) that returns true if the Krylov method should terminate, and false otherwise;
- iostream: stream to which output is logged.
Output arguments
- x: a dense vector of length n;
- stats: statistics collected on the run in a LsmrStats structure.
Reference
- D. C.-L. Fong and M. A. Saunders, LSMR: An Iterative Algorithm for Sparse Least Squares Problems, SIAM Journal on Scientific Computing, 33(5), pp. 2950–2971, 2011.
Krylov.lsmr! — Function

```julia
workspace = lsmr!(workspace::LsmrWorkspace, A, b; kwargs...)
```

In this call, kwargs are keyword arguments of lsmr.
See LsmrWorkspace for instructions on how to create the workspace.
For a more generic interface, you can use krylov_workspace with method = :lsmr to allocate the workspace, and krylov_solve! to run the Krylov method in-place.
USYMQR
Krylov.usymqr — Function

```julia
(x, stats) = usymqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                    atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                    timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                    callback=workspace->false, iostream::IO=kstdout)
```

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.
```julia
(x, stats) = usymqr(A, b, c, x0::AbstractVector; kwargs...)
```

USYMQR can be warm-started from an initial guess x0, where kwargs are the same keyword arguments as above.
USYMQR solves the linear least-squares problem min ‖b - Ax‖² of size m × n. USYMQR solves Ax = b if it is consistent.
USYMQR is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process; a default value can be b or Aᴴb depending on the shape of A. The residual norm ‖b - Ax‖ decreases monotonically in USYMQR. When A is Hermitian and b = c, USYMQR is equivalent to MINRES, and USYMQR can be seen as a generalization of MINRES.
It can also be applied to under-determined and over-determined problems. USYMQR finds the minimum-norm solution if problems are inconsistent.
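A minimal sketch on an over-determined problem, taking c = Aᴴb as the second initial vector (one of the defaults suggested above for this shape); the test data are ours:

```julia
using Krylov, LinearAlgebra

m, n = 20, 10
A = rand(m, n)
b = rand(m)
c = A' * b   # a length-n initialization vector

(x, stats) = usymqr(A, b, c)
println(norm(A' * (b - A * x)))  # optimality residual of the least-squares problem
```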
Interface
To easily switch between Krylov methods, use the generic interface krylov_solve with method = :usymqr.
For an in-place variant that reuses memory across solves, see usymqr!.
Input arguments
- A: a linear operator that models a matrix of dimension m × n;
- b: a vector of length m;
- c: a vector of length n.
Optional argument
- x0: a vector of length n that represents an initial guess of the solution x.
Keyword arguments
- atol: absolute stopping tolerance based on the residual norm;
- rtol: relative stopping tolerance based on the residual norm;
- itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
- timemax: the time limit in seconds;
- verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
- history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
- callback: function or functor called as callback(workspace) that returns true if the Krylov method should terminate, and false otherwise;
- iostream: stream to which output is logged.
Output arguments
- x: a dense vector of length n;
- stats: statistics collected on the run in a SimpleStats structure.
References
- M. A. Saunders, H. D. Simon, and E. L. Yip, Two Conjugate-Gradient-Type Methods for Unsymmetric Linear Equations, SIAM Journal on Numerical Analysis, 25(4), pp. 927–940, 1988.
- L. Reichel and Q. Ye, A generalized LSQR algorithm, Numerical Linear Algebra with Applications, 15(7), pp. 643–660, 2008.
- A. Buttari, D. Orban, D. Ruiz and D. Titley-Peloquin, A tridiagonalization method for symmetric saddle-point and quasi-definite systems, SIAM Journal on Scientific Computing, 41(5), pp. 409–432, 2019.
- A. Montoison and D. Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 1145–1166, 2020.
Krylov.usymqr! — Function

```julia
workspace = usymqr!(workspace::UsymqrWorkspace, A, b, c; kwargs...)
workspace = usymqr!(workspace::UsymqrWorkspace, A, b, c, x0; kwargs...)
```

In these calls, kwargs are keyword arguments of usymqr.
See UsymqrWorkspace for instructions on how to create the workspace.
For a more generic interface, you can use krylov_workspace with method = :usymqr to allocate the workspace, and krylov_solve! to run the Krylov method in-place.