package scipy

val get_py : string -> Py.Object.t

Get an attribute of this module as a Py.Object.t. This is useful to pass a Python function to another function.

module IterInv : sig ... end
module IterOpInv : sig ... end
module LuInv : sig ... end
module ReentrancyLock : sig ... end
module SpLuInv : sig ... end
val aslinearoperator : Py.Object.t -> Py.Object.t

Return A as a LinearOperator.

'A' may be any of the following types:

  • ndarray
  • matrix
  • sparse matrix (e.g. csr_matrix, lil_matrix, etc.)
  • LinearOperator
  • An object with .shape and .matvec attributes

See the LinearOperator documentation for additional information.

Notes ----- If 'A' has no .dtype attribute, the data type is determined by calling :func:`LinearOperator.matvec()` - set the .dtype attribute to prevent this call upon the linear operator creation.

Examples --------
>>> from scipy.sparse.linalg import aslinearoperator
>>> M = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int32)
>>> aslinearoperator(M)
<2x3 MatrixLinearOperator with dtype=int32>
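The duck-typing case in the list above (an object with ``.shape`` and ``.matvec``) and the ``.dtype`` remark from the Notes can be illustrated with a small sketch; the ``Shift`` class here is purely hypothetical and not part of the original docstring:

>>> class Shift:
...     shape = (3, 3)
...     dtype = np.dtype(np.float64)  # set explicitly so matvec is not called just to probe the dtype
...     def matvec(self, x):
...         return 2.0 * x            # acts like 2 * identity
>>> op = aslinearoperator(Shift())
>>> op.matvec(np.ones(3))
array([2., 2., 2.])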

val choose_ncv : Py.Object.t -> Py.Object.t

Choose number of lanczos vectors based on target number of singular/eigen values and vectors to compute, k.

val eig : ?b:[> `Ndarray ] Np.Obj.t -> ?left:bool -> ?right:bool -> ?overwrite_a:bool -> ?overwrite_b:bool -> ?check_finite:bool -> ?homogeneous_eigvals:bool -> a:[> `Ndarray ] Np.Obj.t -> unit -> Py.Object.t * Py.Object.t * Py.Object.t

Solve an ordinary or generalized eigenvalue problem of a square matrix.

Find eigenvalues w and right or left eigenvectors of a general matrix::

a   vr[:,i] = w[i]        b   vr[:,i]
a.H vl[:,i] = w[i].conj() b.H vl[:,i]

where ``.H`` is the Hermitian conjugation.

Parameters ---------- a : (M, M) array_like A complex or real matrix whose eigenvalues and eigenvectors will be computed. b : (M, M) array_like, optional Right-hand side matrix in a generalized eigenvalue problem. Default is None, identity matrix is assumed. left : bool, optional Whether to calculate and return left eigenvectors. Default is False. right : bool, optional Whether to calculate and return right eigenvectors. Default is True. overwrite_a : bool, optional Whether to overwrite `a`; may improve performance. Default is False. overwrite_b : bool, optional Whether to overwrite `b`; may improve performance. Default is False. check_finite : bool, optional Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs. homogeneous_eigvals : bool, optional If True, return the eigenvalues in homogeneous coordinates. In this case ``w`` is a (2, M) array so that::

w[1,i] a vr[:,i] = w[0,i] b vr[:,i]

Default is False.

Returns ------- w : (M,) or (2, M) double or complex ndarray The eigenvalues, each repeated according to its multiplicity. The shape is (M,) unless ``homogeneous_eigvals=True``. vl : (M, M) double or complex ndarray The normalized left eigenvector corresponding to the eigenvalue ``w[i]`` is the column ``vl[:,i]``. Only returned if ``left=True``. vr : (M, M) double or complex ndarray The normalized right eigenvector corresponding to the eigenvalue ``w[i]`` is the column ``vr[:,i]``. Only returned if ``right=True``.

Raises ------ LinAlgError If eigenvalue computation does not converge.

See Also -------- eigvals : eigenvalues of general arrays eigh : Eigenvalues and right eigenvectors for symmetric/Hermitian arrays. eig_banded : eigenvalues and right eigenvectors for symmetric/Hermitian band matrices eigh_tridiagonal : eigenvalues and right eigenvectors for symmetric/Hermitian tridiagonal matrices

Examples --------
>>> from scipy import linalg
>>> a = np.array([[0., -1.], [1., 0.]])
>>> linalg.eigvals(a)
array([0.+1.j, 0.-1.j])

>>> b = np.array([[0., 1.], [1., 1.]])
>>> linalg.eigvals(a, b)
array([ 1.+0.j, -1.+0.j])

>>> a = np.array([[3., 0., 0.], [0., 8., 0.], [0., 0., 7.]])
>>> linalg.eigvals(a, homogeneous_eigvals=True)
array([[3.+0.j, 8.+0.j, 7.+0.j],
       [1.+0.j, 1.+0.j, 1.+0.j]])

>>> a = np.array([[0., -1.], [1., 0.]])
>>> linalg.eigvals(a) == linalg.eig(a)[0]
array([ True,  True])
>>> linalg.eig(a, left=True, right=False)[1] # normalized left eigenvector
array([[-0.70710678+0.j        , -0.70710678-0.j        ],
       [-0.        +0.70710678j, -0.        -0.70710678j]])
>>> linalg.eig(a, left=False, right=True)[1] # normalized right eigenvector
array([[0.70710678+0.j        , 0.70710678-0.j        ],
       [0.        -0.70710678j, 0.        +0.70710678j]])

val eigh : ?b:[> `Ndarray ] Np.Obj.t -> ?lower:bool -> ?eigvals_only:bool -> ?overwrite_a:bool -> ?overwrite_b:bool -> ?turbo:bool -> ?eigvals:Py.Object.t -> ?type_:int -> ?check_finite:bool -> ?subset_by_index:[> `Ndarray ] Np.Obj.t -> ?subset_by_value:[> `Ndarray ] Np.Obj.t -> ?driver:string -> a:[> `Ndarray ] Np.Obj.t -> unit -> [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t * [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t

Solve a standard or generalized eigenvalue problem for a complex Hermitian or real symmetric matrix.

Find eigenvalues array ``w`` and optionally eigenvectors array ``v`` of array ``a``, where ``b`` is positive definite such that for every eigenvalue λ (i-th entry of w) and its eigenvector ``vi`` (i-th column of ``v``) satisfies::

              a @ vi = λ * b @ vi
vi.conj().T @ a @ vi = λ
vi.conj().T @ b @ vi = 1

In the standard problem, ``b`` is assumed to be the identity matrix.

Parameters ---------- a : (M, M) array_like A complex Hermitian or real symmetric matrix whose eigenvalues and eigenvectors will be computed. b : (M, M) array_like, optional A complex Hermitian or real symmetric positive definite matrix. If omitted, identity matrix is assumed. lower : bool, optional Whether the pertinent array data is taken from the lower or upper triangle of ``a`` and, if applicable, ``b``. (Default: lower) eigvals_only : bool, optional Whether to calculate only eigenvalues and no eigenvectors. (Default: both are calculated) subset_by_index : iterable, optional If provided, this two-element iterable defines the start and the end indices of the desired eigenvalues (ascending order and 0-indexed). To return only the second smallest to fifth smallest eigenvalues, ``[1, 4]`` is used. ``[n-3, n-1]`` returns the largest three. Only available with 'evr', 'evx', and 'gvx' drivers. The entries are directly converted to integers via ``int()``. subset_by_value : iterable, optional If provided, this two-element iterable defines the half-open interval ``(a, b]`` that, if any, only the eigenvalues between these values are returned. Only available with 'evr', 'evx', and 'gvx' drivers. Use ``np.inf`` for the unconstrained ends. driver : str, optional Defines which LAPACK driver should be used. Valid options are 'ev', 'evd', 'evr', 'evx' for standard problems and 'gv', 'gvd', 'gvx' for generalized (where b is not None) problems. See the Notes section. type : int, optional For the generalized problems, this keyword specifies the problem type to be solved for ``w`` and ``v`` (only takes 1, 2, 3 as possible inputs)::

1 =>     a @ v = w @ b @ v
2 => a @ b @ v = w @ v
3 => b @ a @ v = w @ v

This keyword is ignored for standard problems. overwrite_a : bool, optional Whether to overwrite data in ``a`` (may improve performance). Default is False. overwrite_b : bool, optional Whether to overwrite data in ``b`` (may improve performance). Default is False. check_finite : bool, optional Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs. turbo : bool, optional *Deprecated since v1.5.0, use ``driver=gvd`` keyword instead*. Use divide and conquer algorithm (faster but expensive in memory, only for generalized eigenvalue problem and if full set of eigenvalues are requested.). Has no significant effect if eigenvectors are not requested. eigvals : tuple (lo, hi), optional *Deprecated since v1.5.0, use ``subset_by_index`` keyword instead*. Indexes of the smallest and largest (in ascending order) eigenvalues and corresponding eigenvectors to be returned: 0 <= lo <= hi <= M-1. If omitted, all eigenvalues and eigenvectors are returned.

Returns ------- w : (N,) ndarray The N (1<=N<=M) selected eigenvalues, in ascending order, each repeated according to its multiplicity. v : (M, N) ndarray (if ``eigvals_only == False``)

Raises ------ LinAlgError If eigenvalue computation does not converge, an error occurred, or b matrix is not definite positive. Note that if input matrices are not symmetric or Hermitian, no error will be reported but results will be wrong.

See Also -------- eigvalsh : eigenvalues of symmetric or Hermitian arrays eig : eigenvalues and right eigenvectors for non-symmetric arrays eigh_tridiagonal : eigenvalues and right eigenvectors for symmetric/Hermitian tridiagonal matrices

Notes ----- This function does not check the input array for being hermitian/symmetric in order to allow for representing arrays with only their upper/lower triangular parts. Also, note that even though not taken into account, the finiteness check applies to the whole array and is unaffected by the 'lower' keyword.

This function uses LAPACK drivers for computations in all possible keyword combinations, prefixed with ``sy`` if arrays are real and ``he`` if complex, e.g., a float array with 'evr' driver is solved via 'syevr', complex arrays with 'gvx' driver problem is solved via 'hegvx' etc.

As a brief summary, the slowest and the most robust driver is the classical ``<sy/he>ev`` which uses symmetric QR. ``<sy/he>evr`` is seen as the optimal choice for the most general cases. However, there are certain occasions that ``<sy/he>evd`` computes faster at the expense of more memory usage. ``<sy/he>evx``, while still being faster than ``<sy/he>ev``, often performs worse than the rest except when very few eigenvalues are requested for large arrays though there is still no performance guarantee.

For the generalized problem, normalization with respect to the given type argument::

type 1 and 3 :      v.conj().T @ a @ v = w
type 2       : inv(v).conj().T @ a @ inv(v) = w

type 1 or 2  :      v.conj().T @ b @ v  = I
type 3       :      v.conj().T @ inv(b) @ v = I

Examples --------
>>> from scipy.linalg import eigh
>>> A = np.array([[6, 3, 1, 5], [3, 0, 5, 1], [1, 5, 6, 2], [5, 1, 2, 2]])
>>> w, v = eigh(A)
>>> np.allclose(A @ v - v @ np.diag(w), np.zeros((4, 4)))
True

Request only the eigenvalues

>>> w = eigh(A, eigvals_only=True)

Request eigenvalues that are less than 10.

>>> A = np.array([[34, -4, -10, -7, 2],
...               [-4, 7, 2, 12, 0],
...               [-10, 2, 44, 2, -19],
...               [-7, 12, 2, 79, -34],
...               [2, 0, -19, -34, 29]])
>>> eigh(A, eigvals_only=True, subset_by_value=[-np.inf, 10])
array([6.69199443e-07, 9.11938152e+00])

Request the second smallest eigenvalue and its eigenvector

>>> w, v = eigh(A, subset_by_index=[1, 1])
>>> w
array([9.11938152])
>>> v.shape  # only a single column is returned
(5, 1)
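Continuing with the same imports, a generalized problem (``b`` not None) can be checked against the type 1 normalization stated in the Notes; this sketch is an added illustration rather than part of the original docstring, with ``A`` symmetric and ``B`` symmetric positive definite chosen arbitrarily:

>>> A = np.array([[6., 3., 1.], [3., 0., 5.], [1., 5., 6.]])
>>> B = np.diag([2., 3., 4.])
>>> w, v = eigh(A, B)
>>> np.allclose(A @ v, B @ v @ np.diag(w))
True
>>> np.allclose(v.conj().T @ B @ v, np.eye(3))
True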

val eigs : ?k:int -> ?m:[ `LinearOperator of Py.Object.t | `Arr of [> `ArrayLike ] Np.Obj.t ] -> ?sigma:Py.Object.t -> ?which:[ `LM | `SM | `LR | `SR | `LI | `SI ] -> ?v0:[> `Ndarray ] Np.Obj.t -> ?ncv:int -> ?maxiter:int -> ?tol:float -> ?return_eigenvectors:bool -> ?minv:[ `LinearOperator of Py.Object.t | `Arr of [> `ArrayLike ] Np.Obj.t ] -> ?oPinv:[ `LinearOperator of Py.Object.t | `Arr of [> `ArrayLike ] Np.Obj.t ] -> ?oPpart:Py.Object.t -> a:[ `Arr of [> `ArrayLike ] Np.Obj.t | `LinearOperator of Py.Object.t ] -> unit -> [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t * [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t

Find k eigenvalues and eigenvectors of the square matrix A.

Solves ``A * x[i] = w[i] * x[i]``, the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].

If M is specified, solves ``A * x[i] = w[i] * M * x[i]``, the generalized eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].

Parameters ---------- A : ndarray, sparse matrix or LinearOperator An array, sparse matrix, or LinearOperator representing the operation ``A * x``, where A is a real or complex square matrix. k : int, optional The number of eigenvalues and eigenvectors desired. `k` must be smaller than N-1. It is not possible to compute all eigenvectors of a matrix. M : ndarray, sparse matrix or LinearOperator, optional An array, sparse matrix, or LinearOperator representing the operation M*x for the generalized eigenvalue problem

A * x = w * M * x.

M must represent a real, symmetric matrix if A is real, and must represent a complex, hermitian matrix if A is complex. For best results, the data type of M should be the same as that of A. Additionally:

If `sigma` is None, M is positive definite

If sigma is specified, M is positive semi-definite

If sigma is None, eigs requires an operator to compute the solution of the linear equation ``M * x = b``. This is done internally via a (sparse) LU decomposition for an explicit matrix M, or via an iterative solver for a general linear operator. Alternatively, the user can supply the matrix or operator Minv, which gives ``x = Minv * b = M^-1 * b``. sigma : real or complex, optional Find eigenvalues near sigma using shift-invert mode. This requires an operator to compute the solution of the linear system ``[A - sigma * M] * x = b``, where M is the identity matrix if unspecified. This is computed internally via a (sparse) LU decomposition for explicit matrices A & M, or via an iterative solver if either A or M is a general linear operator. Alternatively, the user can supply the matrix or operator OPinv, which gives ``x = OPinv * b = [A - sigma * M]^-1 * b``. For a real matrix A, shift-invert can either be done in imaginary mode or real mode, specified by the parameter OPpart ('r' or 'i'). Note that when sigma is specified, the keyword 'which' (below) refers to the shifted eigenvalues ``w'[i]`` where:

If A is real and OPpart == 'r' (default), ``w'[i] = 1/2 * [1/(w[i]-sigma) + 1/(w[i]-conj(sigma))]``.

If A is real and OPpart == 'i', ``w'[i] = 1/2i * [1/(w[i]-sigma) - 1/(w[i]-conj(sigma))]``.

If A is complex, ``w'[i] = 1/(w[i]-sigma)``.

v0 : ndarray, optional Starting vector for iteration. Default: random ncv : int, optional The number of Lanczos vectors generated `ncv` must be greater than `k`; it is recommended that ``ncv > 2*k``. Default: ``min(n, max(2*k + 1, 20))`` which : str, 'LM' | 'SM' | 'LR' | 'SR' | 'LI' | 'SI', optional Which `k` eigenvectors and eigenvalues to find:

'LM' : largest magnitude

'SM' : smallest magnitude

'LR' : largest real part

'SR' : smallest real part

'LI' : largest imaginary part

'SI' : smallest imaginary part

When sigma != None, 'which' refers to the shifted eigenvalues w'i (see discussion in 'sigma', above). ARPACK is generally better at finding large values than small values. If small eigenvalues are desired, consider using shift-invert mode for better performance. maxiter : int, optional Maximum number of Arnoldi update iterations allowed Default: ``n*10`` tol : float, optional Relative accuracy for eigenvalues (stopping criterion) The default value of 0 implies machine precision. return_eigenvectors : bool, optional Return eigenvectors (True) in addition to eigenvalues Minv : ndarray, sparse matrix or LinearOperator, optional See notes in M, above. OPinv : ndarray, sparse matrix or LinearOperator, optional See notes in sigma, above. OPpart : 'r' or 'i', optional See notes in sigma, above

Returns ------- w : ndarray Array of k eigenvalues. v : ndarray An array of `k` eigenvectors. ``v[:, i]`` is the eigenvector corresponding to the eigenvalue ``w[i]``.

Raises ------ ArpackNoConvergence When the requested convergence is not obtained. The currently converged eigenvalues and eigenvectors can be found as ``eigenvalues`` and ``eigenvectors`` attributes of the exception object.

See Also -------- eigsh : eigenvalues and eigenvectors for symmetric matrix A svds : singular value decomposition for a matrix A

Notes ----- This function is a wrapper to the ARPACK [1]_ SNEUPD, DNEUPD, CNEUPD, ZNEUPD functions which use the Implicitly Restarted Arnoldi Method to find the eigenvalues and eigenvectors [2]_.

References ----------
.. [1] ARPACK Software, http://www.caam.rice.edu/software/ARPACK/
.. [2] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK USERS GUIDE: Solution of Large Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods. SIAM, Philadelphia, PA, 1998.

Examples -------- Find 6 eigenvectors of the identity matrix:

>>> from scipy.sparse.linalg import eigs
>>> id = np.eye(13)
>>> vals, vecs = eigs(id, k=6)
>>> vals
array([ 1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j])
>>> vecs.shape
(13, 6)
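The shift-invert mode described under `sigma` can be sketched as follows; this is an added illustration, not part of the original docstring. With a diagonal matrix whose eigenvalues are 1 through 100, asking for the eigenvalues nearest ``sigma=50.2`` should return 49, 50 and 51:

>>> from scipy.sparse import diags
>>> A = diags(np.arange(1.0, 101.0)).tocsc()
>>> vals, vecs = eigs(A, k=3, sigma=50.2)
>>> np.allclose(np.sort(vals.real), [49., 50., 51.])
True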

val eigsh : ?k:int -> ?m:Py.Object.t -> ?sigma:Py.Object.t -> ?which:Py.Object.t -> ?v0:Py.Object.t -> ?ncv:Py.Object.t -> ?maxiter:Py.Object.t -> ?tol:Py.Object.t -> ?return_eigenvectors:Py.Object.t -> ?minv:Py.Object.t -> ?oPinv:Py.Object.t -> ?mode:Py.Object.t -> a:[ `Arr of [> `ArrayLike ] Np.Obj.t | `LinearOperator of Py.Object.t ] -> unit -> [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t * [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t

Find k eigenvalues and eigenvectors of the real symmetric square matrix or complex hermitian matrix A.

Solves ``A * x[i] = w[i] * x[i]``, the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].

If M is specified, solves ``A * x[i] = w[i] * M * x[i]``, the generalized eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].

Parameters ---------- A : ndarray, sparse matrix or LinearOperator A square operator representing the operation ``A * x``, where ``A`` is real symmetric or complex hermitian. For buckling mode (see below) ``A`` must additionally be positive-definite. k : int, optional The number of eigenvalues and eigenvectors desired. `k` must be smaller than N. It is not possible to compute all eigenvectors of a matrix.

Returns ------- w : array Array of k eigenvalues. v : array An array representing the `k` eigenvectors. The column ``v[:, i]`` is the eigenvector corresponding to the eigenvalue ``w[i]``.

Other Parameters ---------------- M : An N x N matrix, array, sparse matrix, or linear operator representing the operation ``M @ x`` for the generalized eigenvalue problem

A @ x = w * M @ x.

M must represent a real, symmetric matrix if A is real, and must represent a complex, hermitian matrix if A is complex. For best results, the data type of M should be the same as that of A. Additionally:

If sigma is None, M is symmetric positive definite.

If sigma is specified, M is symmetric positive semi-definite.

In buckling mode, M is symmetric indefinite.

If sigma is None, eigsh requires an operator to compute the solution of the linear equation ``M @ x = b``. This is done internally via a (sparse) LU decomposition for an explicit matrix M, or via an iterative solver for a general linear operator. Alternatively, the user can supply the matrix or operator Minv, which gives ``x = Minv @ b = M^-1 @ b``. sigma : real Find eigenvalues near sigma using shift-invert mode. This requires an operator to compute the solution of the linear system ``[A - sigma * M] x = b``, where M is the identity matrix if unspecified. This is computed internally via a (sparse) LU decomposition for explicit matrices A & M, or via an iterative solver if either A or M is a general linear operator. Alternatively, the user can supply the matrix or operator OPinv, which gives ``x = OPinv @ b = [A - sigma * M]^-1 @ b``. Note that when sigma is specified, the keyword 'which' refers to the shifted eigenvalues ``w'[i]`` where:

if mode == 'normal', ``w'[i] = 1 / (w[i] - sigma)``.

if mode == 'cayley', ``w'[i] = (w[i] + sigma) / (w[i] - sigma)``.

if mode == 'buckling', ``w'[i] = w[i] / (w[i] - sigma)``.

(see further discussion in 'mode' below) v0 : ndarray, optional Starting vector for iteration. Default: random ncv : int, optional The number of Lanczos vectors generated ncv must be greater than k and smaller than n; it is recommended that ``ncv > 2*k``. Default: ``min(n, max(2*k + 1, 20))`` which : str 'LM' | 'SM' | 'LA' | 'SA' | 'BE' If A is a complex hermitian matrix, 'BE' is invalid. Which `k` eigenvectors and eigenvalues to find:

'LM' : Largest (in magnitude) eigenvalues.

'SM' : Smallest (in magnitude) eigenvalues.

'LA' : Largest (algebraic) eigenvalues.

'SA' : Smallest (algebraic) eigenvalues.

'BE' : Half (k/2) from each end of the spectrum.

When k is odd, return one more (k/2+1) from the high end. When sigma != None, 'which' refers to the shifted eigenvalues ``w'i`` (see discussion in 'sigma', above). ARPACK is generally better at finding large values than small values. If small eigenvalues are desired, consider using shift-invert mode for better performance. maxiter : int, optional Maximum number of Arnoldi update iterations allowed. Default: ``n*10`` tol : float Relative accuracy for eigenvalues (stopping criterion). The default value of 0 implies machine precision. Minv : N x N matrix, array, sparse matrix, or LinearOperator See notes in M, above. OPinv : N x N matrix, array, sparse matrix, or LinearOperator See notes in sigma, above. return_eigenvectors : bool Return eigenvectors (True) in addition to eigenvalues. This value determines the order in which eigenvalues are sorted. The sort order is also dependent on the `which` variable.

For which = 'LM' or 'SA': If `return_eigenvectors` is True, eigenvalues are sorted by algebraic value.

If `return_eigenvectors` is False, eigenvalues are sorted by absolute value.

For which = 'BE' or 'LA': eigenvalues are always sorted by algebraic value.

For which = 'SM': If `return_eigenvectors` is True, eigenvalues are sorted by algebraic value.

If `return_eigenvectors` is False, eigenvalues are sorted by decreasing absolute value.

mode : string ['normal' | 'buckling' | 'cayley'] Specify strategy to use for shift-invert mode. This argument applies only for real-valued A and sigma != None. For shift-invert mode, ARPACK internally solves the eigenvalue problem ``OP * x'[i] = w'[i] * B * x'[i]`` and transforms the resulting Ritz vectors x'[i] and Ritz values w'[i] into the desired eigenvectors and eigenvalues of the problem ``A * x[i] = w[i] * M * x[i]``. The modes are as follows:

'normal' : OP = [A - sigma * M]^-1 @ M, B = M, w'[i] = 1 / (w[i] - sigma)

'buckling' : OP = [A - sigma * M]^-1 @ A, B = A, w'[i] = w[i] / (w[i] - sigma)

'cayley' : OP = [A - sigma * M]^-1 @ [A + sigma * M], B = M, w'[i] = (w[i] + sigma) / (w[i] - sigma)

The choice of mode will affect which eigenvalues are selected by the keyword 'which', and can also impact the stability of convergence (see [2]_ for a discussion).

Raises ------ ArpackNoConvergence When the requested convergence is not obtained.

The currently converged eigenvalues and eigenvectors can be found as ``eigenvalues`` and ``eigenvectors`` attributes of the exception object.

See Also -------- eigs : eigenvalues and eigenvectors for a general (nonsymmetric) matrix A svds : singular value decomposition for a matrix A

Notes ----- This function is a wrapper to the ARPACK [1]_ SSEUPD and DSEUPD functions which use the Implicitly Restarted Lanczos Method to find the eigenvalues and eigenvectors [2]_.

References ----------
.. [1] ARPACK Software, http://www.caam.rice.edu/software/ARPACK/
.. [2] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK USERS GUIDE: Solution of Large Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods. SIAM, Philadelphia, PA, 1998.

Examples --------
>>> from scipy.sparse.linalg import eigsh
>>> identity = np.eye(13)
>>> eigenvalues, eigenvectors = eigsh(identity, k=6)
>>> eigenvalues
array([1., 1., 1., 1., 1., 1.])
>>> eigenvectors.shape
(13, 6)
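As an added sketch (not from the original docstring), shift-invert with ``sigma=0`` makes the default ``which='LM'`` selection pick the eigenvalues of smallest magnitude, which is the usual way to obtain the smallest eigenvalues of a large symmetric matrix:

>>> from scipy.sparse import diags
>>> A = diags(np.arange(1.0, 101.0)).tocsc()   # symmetric, eigenvalues 1..100
>>> w, v = eigsh(A, k=3, sigma=0)
>>> np.allclose(np.sort(w), [1., 2., 3.])
True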

val eye : ?n:int -> ?k:int -> ?dtype:Np.Dtype.t -> ?format:string -> m:int -> unit -> Py.Object.t

Sparse matrix with ones on diagonal

Returns a sparse (m x n) matrix where the kth diagonal is all ones and everything else is zeros.

Parameters ---------- m : int Number of rows in the matrix. n : int, optional Number of columns. Default: `m`. k : int, optional Diagonal to place ones on. Default: 0 (main diagonal). dtype : dtype, optional Data type of the matrix. format : str, optional Sparse format of the result, e.g., format='csr', etc.

Examples --------
>>> from scipy import sparse
>>> sparse.eye(3).toarray()
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])
>>> sparse.eye(3, dtype=np.int8)
<3x3 sparse matrix of type '<class 'numpy.int8'>'
    with 3 stored elements (1 diagonals) in DIAgonal format>

val get_OPinv_matvec : ?hermitian:Py.Object.t -> ?tol:Py.Object.t -> a:Py.Object.t -> m:Py.Object.t -> sigma:Py.Object.t -> unit -> Py.Object.t

None

val get_inv_matvec : ?hermitian:Py.Object.t -> ?tol:Py.Object.t -> m:Py.Object.t -> unit -> Py.Object.t

None

val gmres : ?x0:Py.Object.t -> ?tol:Py.Object.t -> ?restart:Py.Object.t -> ?maxiter:Py.Object.t -> ?m:Py.Object.t -> ?callback:Py.Object.t -> ?restrt:Py.Object.t -> ?atol:Py.Object.t -> ?callback_type:Py.Object.t -> a:[ `Spmatrix of [> `Spmatrix ] Np.Obj.t | `PyObject of Py.Object.t ] -> b:[> `Ndarray ] Np.Obj.t -> unit -> [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t * int

Use Generalized Minimal RESidual iteration to solve ``Ax = b``.

Parameters ---------- A : sparse matrix, dense matrix, LinearOperator The real or complex N-by-N matrix of the linear system. Alternatively, ``A`` can be a linear operator which can produce ``Ax`` using, e.g., ``scipy.sparse.linalg.LinearOperator``. b : array, matrix Right hand side of the linear system. Has shape (N,) or (N,1).

Returns ------- x : array, matrix The converged solution. info : int Provides convergence information:

  • 0 : successful exit
  • >0 : convergence to tolerance not achieved, number of iterations
  • <0 : illegal input or breakdown

Other parameters ---------------- x0 : array, matrix Starting guess for the solution (a vector of zeros by default). tol, atol : float, optional Tolerances for convergence, ``norm(residual) <= max(tol*norm(b), atol)``. The default for ``atol`` is ``'legacy'``, which emulates a different legacy behavior.

.. warning::

The default value for `atol` will be changed in a future release. For future compatibility, specify `atol` explicitly. restart : int, optional Number of iterations between restarts. Larger values increase iteration cost, but may be necessary for convergence. Default is 20. maxiter : int, optional Maximum number of iterations (restart cycles). Iteration will stop after maxiter steps even if the specified tolerance has not been achieved. M : sparse matrix, dense matrix, LinearOperator Inverse of the preconditioner of A. M should approximate the inverse of A and be easy to solve for (see Notes). Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance. By default, no preconditioner is used. callback : function User-supplied function to call after each iteration. It is called as `callback(args)`, where `args` are selected by `callback_type`. callback_type : 'x', 'pr_norm', 'legacy', optional Callback function argument requested:

  • ``x``: current iterate (ndarray), called on every restart
  • ``pr_norm``: relative (preconditioned) residual norm (float), called on every inner iteration
  • ``legacy`` (default): same as ``pr_norm``, but also changes the meaning of 'maxiter' to count inner iterations instead of restart cycles.

restrt : int, optional DEPRECATED - use `restart` instead.

See Also -------- LinearOperator

Notes ----- A preconditioner, P, is chosen such that P is close to A but easy to solve for. The preconditioner parameter required by this routine is ``M = P^-1``. The inverse should preferably not be calculated explicitly. Rather, use the following template to produce M::

# Construct a linear operator that computes P^-1 * x.
import scipy.sparse.linalg as spla
M_x = lambda x: spla.spsolve(P, x)
M = spla.LinearOperator((n, n), M_x)

Examples --------
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import gmres
>>> A = csc_matrix([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
>>> b = np.array([2, 4, -1], dtype=float)
>>> x, exitCode = gmres(A, b)
>>> print(exitCode)            # 0 indicates successful convergence
0
>>> np.allclose(A.dot(x), b)
True
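Following the ``M = P^-1`` template from the Notes, here is a hedged sketch (not part of the original docstring) of preconditioned GMRES; the incomplete LU factorization `spilu` and the tridiagonal test matrix are illustrative choices:

>>> import scipy.sparse as sp
>>> import scipy.sparse.linalg as spla
>>> n = 100
>>> A = sp.diags([-1., 2., -1.], [-1, 0, 1], shape=(n, n), format='csc')
>>> b = np.ones(n)
>>> ilu = spla.spilu(A)                          # incomplete LU of A, approximates P
>>> M = spla.LinearOperator((n, n), ilu.solve)   # applies P^-1 to a vector
>>> x, info = spla.gmres(A, b, M=M, atol=1e-8)
>>> info                                         # 0 indicates successful convergence
0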

val gmres_loose : a:Py.Object.t -> b:Py.Object.t -> tol:Py.Object.t -> unit -> Py.Object.t

gmres with looser termination condition.

val is_pydata_spmatrix : Py.Object.t -> Py.Object.t

Check whether object is pydata/sparse matrix, avoiding importing the module.

val isdense : Py.Object.t -> Py.Object.t

None

val issparse : Py.Object.t -> Py.Object.t

Is x of a sparse matrix type?

Parameters ---------- x object to check for being a sparse matrix

Returns ------- bool True if x is a sparse matrix, False otherwise

Notes ----- issparse and isspmatrix are aliases for the same function.

Examples --------
>>> from scipy.sparse import csr_matrix, isspmatrix
>>> isspmatrix(csr_matrix([5]))
True

>>> from scipy.sparse import isspmatrix
>>> isspmatrix(5)
False

val isspmatrix : Py.Object.t -> Py.Object.t

Is x of a sparse matrix type?

Parameters ---------- x object to check for being a sparse matrix

Returns ------- bool True if x is a sparse matrix, False otherwise

Notes ----- issparse and isspmatrix are aliases for the same function.

Examples --------
>>> from scipy.sparse import csr_matrix, isspmatrix
>>> isspmatrix(csr_matrix([5]))
True

>>> from scipy.sparse import isspmatrix
>>> isspmatrix(5)
False

val isspmatrix_csr : Py.Object.t -> Py.Object.t

Is x of csr_matrix type?

Parameters ---------- x object to check for being a csr matrix

Returns ------- bool True if x is a csr matrix, False otherwise

Examples --------
>>> from scipy.sparse import csr_matrix, isspmatrix_csr
>>> isspmatrix_csr(csr_matrix([5]))
True

>>> from scipy.sparse import csc_matrix, isspmatrix_csr
>>> isspmatrix_csr(csc_matrix([5]))
False

val lobpcg : ?b:[ `Spmatrix of [> `Spmatrix ] Np.Obj.t | `PyObject of Py.Object.t ] -> ?m:[ `Spmatrix of [> `Spmatrix ] Np.Obj.t | `PyObject of Py.Object.t ] -> ?y:[ `Ndarray of [> `Ndarray ] Np.Obj.t | `PyObject of Py.Object.t ] -> ?tol:[ `F of float | `I of int | `S of string | `Bool of bool ] -> ?maxiter:int -> ?largest:bool -> ?verbosityLevel:int -> ?retLambdaHistory:bool -> ?retResidualNormsHistory:bool -> a:[ `Spmatrix of [> `Spmatrix ] Np.Obj.t | `PyObject of Py.Object.t ] -> x:[ `Ndarray of [> `Ndarray ] Np.Obj.t | `PyObject of Py.Object.t ] -> unit -> [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t * [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t * Py.Object.t * Py.Object.t

Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG)

LOBPCG is a preconditioned eigensolver for large symmetric positive definite (SPD) generalized eigenproblems.

Parameters ---------- A : sparse matrix, dense matrix, LinearOperator The symmetric linear operator of the problem, usually a sparse matrix. Often called the 'stiffness matrix'. X : ndarray, float32 or float64 Initial approximation to the ``k`` eigenvectors (non-sparse). If `A` has ``shape=(n,n)`` then `X` should have shape ``shape=(n,k)``. B : dense matrix, sparse matrix, LinearOperator, optional The right hand side operator in a generalized eigenproblem. By default, ``B = Identity``. Often called the 'mass matrix'. M : dense matrix, sparse matrix, LinearOperator, optional Preconditioner to `A`; by default ``M = Identity``. `M` should approximate the inverse of `A`. Y : ndarray, float32 or float64, optional n-by-sizeY matrix of constraints (non-sparse), sizeY < n The iterations will be performed in the B-orthogonal complement of the column-space of Y. Y must be full rank. tol : scalar, optional Solver tolerance (stopping criterion). The default is ``tol=n*sqrt(eps)``. maxiter : int, optional Maximum number of iterations. The default is ``maxiter = 20``. largest : bool, optional When True, solve for the largest eigenvalues, otherwise the smallest. verbosityLevel : int, optional Controls solver output. The default is ``verbosityLevel=0``. retLambdaHistory : bool, optional Whether to return eigenvalue history. Default is False. retResidualNormsHistory : bool, optional Whether to return history of residual norms. Default is False.

Returns ------- w : ndarray Array of ``k`` eigenvalues v : ndarray An array of ``k`` eigenvectors. `v` has the same shape as `X`. lambdas : list of ndarray, optional The eigenvalue history, if `retLambdaHistory` is True. rnorms : list of ndarray, optional The history of residual norms, if `retResidualNormsHistory` is True.

Notes ----- If both ``retLambdaHistory`` and ``retResidualNormsHistory`` are True, the return tuple has the following format ``(lambda, V, lambda history, residual norms history)``.

In the following ``n`` denotes the matrix size and ``m`` the number of required eigenvalues (smallest or largest).

The LOBPCG code internally solves eigenproblems of the size ``3m`` on every iteration by calling the 'standard' dense eigensolver, so if ``m`` is not small enough compared to ``n``, it does not make sense to call the LOBPCG code, but rather one should use the 'standard' eigensolver, e.g. numpy or scipy function in this case. If one calls the LOBPCG algorithm for ``5m > n``, it will most likely break internally, so the code tries to call the standard function instead.

It is not that ``n`` should be large for the LOBPCG to work, but rather the ratio ``n / m`` should be large. If you call LOBPCG with ``m=1`` and ``n=10``, it works even though ``n`` is small. The method is intended for extremely large ``n / m``, see e.g., reference 28 in https://arxiv.org/abs/0705.2626

The convergence speed depends basically on two factors:

1. How well the sought eigenvalues are separated from the rest of the eigenvalues. One can try to vary ``m`` to make this better.

2. How well conditioned the problem is. This can be changed by using proper preconditioning. For example, a rod vibration test problem (under tests directory) is ill-conditioned for large ``n``, so convergence will be slow, unless efficient preconditioning is used. For this specific problem, a good simple preconditioner function would be a linear solve for `A`, which is easy to code since A is tridiagonal.

References ----------
.. [1] A. V. Knyazev (2001), Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method. SIAM Journal on Scientific Computing 23, no. 2, pp. 517-541. http://dx.doi.org/10.1137/S1064827500366124

.. [2] A. V. Knyazev, I. Lashuk, M. E. Argentati, and E. Ovchinnikov (2007), Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in hypre and PETSc. https://arxiv.org/abs/0705.2626

.. [3] A. V. Knyazev's C and MATLAB implementations: https://bitbucket.org/joseroman/blopex

Examples --------

Solve ``A x = lambda x`` with constraints and preconditioning.

>>> import numpy as np
>>> from scipy.sparse import spdiags, issparse
>>> from scipy.sparse.linalg import lobpcg, LinearOperator
>>> n = 100
>>> vals = np.arange(1, n + 1)
>>> A = spdiags(vals, 0, n, n)
>>> A.toarray()
array([[  1.,   0.,   0., ...,   0.,   0.,   0.],
       [  0.,   2.,   0., ...,   0.,   0.,   0.],
       [  0.,   0.,   3., ...,   0.,   0.,   0.],
       ...,
       [  0.,   0.,   0., ...,  98.,   0.,   0.],
       [  0.,   0.,   0., ...,   0.,  99.,   0.],
       [  0.,   0.,   0., ...,   0.,   0., 100.]])

Constraints:

>>> Y = np.eye(n, 3)

Initial guess for eigenvectors, should have linearly independent columns. Column dimension = number of requested eigenvalues.

>>> X = np.random.rand(n, 3)

The preconditioner is the inverse of A in this example:

>>> invA = spdiags(1./vals, 0, n, n)

The preconditioner must be defined by a function:

>>> def precond( x ): ... return invA @ x

The argument x of the preconditioner function is a matrix inside `lobpcg`, thus the use of matrix-matrix product ``@``.

The preconditioner function is passed to lobpcg as a `LinearOperator`:

>>> M = LinearOperator(matvec=precond, matmat=precond, ... shape=(n, n), dtype=float)

Let us now solve the eigenvalue problem for the matrix A:

>>> eigenvalues, _ = lobpcg(A, X, Y=Y, M=M, largest=False)
>>> eigenvalues
array([4., 5., 6.])

Note that the vectors passed in Y are the eigenvectors of the 3 smallest eigenvalues. The results returned are orthogonal to those.

val lu_factor : ?overwrite_a:bool -> ?check_finite:bool -> a:[> `Ndarray ] Np.Obj.t -> unit -> [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t * [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t

Compute pivoted LU decomposition of a matrix.

The decomposition is::

A = P L U

where P is a permutation matrix, L lower triangular with unit diagonal elements, and U upper triangular.

Parameters ---------- a : (M, M) array_like Matrix to decompose overwrite_a : bool, optional Whether to overwrite data in A (may increase performance) check_finite : bool, optional Whether to check that the input matrix contains only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.

Returns ------- lu : (N, N) ndarray Matrix containing U in its upper triangle, and L in its lower triangle. The unit diagonal elements of L are not stored. piv : (N,) ndarray Pivot indices representing the permutation matrix P: row i of matrix was interchanged with row piv[i].

See also -------- lu_solve : solve an equation system using the LU factorization of a matrix

Notes ----- This is a wrapper to the ``*GETRF`` routines from LAPACK.

Examples --------
>>> from scipy.linalg import lu_factor
>>> A = np.array([[2, 5, 8, 7], [5, 2, 2, 8], [7, 5, 6, 6], [5, 4, 4, 8]])
>>> lu, piv = lu_factor(A)
>>> piv
array([2, 2, 3, 3], dtype=int32)

Convert LAPACK's ``piv`` array to NumPy index and test the permutation

>>> piv_py = [2, 0, 3, 1]
>>> L, U = np.tril(lu, k=-1) + np.eye(4), np.triu(lu)
>>> np.allclose(A[piv_py] - L @ U, np.zeros((4, 4)))
True

val lu_solve : ?trans:[ `Two | `One | `Zero ] -> ?overwrite_b:bool -> ?check_finite:bool -> lu_and_piv:Py.Object.t -> b:[> `Ndarray ] Np.Obj.t -> unit -> [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t

Solve an equation system, a x = b, given the LU factorization of a

Parameters ---------- (lu, piv) Factorization of the coefficient matrix a, as given by lu_factor b : array Right-hand side trans : {0, 1, 2}, optional Type of system to solve:

=====  =========
trans  system
=====  =========
0      a x   = b
1      a^T x = b
2      a^H x = b
=====  =========

overwrite_b : bool, optional Whether to overwrite data in b (may increase performance) check_finite : bool, optional Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.

Returns ------- x : array Solution to the system

See also -------- lu_factor : LU factorize a matrix

Examples --------
>>> from scipy.linalg import lu_factor, lu_solve
>>> A = np.array([[2, 5, 8, 7], [5, 2, 2, 8], [7, 5, 6, 6], [5, 4, 4, 8]])
>>> b = np.array([1, 1, 1, 1])
>>> lu, piv = lu_factor(A)
>>> x = lu_solve((lu, piv), b)
>>> np.allclose(A @ x - b, np.zeros((4,)))
True

val splu : ?permc_spec:string -> ?diag_pivot_thresh:float -> ?relax:int -> ?panel_size:int -> ?options:Py.Object.t -> a:[> `Spmatrix ] Np.Obj.t -> unit -> Py.Object.t

Compute the LU decomposition of a sparse, square matrix.

Parameters ---------- A : sparse matrix Sparse matrix to factorize. Should be in CSR or CSC format. permc_spec : str, optional How to permute the columns of the matrix for sparsity preservation. (default: 'COLAMD')

  • ``NATURAL``: natural ordering.
  • ``MMD_ATA``: minimum degree ordering on the structure of A^T A.
  • ``MMD_AT_PLUS_A``: minimum degree ordering on the structure of A^T+A.
  • ``COLAMD``: approximate minimum degree column ordering

diag_pivot_thresh : float, optional Threshold used for a diagonal entry to be an acceptable pivot. See SuperLU user's guide for details [1]_ relax : int, optional Expert option for customizing the degree of relaxing supernodes. See SuperLU user's guide for details [1]_ panel_size : int, optional Expert option for customizing the panel size. See SuperLU user's guide for details [1]_ options : dict, optional Dictionary containing additional expert options to SuperLU. See SuperLU user guide [1]_ (section 2.4 on the 'Options' argument) for more details. For example, you can specify ``options=dict(Equil=False, IterRefine='SINGLE')`` to turn equilibration off and perform a single iterative refinement.

Returns ------- invA : scipy.sparse.linalg.SuperLU Object, which has a ``solve`` method.

See also -------- spilu : incomplete LU decomposition

Notes ----- This function uses the SuperLU library.

References ----------
.. [1] SuperLU http://crd.lbl.gov/~xiaoye/SuperLU/

Examples --------
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import splu
>>> A = csc_matrix([[1., 0., 0.], [5., 0., 2.], [0., -1., 0.]], dtype=float)
>>> B = splu(A)
>>> x = np.array([1., 2., 3.], dtype=float)
>>> B.solve(x)
array([ 1. , -3. , -1.5])
>>> A.dot(B.solve(x))
array([ 1.,  2.,  3.])
>>> B.solve(A.dot(x))
array([ 1.,  2.,  3.])

val svds : ?k:int -> ?ncv:int -> ?tol:float -> ?which:[ `LM | `SM ] -> ?v0:[> `Ndarray ] Np.Obj.t -> ?maxiter:int -> ?return_singular_vectors:[ `S of string | `Bool of bool ] -> ?solver:string -> a:[ `Spmatrix of [> `Spmatrix ] Np.Obj.t | `LinearOperator of Py.Object.t ] -> unit -> [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t * [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t * [ `ArrayLike | `Ndarray | `Object ] Np.Obj.t

Compute the largest or smallest k singular values/vectors for a sparse matrix. The order of the singular values is not guaranteed.

Parameters ---------- A : sparse matrix, LinearOperator Array to compute the SVD on, of shape (M, N) k : int, optional Number of singular values and vectors to compute. Must be 1 <= k < min(A.shape). ncv : int, optional The number of Lanczos vectors generated ncv must be greater than k+1 and smaller than n; it is recommended that ncv > 2*k Default: ``min(n, max(2*k + 1, 20))`` tol : float, optional Tolerance for singular values. Zero (default) means machine precision. which : str, 'LM' | 'SM', optional Which `k` singular values to find:

  • 'LM' : largest singular values
  • 'SM' : smallest singular values

.. versionadded:: 0.12.0 v0 : ndarray, optional Starting vector for iteration, of length min(A.shape). Should be an (approximate) left singular vector if N > M and a right singular vector otherwise. Default: random

.. versionadded:: 0.12.0 maxiter : int, optional Maximum number of iterations.

.. versionadded:: 0.12.0 return_singular_vectors : bool or str, optional

  • True: return singular vectors (True) in addition to singular values.

.. versionadded:: 0.12.0

  • 'u': only return the u matrix, without computing vh (if N > M).
  • 'vh': only return the vh matrix, without computing u (if N <= M).

.. versionadded:: 0.16.0 solver : str, optional Eigenvalue solver to use. Should be 'arpack' or 'lobpcg'. Default: 'arpack'

Returns ------- u : ndarray, shape=(M, k) Unitary matrix having left singular vectors as columns. If `return_singular_vectors` is 'vh', this variable is not computed, and None is returned instead. s : ndarray, shape=(k,) The singular values. vt : ndarray, shape=(k, N) Unitary matrix having right singular vectors as rows. If `return_singular_vectors` is 'u', this variable is not computed, and None is returned instead.

Notes ----- This is a naive implementation using ARPACK or LOBPCG as an eigensolver on A.H * A or A * A.H, depending on which one is more efficient.

Examples --------
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import svds, eigs
>>> A = csc_matrix([[1, 0, 0], [5, 0, 2], [0, -1, 0], [0, 0, 3]], dtype=float)
>>> u, s, vt = svds(A, k=2)
>>> s
array([ 2.75193379,  5.6059665 ])
>>> np.sqrt(eigs(A.dot(A.T), k=2)[0]).real
array([ 5.6059665 ,  2.75193379])
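As a quick cross-check (an added sketch continuing the example above, not part of the original docstring), the ``k`` largest singular values returned by `svds` should agree with the top of the dense SVD of the same matrix, up to ordering:

>>> s_dense = np.linalg.svd(A.toarray(), compute_uv=False)   # all singular values, descending
>>> np.allclose(np.sort(s), np.sort(s_dense)[-2:])
True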
