MatrixPolynomials.jl


This package aids in the computation of the action of a matrix polynomial on a vector, i.e. p(A)v, where A is a (square) matrix (or a linear operator) that is supplied to the polynomial p. The matrix polynomial p(A) is never formed explicitly; instead, only its action on v is evaluated. This is commonly used in time-stepping algorithms for ordinary differential equations (ODEs) and discretized partial differential equations (PDEs), where p is an approximation of the exponential function (or the related φ functions: φ₀(z) = exp(z), φₖ₊₁(z) = [φₖ(z) − φₖ(0)]/z, φₖ(0) = 1/k!) on the field-of-values of the matrix A, which for the methods in this package needs to be known beforehand.
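
To make the definition of the φ functions concrete, here is a minimal sketch in plain Julia (not part of this package's API) that evaluates φₖ(z) for a scalar argument directly from the recurrence above. Note that this naive recurrence suffers from cancellation for small |z| and is shown only for illustration.

```julia
# φ₀(z) = exp(z), φₖ₊₁(z) = (φₖ(z) - φₖ(0))/z, with φₖ(0) = 1/k!
# Naive scalar evaluation of the recurrence; unstable for small |z|.
function phi(k::Integer, z::Number)
    p = exp(z)                          # φ₀(z)
    for j = 0:k-1
        p = (p - 1/factorial(j)) / z    # φⱼ₊₁(z) = (φⱼ(z) - φⱼ(0))/z
    end
    p
end

phi(0, 1.0)  # ≈ ℯ        ≈ 2.71828…
phi(1, 1.0)  # ≈ ℯ - 1    ≈ 1.71828…
phi(2, 1.0)  # ≈ ℯ - 2    ≈ 0.71828…
```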

Alternatives

Other packages with similar goals, but based instead on matrix polynomials found via Krylov iterations, are:

Krylov iterations do not need to know the field-of-values of the matrix A beforehand; instead, an orthogonal basis is built up on-the-fly by repeated action of A on test vectors: Aⁿv. This process is, however, very sensitive to the condition number of A, something that can be alleviated by iterating with a shifted and inverted matrix instead: (A − σI)⁻¹ (rational Krylov). Not all matrices/linear operators are easily inverted/factorized, however.
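
For illustration only, here is a minimal sketch (plain Julia, not using this package; `arnoldi_expv` is a hypothetical helper) of the Krylov/Arnoldi process described above: an orthonormal basis is built up by repeated action of A on the previous basis vector, and exp(A)v is then approximated from the small upper Hessenberg projection.

```julia
using LinearAlgebra

# m steps of Arnoldi: build an orthonormal Krylov basis V and the upper
# Hessenberg projection H, then approximate exp(A)v ≈ β V_m exp(H_m) e₁.
function arnoldi_expv(A, v::AbstractVector, m::Integer)
    n = length(v)
    V = zeros(eltype(v), n, m+1)      # orthonormal Krylov basis
    H = zeros(eltype(v), m+1, m)      # upper Hessenberg projection of A
    β = norm(v)
    V[:,1] = v/β
    for j = 1:m
        w = A*V[:,j]                  # repeated action of A on a test vector
        for i = 1:j                   # long-term recurrence: orthogonalize
            H[i,j] = dot(V[:,i], w)   # against *all* previous basis vectors
            w -= H[i,j]*V[:,i]        # (inner products + vector updates)
        end
        H[j+1,j] = norm(w)
        V[:,j+1] = w/H[j+1,j]
    end
    β * V[:,1:m] * exp(H[1:m,1:m])[:,1]   # subspace approximation
end

# Small demonstration on a well-behaved (negative-definite) test matrix:
A = randn(50,50); A = (A + A')/10 - 5I
v = normalize(randn(50))
arnoldi_expv(A, v, 30) ≈ exp(A)*v   # agrees closely for this easy example
```

A rational Krylov variant would replace the matrix–vector product `A*V[:,j]` with a solve against the shifted matrix, e.g. reusing a precomputed factorization of A − σI, which is exactly the step that becomes problematic when A cannot easily be factorized.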

Moreover, the Krylov iterations for general matrices (then called Arnoldi iterations) require long-term recurrences with mutual orthogonalization and inner products, all of which can be costly to compute. Finally, a subspace approximation of the polynomial p of an upper Hessenberg matrix needs to be computed. The real-symmetric/complex-Hermitian case (Lanczos iterations) reduces to three-term recurrences and a tridiagonal subspace matrix. In contrast, the polynomial methods of this package use two-term recurrences only, no orthogonalization (and hence no inner products), and no evaluation of the polynomial on a subspace matrix. This could potentially make the methods easier to implement on a GPU.
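
To illustrate the contrast, here is a minimal sketch (again plain Julia, not this package's API) of evaluating p(A)v with a two-term recurrence for a polynomial given in Newton form, p(z) = d₁ + d₂(z − ξ₁) + d₃(z − ξ₁)(z − ξ₂) + …; the interpolation points ξ and divided differences d are assumed to be supplied, e.g. from an interpolation of the exponential on the field-of-values of A. Note that there is no orthogonalization, no inner product, and no subspace matrix — only matrix–vector products and vector updates.

```julia
# Evaluate p(A)v for a Newton-form polynomial with interpolation points ξ
# and divided differences d (both assumed given).
function newton_polynomial_action(A, v::AbstractVector,
                                  ξ::AbstractVector, d::AbstractVector)
    w = copy(v)           # running product vector (A - ξ₁I)⋯(A - ξₖI)v
    p = d[1]*w            # accumulated result p(A)v
    for k = 1:length(d)-1
        w = A*w - ξ[k]*w  # two-term update of the product vector
        p += d[k+1]*w     # accumulate the next Newton term
    end
    p
end

# Quick check: ξ = [0, 0], d = [0, 0, 1] encodes p(z) = z², so p(A)v = A²v.
A = randn(6,6); v = randn(6)
newton_polynomial_action(A, v, [0.0, 0.0], [0.0, 0.0, 1.0]) ≈ A^2*v
```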
