Solvers
Fundamentally, all interesting quantities of GMRFs (samples, marginal variances, posterior means, ...) must be computed through sparse linear algebra. GaussianMarkovRandomFields.jl provides a number of so-called solvers which perform the underlying computations in different ways and with different trade-offs. Our goal is to provide sane defaults that "just work" for most users, while still allowing power users to customize the behavior of the solvers. Read further to learn about the available solvers and how to use them.
If you don't have time for the details and you're just looking for a recommendation, skip to Choosing a solver.
If you're interested in the interface of solvers, skip to Solver interface.
Cholesky (CHOLMOD)
CHOLMOD is a state-of-the-art direct solver for sparse linear systems. CholeskySolver uses CHOLMOD to compute the sparse Cholesky factorization of the precision matrix, which it then leverages to draw samples and compute posterior means. Marginal variances are computed using a user-specified variance strategy, which defaults to a sampling-based approach.
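For illustration, a blueprint can be constructed with a custom variance strategy before being attached to a GMRF; the sample count below is illustrative, and the step of attaching the blueprint to a concrete GMRF depends on how you construct your GMRF and is not shown here:

```julia
using GaussianMarkovRandomFields

# Sampling-based variance estimation with 200 samples
# instead of the default RBMCStrategy(100).
bp = CholeskySolverBlueprint(var_strategy = RBMCStrategy(200))

# Exact (but more expensive) marginal variances via Takahashi recursions.
bp_exact = CholeskySolverBlueprint(var_strategy = TakahashiStrategy())
```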
GaussianMarkovRandomFields.CholeskySolverBlueprint — Type

CholeskySolverBlueprint(; var_strategy = RBMCStrategy(100), perm = nothing)

A blueprint for a direct solver that uses a sparse Cholesky decomposition computed through CHOLMOD.

Keyword arguments
- var_strategy::AbstractVarianceStrategy: Strategy for computing the marginal variances of the GMRF. Defaults to RBMCStrategy(100).
- perm::Union{Nothing,Vector{Int}}: Permutation / node reordering to use for the Cholesky decomposition. Defaults to nothing, which means that the solver will compute its own permutation.
Pardiso
Pardiso is a state-of-the-art direct solver for sparse linear systems. Its main benefit over the CholeskySolver is its potential for better performance on large-dimensional GMRFs, as well as its highly efficient method for marginal variance computation.
To use this solver, you need to set up and load Pardiso.jl. The code for our Pardiso solver will then be loaded automatically through a package extension.
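A minimal sketch of enabling the extension, assuming Pardiso.jl is installed and licensed:

```julia
using Pardiso                      # loading Pardiso.jl activates the package extension
using GaussianMarkovRandomFields

bp = PardisoGMRFSolverBlueprint()  # takes no configuration arguments
```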
GaussianMarkovRandomFields.PardisoGMRFSolverBlueprint — Type

PardisoGMRFSolverBlueprint

A blueprint for a direct solver that uses Pardiso internally. Highly efficient, but requires a Pardiso license.
CG
The Conjugate Gradient (CG) method is an iterative method for solving symmetric positive-definite linear systems. CGSolver uses CG to compute posterior means and draw samples.
GaussianMarkovRandomFields.CGSolverBlueprint — Type

CGSolverBlueprint(; reltol, abstol, maxiter, preconditioner_strategy, var_strategy, mean_residual_guess)

A blueprint for a conjugate gradient-based solver.

Keyword arguments
- reltol::Real = sqrt(eps(Float64)): Relative tolerance of CG.
- abstol::Real = 0.0: Absolute tolerance of CG.
- maxiter::Int = 1000: Maximum number of iterations.
- preconditioner_strategy::Function: Maps a GMRF instance to a preconditioner.
- var_strategy::AbstractVarianceStrategy: A variance strategy.
- mean_residual_guess::Union{Nothing,AbstractVector}: An initial guess for the mean residual.
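As a sketch, a CG blueprint with tightened tolerances and a sampling-based variance strategy might look as follows; the specific values here are illustrative, not recommendations:

```julia
using GaussianMarkovRandomFields

bp = CGSolverBlueprint(
    reltol = 1e-10,                    # tighter than the default sqrt(eps(Float64))
    maxiter = 2000,                    # allow more iterations than the default 1000
    var_strategy = RBMCStrategy(100),  # sampling-based marginal variances
)
```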
Variance strategies
Fundamentally, computing marginal variances of GMRFs is not trivial, as it requires computing the diagonal entries of the covariance matrix (which is the inverse of the precision matrix). The Takahashi recursions (sometimes referred to as a sparse (partial) inverse method) are a highly accurate and stable method for computing these variances. TakahashiStrategy uses the Takahashi recursions to compute marginal variances. The PardisoSolver also uses this algorithm internally, but with a highly optimized implementation.
RBMCStrategy is a sampling-based approach to computing marginal variances. It is less accurate than the Takahashi recursions, but can be much faster for large GMRFs.
GaussianMarkovRandomFields.TakahashiStrategy — Type

TakahashiStrategy()

Takahashi recursions [1] for computing the marginal variances of a GMRF. Highly accurate, but computationally expensive. Uses SparseInverseSubset.jl.
GaussianMarkovRandomFields.RBMCStrategy — Type

RBMCStrategy(n_samples; rng)

Rao-Blackwellized Monte Carlo estimator of a GMRF's marginal variances based on [3]. Particularly useful in large-scale regimes where Takahashi recursions may be too expensive.

Arguments
- n_samples::Int: Number of samples to draw.

Keyword arguments
- rng::Random.AbstractRNG = Random.default_rng(): Random number generator.
GaussianMarkovRandomFields.BlockRBMCStrategy — Type

BlockRBMCStrategy(n_samples; rng, enclosure_size)

Block Rao-Blackwellized Monte Carlo estimator of a GMRF's marginal variances based on [3]. Achieves faster convergence than plain RBMC by considering blocks of nodes rather than individual nodes, thus integrating more information about the precision matrix. enclosure_size specifies the size of these blocks. Larger values lead to faster convergence (in terms of the number of samples) at the cost of increased compute. Thus, one should aim for a sweet spot between sampling costs and block operation costs.

Arguments
- n_samples::Int: Number of samples to draw.

Keyword arguments
- rng::Random.AbstractRNG = Random.default_rng(): Random number generator.
- enclosure_size::Int = 1: Size of the blocks.
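To make the trade-offs concrete, the three strategies might be instantiated as follows; the sample counts and block size are purely illustrative:

```julia
using GaussianMarkovRandomFields

exact   = TakahashiStrategy()                         # accurate, expensive
fast    = RBMCStrategy(100)                           # cheap, noisier
blocked = BlockRBMCStrategy(100; enclosure_size = 2)  # better per-sample accuracy,
                                                      # costlier block operations
```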
Choosing a solver
The default solver uses the CholeskySolver for small to medium-sized GMRFs and switches to the CGSolver for large GMRFs. Similarly, the default variance strategy is TakahashiStrategy for small to medium-sized GMRFs and RBMCStrategy for large GMRFs. See:
GaussianMarkovRandomFields.DefaultSolverBlueprint — Type

DefaultSolverBlueprint()

Default solver blueprint which switches from Cholesky to CG based on the size of the GMRF.
Indeed, this matches our general recommendations: Direct solvers combined with the Takahashi recursions are highly accurate and stable and should be used whenever possible. However, direct solvers become prohibitively expensive for very large-scale GMRFs, both in terms of compute and memory use. In these regimes, CG with a good preconditioner may still be a viable option.
If you have access to both strong parallel computing resources and a Pardiso license, we recommend the use of the PardisoGMRFSolver. In particular, Pardiso's Takahashi recursions are highly optimized and much faster than the implementation used for our TakahashiStrategy.
Solver interface
GaussianMarkovRandomFields.AbstractSolver — Type

AbstractSolver

An abstract type for a solver, which provides methods to compute the mean, variance, and random samples from a Gaussian Markov Random Field (GMRF).

GaussianMarkovRandomFields.AbstractSolverBlueprint — Type

AbstractSolverBlueprint

An abstract type for a blueprint to construct a solver. A blueprint contains parameters and settings for a solver which are independent of the concrete GMRF it will be used on.

GaussianMarkovRandomFields.AbstractVarianceStrategy — Type

AbstractVarianceStrategy

An abstract type for a strategy to compute the variance of a GMRF.

GaussianMarkovRandomFields.gmrf_precision — Function

gmrf_precision(s::AbstractSolver)

Return the precision map of the GMRF associated with a solver.

GaussianMarkovRandomFields.compute_mean — Function

compute_mean(s::AbstractSolver)

Compute the mean of the GMRF associated with a solver.

GaussianMarkovRandomFields.compute_variance — Function

compute_variance(s::AbstractSolver)

Compute the marginal variances of the GMRF associated with a solver.

GaussianMarkovRandomFields.compute_rand! — Function

compute_rand!(s::AbstractSolver, rng::Random.AbstractRNG, x::AbstractVector)

Generate a random sample from the GMRF associated with a solver.
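Taken together, the interface can be used as sketched below, where `solver` is assumed to be an AbstractSolver instance obtained by instantiating a blueprint for a concrete GMRF (construction not shown):

```julia
using GaussianMarkovRandomFields
using Random

# `solver` is assumed to exist; see the blueprint types above.
μ  = compute_mean(solver)       # posterior mean
σ² = compute_variance(solver)   # marginal variances (via the variance strategy)
Q  = gmrf_precision(solver)     # precision map of the underlying GMRF

x = similar(μ)
compute_rand!(solver, Random.default_rng(), x)  # draws a sample into x in place
```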