Reusing factorizations across hyperparameters
Motivation
Most interesting GMRF workflows don't stop at a single evaluation. Bayesian hyperparameter inference (INLA, TMB-style marginal-likelihood optimization, HMC over hyperparameters) and MAP fitting all involve inner loops that repeatedly evaluate the same GMRF at many hyperparameter settings — each time computing quantities like logpdf, var, or a Gaussian approximation of a non-Gaussian posterior.
A fresh GMRF pays the full symbolic Cholesky cost on every construction — fill-reducing reordering, elimination tree, supernode layout — even when the sparsity pattern is identical across iterations and only the numeric values change. For a typical inner loop that touches the prior hundreds or thousands of times, that overhead adds up.
The GMRFWorkspace / WorkspaceGMRF machinery lets you pay the symbolic cost once and reuse it. This tutorial walks through a minimal example.
Setup
We use a Besag spatial model on a 100 × 100 grid (10 000 nodes) — a scale where symbolic factorization is meaningfully expensive but the example still runs quickly. A 5-point stencil graph gives realistic fill-in during Cholesky factorization.
using GaussianMarkovRandomFields
using Distributions: logpdf
using SparseArrays
using Random
using Printf
Random.seed!(42)
Build a 4-neighbor adjacency matrix for an m × n grid.
function grid_adjacency(m::Int, n::Int)
N = m * n
I = Int[]
J = Int[]
for j in 1:n, i in 1:m
idx = (j - 1) * m + i
if i < m
push!(I, idx); push!(J, idx + 1) # vertical neighbor
end
if j < n
push!(I, idx); push!(J, idx + m) # horizontal neighbor
end
end
return sparse([I; J], [J; I], 1.0, N, N)
end
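A quick sanity check on a tiny grid makes the structure concrete: each node's degree equals its number of grid neighbors (this snippet is illustrative and uses only the function defined above).
W_small = grid_adjacency(2, 3)  # 2 × 3 grid: corner nodes have 2 neighbors, edge nodes 3
sum(W_small; dims = 1)          # node degrees: [2 2 3 3 2 2]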
m_grid = 100
W = grid_adjacency(m_grid, m_grid)
N = size(W, 1)
model = BesagModel(W)
We'll evaluate logpdf at a grid of hyperparameter values, simulating the inner loop of hyperparameter inference.
θ_grid = [(; τ = τ) for τ in range(0.5, 2.0; length = 50)]
Use a point satisfying the Besag sum-to-zero constraint.
z = randn(N)
z .-= sum(z) / N
Baseline: fresh GMRF per iteration
The baseline constructs a fresh GMRF for each hyperparameter setting. Each iteration runs the full symbolic + numeric factorization pipeline.
function loop_cold(model, θ_grid, z)
s = 0.0
for θ in θ_grid
gmrf = model(; θ...)
s += logpdf(gmrf, z)
end
return s
end
loop_cold(model, θ_grid[1:2], z) # warmup for compilation
s_cold = loop_cold(model, θ_grid, z)
t_cold = minimum(@elapsed(loop_cold(model, θ_grid, z)) for _ in 1:3)
@printf("Cold path: %.3f s (%d evaluations, best of 3)\n", t_cold, length(θ_grid))Cold path: 0.506 s (50 evaluations, best of 3)Warm path: one workspace, reused
With make_workspace, the symbolic analysis is done once. Each subsequent call to model(ws; θ...) just writes new numeric values into the workspace's precision buffer and re-runs the numeric factorization — reusing the same fill-reducing reordering and supernode structure.
function loop_warm(model, θ_grid, z)
ws = make_workspace(model; θ_grid[1]...)
s = 0.0
for θ in θ_grid
gmrf = model(ws; θ...)
s += logpdf(gmrf, z)
end
return s
end
loop_warm(model, θ_grid[1:2], z) # warmup for compilation
s_warm = loop_warm(model, θ_grid, z)
t_warm = minimum(@elapsed(loop_warm(model, θ_grid, z)) for _ in 1:3)
@printf("Warm path: %.3f s (%d evaluations, best of 3)\n", t_warm, length(θ_grid))Warm path: 0.324 s (50 evaluations, best of 3)Both loops compute the same quantity:
@assert s_cold ≈ s_warm
And the speedup:
@printf("Speedup: %.2fx\n", t_cold / t_warm)Speedup: 1.56xWhen this matters
The workspace pattern wins whenever:
You touch a GMRF many times with a fixed sparsity pattern — INLA θ optimization, MAP descent (sketched below), HMC/NUTS over hyperparameters, cross-validation grids, posterior visualization at many θ values.
The symbolic factorization cost is non-negligible relative to the numeric cost — typically true once n gets into the thousands, especially for models with genuine 2D/3D fill-in rather than trivial tridiagonal structure.
It does not help for one-off GMRF construction or workflows where the sparsity pattern actually changes between iterations (e.g. adaptive mesh refinement).
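To make the first case concrete, here is a minimal sketch of a MAP-style search over τ that reuses one workspace inside the objective. The Optim.jl calls are an assumption of this sketch; any scalar optimizer fits the same pattern.
using Optim
ws = make_workspace(model; τ = 1.0)          # symbolic analysis happens once, up front
objective(τ) = -logpdf(model(ws; τ = τ), z)  # each call takes the warm path: numeric refactorization only
res = optimize(objective, 0.5, 2.0)          # univariate Brent's method on [0.5, 2.0]
τ̂ = Optim.minimizer(res)
The key design point: the workspace lives outside the closure, so every optimizer step reuses the same fill-reducing reordering and supernode structure.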
Parallel inner loops
For multi-threaded inner loops, use make_workspace_pool and the with_workspace RAII helper for a task-safe version of the same pattern. See the Workspaces reference for details — including the CHOLMOD-serializes caveat and the CliqueTreesBackend alternative for actual parallel factorization.
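A rough sketch of the threaded pattern follows. The exact signatures of make_workspace_pool and with_workspace are assumptions here; treat the Workspaces reference as authoritative.
pool = make_workspace_pool(model; θ_grid[1]...)  # assumed: built like a single workspace
logpdfs = Vector{Float64}(undef, length(θ_grid))
Threads.@threads for i in eachindex(θ_grid)
    with_workspace(pool) do ws                   # assumed: RAII checkout of a task-local workspace
        logpdfs[i] = logpdf(model(ws; θ_grid[i]...), z)
    end
end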
This page was generated using Literate.jl.