Workspaces
Workspaces let you reuse a symbolic Cholesky factorization across many precision-matrix updates with a shared sparsity pattern. This is the key optimization for inner loops that evaluate a GMRF at many hyperparameter values — INLA-style hyperparameter optimization, TMB-style marginal-likelihood maximization, HMC/NUTS over hyperparameters, or any MAP loop that repeatedly re-factors the same model with different numeric values.
Why workspaces exist
A fresh GMRF constructed from a sparse precision matrix Q pays the full symbolic Cholesky cost every time: fill-reducing reordering, elimination tree, supernode layout. For a typical n = 10^4 SPDE-discretized Matérn prior, that's milliseconds per call — negligible for one evaluation, but dominant inside an inner loop that touches the prior hundreds or thousands of times with the same sparsity pattern and only different nonzero values.
A workspace holds the symbolic analysis persistently and performs a numeric-only refactorization on each update. The symbolic factorization is computed once; subsequent updates re-run only the numeric phase.
using GaussianMarkovRandomFields
model = AR1Model(1000)
# One-time symbolic factorization, stored in the workspace
ws = make_workspace(model; τ = 1.0, ρ = 0.5)
# Inner loop: each call does numeric-only refactorization
for θ in θ_grid
    gmrf = model(ws; θ...)  # WorkspaceGMRF <: AbstractGMRF
    ℓ = logpdf(gmrf, z)
    # ...
end

See also the tutorial Reusing factorizations across hyperparameters for a concrete timing comparison.
Core types
- GMRFWorkspace — the default workspace; wraps a single sparse precision matrix and a WorkspaceBackend (CHOLMOD or CliqueTrees).
- WorkspaceGMRF — an AbstractGMRF backed by a GMRFWorkspace. Owns its own nzval snapshot and a version tag so multiple instances can safely share one workspace (switching between them triggers a numeric refactorization).
- WorkspacePool — a task-safe pool of workspaces for parallel use across Julia tasks.
Parallelism and pools
WorkspacePool hands out workspaces via Channel-based checkout/checkin, safe across tasks that may migrate threads:
pool = make_workspace_pool(model; size = Threads.nthreads())
Threads.@threads for θ in θ_grid
    with_workspace(pool) do ws
        gmrf = model(ws; θ...)
        # ...
    end
end

CHOLMOD serializes factorizations
The default CHOLMODBackend uses CHOLMOD, which holds a global lock — concurrent factorizations serialize gracefully but don't run in parallel. For actual parallel numeric factorization, build the pool with CliqueTreesBackend workspaces (pure Julia, no global state).
Extension protocol
Peer packages can provide their own workspace types for models that don't fit GMRFWorkspace's "one sparse precision, one factorization" shape — for example, a model whose joint precision is block-structured and factorized block by block rather than through a single global factor.
Experimental API (pre v0.5)
The AbstractLatentWorkspace / AbstractLatentWorkspacePool protocol is experimental. Surface may shift until validated against at least two downstream implementations.
Workspace protocol
A peer AbstractLatentWorkspace must support being passed as the second argument to the model's callable form:
struct MyPeerWorkspace <: AbstractLatentWorkspace
    # internals
end

function (m::MyLatentModel)(ws::MyPeerWorkspace; kwargs...)
    # produce an AbstractGMRF using ws, with numeric refactorization
    # instead of fresh symbolic analysis
end

GaussianMarkovRandomFields.make_workspace(m::MyLatentModel; kwargs...) =
    MyPeerWorkspace(m; kwargs...)

How the workspace detects staleness or manages sharing across multiple GMRFs referencing it is the implementation's own business. GMRFWorkspace uses integer version tags (loaded_version + WorkspaceGMRF.version), but this is not part of the protocol — peer workspaces may use a different mechanism or sidestep sharing entirely.
Pool protocol
A peer AbstractLatentWorkspacePool must implement checkout and checkin. The RAII-style with_workspace method is provided by a generic default on the abstract pool type and does not need to be reimplemented, though subtypes may override it to add logging, metrics, or alternate resource semantics.
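As a hypothetical sketch (MyPeerPool and MyPeerWorkspace are illustrative names, not part of the package), a Channel-backed peer pool only needs the two primitives and then inherits with_workspace from the generic default:

struct MyPeerPool <: AbstractLatentWorkspacePool
    slots::Channel{MyPeerWorkspace}   # pre-filled with the pool's workspaces
end

# The two required primitives; blocking take!/put! make checkout/checkin task-safe.
GaussianMarkovRandomFields.checkout(pool::MyPeerPool) = take!(pool.slots)
GaussianMarkovRandomFields.checkin(pool::MyPeerPool, ws::MyPeerWorkspace) = put!(pool.slots, ws)

# with_workspace(pool) do ws ... end now works without further code.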
API reference
Abstract types
GaussianMarkovRandomFields.AbstractLatentWorkspace Type
AbstractLatentWorkspace
Abstract supertype for workspace objects that persist factorization state across precision-matrix updates for a LatentModel.
Experimental API (pre v0.5)
The AbstractLatentWorkspace protocol is experimental. Surface may shift until validated against at least two downstream implementations.
Protocol
Concrete subtypes must support the (model::LatentModel)(ws; θ...) callable form, returning a valid AbstractGMRF. How the workspace detects staleness or manages sharing across multiple GMRFs referencing it is an implementation detail — e.g. GMRFWorkspace uses integer version tags, but peer workspaces are free to use other mechanisms.
See also: make_workspace, GMRFWorkspace.
GaussianMarkovRandomFields.AbstractLatentWorkspacePool Type
AbstractLatentWorkspacePool
Abstract supertype for task-safe pools of workspaces for use across multiple concurrent Julia tasks.
Experimental API (pre v0.5)
See AbstractLatentWorkspace for stability status.
Protocol
Concrete subtypes must implement checkout and checkin. A generic with_workspace default is provided on top of these; subtypes may override it to add logging, metrics, or alternate resource semantics.
See also: make_workspace_pool, WorkspacePool.
Factory hooks
GaussianMarkovRandomFields.make_workspace Function
make_workspace(m::LatentModel; θ_ref...) -> AbstractLatentWorkspace
Construct a workspace for evaluating m repeatedly across hyperparameter values. The returned workspace is suitable for use as the second argument to m(ws; θ...).
Default implementation: returns GMRFWorkspace(m; θ_ref...). Downstream LatentModel subtypes may override make_workspace to return a peer workspace type when their factorization structure doesn't fit GMRFWorkspace's "one sparse precision, one factorization" shape.
Experimental API (pre v0.5)
Surface may shift until validated against at least two downstream implementations.
Example
model = AR1Model(100)
ws = make_workspace(model; τ = 1.0, ρ = 0.5)
# loop:
for θ in θ_grid
    gmrf = model(ws; θ...)
    logpdf(gmrf, z)
end

GaussianMarkovRandomFields.make_workspace_pool Function
make_workspace_pool(m::LatentModel; size=Threads.nthreads(), θ_ref...) -> AbstractLatentWorkspacePool
Construct a task-safe pool of workspaces for parallel evaluation of m.
Default implementation: returns WorkspacePool(m; size=size, θ_ref...). Downstream LatentModel subtypes may override to return a peer pool type.
Experimental API (pre v0.5)
Surface may shift until validated against at least two downstream implementations.
GMRFWorkspace
GaussianMarkovRandomFields.GMRFWorkspace Type
GMRFWorkspace{T, B}
A mutable workspace that persists a symbolic Cholesky factorization across precision-matrix updates with the same sparsity pattern. This is the key optimization for INLA/TMB/HMC inner loops where the sparsity pattern is fixed but numeric values change with hyperparameters.
The workspace owns an internal Q buffer (a SparseMatrixCSC whose nzval is overwritten on updates) and a factorization backend that reuses its symbolic analysis across numeric refactorizations. The backend also owns the selected inverse cache in whatever format it prefers.
Multiple WorkspaceGMRFs can share the same workspace. A version counter tracks whose data is currently loaded — if a WorkspaceGMRF finds the workspace holds stale data, it transparently reloads and refactorizes.
Fields
- Q: Internal precision matrix buffer (mutable nzval, fixed pattern)
- backend: Factorization backend (e.g., CHOLMODBackend, CliqueTreesBackend)
- rhs, solution: Preallocated work vectors
- numeric_valid, selinv_valid, logdet_valid: Lazy invalidation flags
- logdet_cache: Cached log-determinant
- next_version: Counter for assigning unique versions to WorkspaceGMRFs
- loaded_version: Version of the WorkspaceGMRF whose data is currently factorized (0 means data was loaded directly via update_precision!, not by a WorkspaceGMRF)
GaussianMarkovRandomFields.update_precision! Function
update_precision!(ws::GMRFWorkspace, Q_new::SparseMatrixCSC)
Update the workspace's precision matrix values from Q_new. The sparsity pattern of Q_new must exactly match the workspace's pattern (same colptr and rowval). Only the numeric values (nzval) are copied.
Invalidates all cached results (factorization, selinv, logdet) and marks the workspace as not owned by any WorkspaceGMRF.
Hot-loop performance
Each call runs an O(nnz) pattern-match check. In inner loops where pattern equality is guaranteed (e.g. pulling values from a fixed-pattern buffer), prefer update_precision_values!, which takes the nzval vector directly and skips both the check and any intermediate SparseMatrixCSC construction.
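A minimal sketch of the fast path, assuming a fixed-pattern buffer Q_buf that matches the workspace's sparsity pattern and a hypothetical fill_precision! helper that overwrites its nonzero values in place:

for θ in θ_grid
    fill_precision!(Q_buf, θ)                   # hypothetical: refill Q_buf.nzval for θ
    update_precision_values!(ws, Q_buf.nzval)   # no pattern check, no intermediate matrix
    ensure_numeric!(ws)                         # numeric-only refactorization
    # ... e.g. x = workspace_solve(ws, b)
end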
GaussianMarkovRandomFields.update_precision_values! Function
update_precision_values!(ws::GMRFWorkspace, nzval::AbstractVector)
Update the workspace's precision matrix nonzero values directly. nzval must have the same length as ws.Q.nzval.
Invalidates all cached results and marks the workspace as not owned by any WorkspaceGMRF.
GaussianMarkovRandomFields.ensure_numeric! Function
ensure_numeric!(ws::GMRFWorkspace)
Ensure the numeric factorization is current. If the precision values have been updated since the last factorization, performs a numeric-only refactorization (reusing the existing symbolic analysis).
GaussianMarkovRandomFields.ensure_selinv! Function
ensure_selinv!(ws::GMRFWorkspace)
Ensure the selected inverse is computed and cached in the backend. Triggers numeric factorization if needed.
GaussianMarkovRandomFields.dimension Function
dimension(ws::GMRFWorkspace) -> Int
Return the dimension (number of rows/columns) of the precision matrix.
The low-level linear-algebra operations workspace_solve, backward_solve, selinv, and selinv_diag are also defined on GMRFWorkspace. See Solvers for documentation of the first three (used via any GMRF) and the source for selinv_diag (a convenience that returns only the diagonal of selinv(ws)).
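A short sketch of direct, workspace-level use (no WorkspaceGMRF involved), assuming Q_new shares the workspace's sparsity pattern and b has length dimension(ws):

update_precision!(ws, Q_new)   # copy new nonzero values into the internal buffer
x = workspace_solve(ws, b)     # solve Q x = b, refactorizing numerically if stale
d = selinv_diag(ws)            # diag(Q⁻¹), i.e. the marginal variances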
WorkspaceGMRF
GaussianMarkovRandomFields.WorkspaceGMRF Type
WorkspaceGMRF{T, B, W, C} <: AbstractGMRF{T, SparseMatrixCSC{T, Int}}
A GMRF backed by a GMRFWorkspace for high-performance repeated operations, with optional linear equality constraints.
Each WorkspaceGMRF owns a snapshot of its precision matrix (nzval copy) and a version tag. Before any workspace operation, ensure_loaded! checks whether the workspace currently holds this GMRF's data — if not, it reloads the precision values and triggers refactorization. This makes it safe for multiple WorkspaceGMRFs to share the same workspace (at the cost of a refactorization when switching between them).
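A sketch of these sharing semantics, reusing the AR1Model setup and data vector z from the earlier example (the comments describe the expected behavior under the version-tag mechanism):

ws = make_workspace(model; τ = 1.0, ρ = 0.5)
g1 = model(ws; τ = 1.0, ρ = 0.5)   # WorkspaceGMRF sharing ws
g2 = model(ws; τ = 2.0, ρ = 0.8)   # second WorkspaceGMRF sharing the same ws

logpdf(g1, z)   # if ws holds stale data, g1's values are reloaded and refactorized
logpdf(g1, z)   # ws is now current for g1, so the numeric factorization is reused
logpdf(g2, z)   # switching to g2 costs one numeric refactorization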
Fields
- mean: The unconstrained mean vector.
- precision: This GMRF's precision matrix (snapshot, owned by this instance).
- workspace: Shared factorization engine.
- constraints: nothing or ConstraintInfo{T} with precomputed constraint quantities.
- version: Workspace version tag — used by ensure_loaded! to detect stale state.
GaussianMarkovRandomFields.ConstraintInfo Type
ConstraintInfo{T}
Precomputed constraint quantities for a constrained WorkspaceGMRF. Stores the constraint matrix A, vector e, and derived quantities (Ã^T = Q⁻¹A^T, L_c = chol(AÃ^T), constrained mean, log correction).
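For orientation (this is an assumption, not spelled out in the docstring): under the standard conditioning-by-kriging construction for a hard constraint Ax = e, the constrained mean would be μ_c = μ − Ã(AÃ^T)⁻¹(Aμ − e) = μ − Q⁻¹A^T(AQ⁻¹A^T)⁻¹(Aμ − e), with L_c used to apply (AÃ^T)⁻¹, and the log correction adjusting the density's normalizing constant under the constraint.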
WorkspacePool
GaussianMarkovRandomFields.WorkspacePool Type
WorkspacePool{T, B}
A task-safe pool of GMRFWorkspace instances for use across multiple Julia tasks. Uses a Channel-based checkout/checkin pattern (not threadid()-indexed, since Julia tasks can migrate between threads).
Each workspace in the pool owns its own backend factorization, so checkout/checkin is the right primitive for sharing across concurrent tasks.
Concurrent factorization
Whether tasks can actually factorize in parallel depends on the backend. CHOLMODBackend (the default) goes through CHOLMOD, which holds a global lock — concurrent factorizations serialize gracefully but do not run in parallel. CliqueTreesBackend is pure-Julia and thread-safe, so a pool of CliqueTreesBackend workspaces does parallelize numeric factorization. Build the pool with the desired backend if parallel factorization matters.
Usage
pool = WorkspacePool(Q; size=Threads.nthreads())
# RAII pattern (recommended):
with_workspace(pool) do ws
    update_precision!(ws, Q_new)
    x = workspace_solve(ws, b)
end

# Manual checkout/checkin:
ws = checkout(pool)
try
    # use ws...
finally
    checkin(pool, ws)
end

GaussianMarkovRandomFields.checkout Function
checkout(pool::WorkspacePool) -> GMRFWorkspace
Take a workspace from the pool. Blocks if all workspaces are currently checked out.
GaussianMarkovRandomFields.checkin Function
checkin(pool::WorkspacePool, ws::GMRFWorkspace)
Return a workspace to the pool.
GaussianMarkovRandomFields.with_workspace Function
with_workspace(f, pool::AbstractLatentWorkspacePool)
Execute f(ws) with a workspace checked out from the pool. The workspace is guaranteed to be returned to the pool when f completes, even if it throws.
This is the protocol-level default: any AbstractLatentWorkspacePool subtype that implements checkout and checkin inherits this behavior automatically. Subtypes may override to add logging, metrics, or alternate resource semantics.
Example
result = with_workspace(pool) do ws
    update_precision!(ws, Q_new)
    workspace_solve(ws, b)
end

Backends
GaussianMarkovRandomFields.WorkspaceBackend Type
WorkspaceBackend
Abstract type for factorization backends used by GMRFWorkspace.
Each backend must implement:
- refactorize!(b, Q::Symmetric) — numeric-only refactorization reusing the symbolic analysis
- backend_solve(b, rhs) — solve Q x = rhs
- compute_logdet(b) — log determinant of Q
- compute_selinv!(b) — compute and cache the selected inverse internally
- get_selinv(b) — return the cached selected inverse (type is backend-specific)
- get_selinv_diag(b) — return the diagonal of Q⁻¹ as a Vector
- backend_backward_solve(b, x) — compute L^T \ x for sampling
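For illustration, a hypothetical (and deliberately naive) backend built on a dense Cholesky could implement the numeric part of this protocol as follows; a real backend would keep and reuse a sparse symbolic analysis, and the selected-inverse methods are omitted here:

using LinearAlgebra

mutable struct DenseBackend{T} <: WorkspaceBackend
    F::Cholesky{T, Matrix{T}}   # replaced on every refactorize!
end

function GaussianMarkovRandomFields.refactorize!(b::DenseBackend, Q::Symmetric)
    b.F = cholesky(Symmetric(Matrix(Q)))   # dense stand-in: no symbolic reuse
    return b
end

GaussianMarkovRandomFields.backend_solve(b::DenseBackend, rhs) = b.F \ rhs
GaussianMarkovRandomFields.compute_logdet(b::DenseBackend) = logdet(b.F)
# L^T \ x, used when drawing samples:
GaussianMarkovRandomFields.backend_backward_solve(b::DenseBackend, x) = b.F.U \ x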
GaussianMarkovRandomFields.CHOLMODBackend Type
CHOLMODBackend{T} <: WorkspaceBackend
CHOLMOD-based factorization backend. Owns a CHOLMOD.Factor whose symbolic factorization (permutation, elimination tree, supernodes) is computed once and reused across numeric refactorizations.
Caches the selected inverse as a SparseMatrixCSC after compute_selinv!.
GaussianMarkovRandomFields.CliqueTreesBackend Type
CliqueTreesBackend{T, I, F} <: WorkspaceBackend
Pure-Julia sparse Cholesky backend via CliqueTrees.jl. Thread-safe — no global state, so multiple backends can be used concurrently from different tasks.
Stores two ChordalCholesky objects sharing the same symbolic factorization:
- factor: holds the numeric Cholesky factor (for solve, logdet, backward solve)
- selinv_factor: holds the selected inverse after compute_selinv!
The selinv result stays cached in selinv_factor until the next refactorize!.
Also stores a lower-triangle buffer (Q_lower) and a precomputed index map from the full symmetric Q's nzval to Q_lower's nzval for zero-scan updates.
See Also
- Latent Models — the LatentModel interface that factory hooks dispatch on.
- Automatic Differentiation Reference — WorkspaceGMRF supports Zygote rrules for logpdf and gaussian_approximation, and ForwardDiff through the unconstrained constructor.