Workspaces

Workspaces let you reuse a symbolic Cholesky factorization across many precision-matrix updates with a shared sparsity pattern. This is the key optimization for inner loops that evaluate a GMRF at many hyperparameter values — INLA-style hyperparameter optimization, TMB-style marginal-likelihood maximization, HMC/NUTS over hyperparameters, or any MAP loop that repeatedly re-factors the same model with different numeric values.

Why workspaces exist

A fresh GMRF constructed from a sparse precision matrix Q pays the full symbolic Cholesky cost every time: fill-reducing reordering, elimination tree, supernode layout. For a typical n = 10^4 SPDE-discretized Matérn prior, that's milliseconds per call — negligible for a single evaluation, but dominant inside an inner loop that touches the prior hundreds or thousands of times with the same sparsity pattern but different nonzero values.

A workspace holds the symbolic analysis persistently and offers a numeric-only refactorization on each update. The factorization pattern is computed once; subsequent updates just re-run the numeric phase.

julia
using GaussianMarkovRandomFields

model = AR1Model(1000)

# One-time symbolic factorization, stored in the workspace
ws = make_workspace(model; τ = 1.0, ρ = 0.5)

# Inner loop: each call does numeric-only refactorization
for θ in θ_grid
    gmrf = model(ws; θ...)   # returns a WorkspaceGMRF <: AbstractGMRF
    ℓ = logpdf(gmrf, z)
    # ...
end

See also the tutorial Reusing factorizations across hyperparameters for a concrete timing comparison.

Core types

  • GMRFWorkspace — the default workspace; wraps a single sparse precision matrix and a WorkspaceBackend (CHOLMOD or CliqueTrees).

  • WorkspaceGMRF — an AbstractGMRF backed by a GMRFWorkspace. Owns its own nzval snapshot and a version tag, so multiple instances can safely share one workspace (switching between them triggers a numeric refactorization — see the sketch after this list).

  • WorkspacePool — a task-safe pool of workspaces for parallel use across Julia tasks.
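
A minimal sketch of the sharing behavior, reusing the AR1Model from the example above (z is a hypothetical data vector):

julia
ws = make_workspace(model; τ = 1.0, ρ = 0.5)

g1 = model(ws; τ = 1.0, ρ = 0.5)
g2 = model(ws; τ = 2.0, ρ = 0.3)   # shares ws, owns its own nzval snapshot

logpdf(g1, z)   # factorizes g1's values
logpdf(g2, z)   # ws now holds stale data: reload + numeric refactorization
logpdf(g1, z)   # switching back triggers another numeric refactorization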

Parallelism and pools

WorkspacePool hands out workspaces via Channel-based checkout/checkin, safe across tasks that may migrate threads:

julia
pool = make_workspace_pool(model; size = Threads.nthreads())

Threads.@threads for θ in θ_grid
    with_workspace(pool) do ws
        gmrf = model(ws; θ...)
        # ...
    end
end

CHOLMOD serializes factorizations

The default CHOLMODBackend uses CHOLMOD, which holds a global lock — concurrent factorizations serialize gracefully but don't run in parallel. For actual parallel numeric factorization, build the pool with CliqueTreesBackend workspaces (pure Julia, no global state).
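
The backend-selection keyword is not shown in this reference, so the following is a hypothetical sketch — check the package for the exact spelling:

julia
# Hypothetical `backend` keyword; the real argument name may differ
pool = make_workspace_pool(model; size = Threads.nthreads(),
                           backend = CliqueTreesBackend)
# then use the pool exactly as in the Threads.@threads loop above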

Extension protocol

Peer packages can provide their own workspace types for models that don't fit GMRFWorkspace's "one sparse precision, one factorization" shape — for example, a model whose joint precision has block structure and is maintained through several per-block factorizations rather than a single global factor.

Experimental API (pre v0.5)

The AbstractLatentWorkspace / AbstractLatentWorkspacePool protocol is experimental. The API surface may change until it has been validated against at least two downstream implementations.

Workspace protocol

A peer AbstractLatentWorkspace must support being passed as the second argument to the model's callable form:

julia
struct MyPeerWorkspace <: AbstractLatentWorkspace
    # internals
end

function (m::MyLatentModel)(ws::MyPeerWorkspace; kwargs...)
    # produce an AbstractGMRF using ws, with numeric refactorization
    # instead of fresh symbolic analysis
end

GaussianMarkovRandomFields.make_workspace(m::MyLatentModel; kwargs...) =
    MyPeerWorkspace(m; kwargs...)

How the workspace detects staleness or manages sharing across multiple GMRFs referencing it is the implementation's own business. GMRFWorkspace uses integer version tags (loaded_version + WorkspaceGMRF.version), but this is not part of the protocol — peer workspaces may use a different mechanism or sidestep sharing entirely.

Pool protocol

A peer AbstractLatentWorkspacePool must implement checkout and checkin. The RAII-style with_workspace method is provided by a generic default on the abstract pool type and does not need to be reimplemented, though subtypes may override it to add logging, metrics, or alternate resource semantics.
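
A minimal conforming sketch, reusing the MyPeerWorkspace type from the previous example and backing the pool with a Channel:

julia
struct MyPeerPool <: AbstractLatentWorkspacePool
    free::Channel{MyPeerWorkspace}   # holds the currently checked-in workspaces
end

GaussianMarkovRandomFields.checkout(p::MyPeerPool) = take!(p.free)   # blocks when empty
GaussianMarkovRandomFields.checkin(p::MyPeerPool, ws::MyPeerWorkspace) = put!(p.free, ws)

# with_workspace(f, p::MyPeerPool) is inherited from the generic default.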

API reference

Abstract types

GaussianMarkovRandomFields.AbstractLatentWorkspace Type
julia
AbstractLatentWorkspace

Abstract supertype for workspace objects that persist factorization state across precision-matrix updates for a LatentModel.

Experimental API (pre v0.5)

The AbstractLatentWorkspace protocol is experimental. The API surface may change until it has been validated against at least two downstream implementations.

Protocol

Concrete subtypes must support the (model::LatentModel)(ws; θ...) callable form, returning a valid AbstractGMRF. How the workspace detects staleness or manages sharing across multiple GMRFs referencing it is an implementation detail — e.g. GMRFWorkspace uses integer version tags, but peer workspaces are free to use other mechanisms.

See also: make_workspace, GMRFWorkspace.

source
GaussianMarkovRandomFields.AbstractLatentWorkspacePool Type
julia
AbstractLatentWorkspacePool

Abstract supertype for task-safe pools of workspaces for use across multiple concurrent Julia tasks.

Experimental API (pre v0.5)

See AbstractLatentWorkspace for stability status.

Protocol

Concrete subtypes must implement checkout and checkin. A generic with_workspace default is provided on top of these; subtypes may override it to add logging, metrics, or alternate resource semantics.

See also: make_workspace_pool, WorkspacePool.

source

Factory hooks

GaussianMarkovRandomFields.make_workspace Function
julia
make_workspace(m::LatentModel; θ_ref...) -> AbstractLatentWorkspace

Construct a workspace for evaluating m repeatedly across hyperparameter values. The returned workspace is suitable for use as the second argument to m(ws; θ...).

Default implementation: returns GMRFWorkspace(m; θ_ref...). Downstream LatentModel subtypes may override make_workspace to return a peer workspace type when their factorization structure doesn't fit GMRFWorkspace's "one sparse precision, one factorization" shape.

Experimental API (pre v0.5)

The API surface may change until it has been validated against at least two downstream implementations.

Example

julia
model = AR1Model(100)
ws = make_workspace(model; τ = 1.0, ρ = 0.5)
# loop:
for θ in θ_grid
    gmrf = model(ws; θ...)
    logpdf(gmrf, z)
end
source
GaussianMarkovRandomFields.make_workspace_pool Function
julia
make_workspace_pool(m::LatentModel; size=Threads.nthreads(), θ_ref...) -> AbstractLatentWorkspacePool

Construct a task-safe pool of workspaces for parallel evaluation of m.

Default implementation: returns WorkspacePool(m; size=size, θ_ref...). Downstream LatentModel subtypes may override to return a peer pool type.

Experimental API (pre v0.5)

The API surface may change until it has been validated against at least two downstream implementations.
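
Example

A sketch reusing the AR1Model setup from make_workspace above; θ_grid and z are hypothetical:

julia
model = AR1Model(1000)
pool = make_workspace_pool(model; size = 4, τ = 1.0, ρ = 0.5)

Threads.@threads for θ in θ_grid
    with_workspace(pool) do ws
        gmrf = model(ws; θ...)
        logpdf(gmrf, z)
    end
end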

source

GMRFWorkspace

GaussianMarkovRandomFields.GMRFWorkspace Type
julia
GMRFWorkspace{T, B}

A mutable workspace that persists a symbolic Cholesky factorization across precision matrix updates with the same sparsity pattern. This is the key optimization for INLA/TMB/HMC inner loops where the sparsity pattern is fixed but numeric values change with hyperparameters.

The workspace owns an internal Q buffer (a SparseMatrixCSC whose nzval is overwritten on updates) and a factorization backend that reuses its symbolic analysis across numeric refactorizations. The backend also owns the selected inverse cache in whatever format it prefers.

Multiple WorkspaceGMRFs can share the same workspace. A version counter tracks whose data is currently loaded — if a WorkspaceGMRF finds the workspace holds stale data, it transparently reloads and refactorizes.

Fields

  • Q: Internal precision matrix buffer (mutable nzval, fixed pattern)

  • backend: Factorization backend (e.g., CHOLMODBackend, CliqueTreesBackend)

  • rhs, solution: Preallocated work vectors

  • numeric_valid, selinv_valid, logdet_valid: Lazy invalidation flags

  • logdet_cache: Cached log-determinant

  • next_version: Counter for assigning unique versions to WorkspaceGMRFs

  • loaded_version: Version of the WorkspaceGMRF whose data is currently factorized (0 means data was loaded directly via update_precision!, not by a WorkspaceGMRF)
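
Example

A usage sketch; Q_new is assumed to share the workspace's sparsity pattern, and b is a right-hand side of matching dimension:

julia
ws = make_workspace(model; τ = 1.0, ρ = 0.5)
update_precision!(ws, Q_new)   # copy nzval, invalidate all caches
ensure_numeric!(ws)            # numeric-only refactorization
x = workspace_solve(ws, b)     # solve Q x = b with the cached factor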

source
GaussianMarkovRandomFields.update_precision! Function
julia
update_precision!(ws::GMRFWorkspace, Q_new::SparseMatrixCSC)

Update the workspace's precision matrix values from Q_new. The sparsity pattern of Q_new must exactly match the workspace's pattern (same colptr and rowval). Only the numeric values (nzval) are copied.

Invalidates all cached results (factorization, selinv, logdet) and marks the workspace as not owned by any WorkspaceGMRF.

Hot-loop performance

Each call runs an O(nnz) pattern-match check. In inner loops where pattern equality is guaranteed (e.g. pulling values from a fixed-pattern buffer), prefer update_precision_values!, which takes the nzval vector directly and skips both the check and any intermediate SparseMatrixCSC construction.
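
A sketch of the fast path; fill_nzvals! is a hypothetical user function that overwrites the buffer's values in place for each θ:

julia
Q_buf = copy(ws.Q)                            # fixed pattern, reused every iteration
for θ in θ_grid
    fill_nzvals!(Q_buf.nzval, θ)              # hypothetical: write θ-dependent values
    update_precision_values!(ws, Q_buf.nzval) # skips pattern check and matrix construction
    ensure_numeric!(ws)                       # numeric-only refactorization
end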

source
GaussianMarkovRandomFields.update_precision_values! Function
julia
update_precision_values!(ws::GMRFWorkspace, nzval::AbstractVector)

Update the workspace's precision matrix nonzero values directly. nzval must have the same length as ws.Q.nzval.

Invalidates all cached results and marks the workspace as not owned by any WorkspaceGMRF.

source
GaussianMarkovRandomFields.ensure_numeric! Function
julia
ensure_numeric!(ws::GMRFWorkspace)

Ensure the numeric factorization is current. If the precision values have been updated since the last factorization, performs a numeric-only refactorization (reusing the existing symbolic analysis).

source
GaussianMarkovRandomFields.ensure_selinv! Function
julia
ensure_selinv!(ws::GMRFWorkspace)

Ensure the selected inverse is computed and cached in the backend. Triggers numeric factorization if needed.

source
GaussianMarkovRandomFields.dimension Function
julia
dimension(ws::GMRFWorkspace) -> Int

Return the dimension (number of rows/columns) of the precision matrix.

source

The low-level linear-algebra operations workspace_solve, backward_solve, selinv, and selinv_diag are also defined on GMRFWorkspace. See Solvers for documentation of the first three (used via any GMRF) and the source for selinv_diag (a convenience that returns only the diagonal of selinv(ws)).
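
For example, marginal variances are the diagonal of Q⁻¹, which the selected inverse provides — a minimal sketch:

julia
ensure_selinv!(ws)               # compute and cache the selected inverse
marginal_var = selinv_diag(ws)   # Vector: diagonal of Q⁻¹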

WorkspaceGMRF

GaussianMarkovRandomFields.WorkspaceGMRF Type
julia
WorkspaceGMRF{T, B, W, C} <: AbstractGMRF{T, SparseMatrixCSC{T, Int}}

A GMRF backed by a GMRFWorkspace for high-performance repeated operations, with optional linear equality constraints.

Each WorkspaceGMRF owns a snapshot of its precision matrix (nzval copy) and a version tag. Before any workspace operation, ensure_loaded! checks whether the workspace currently holds this GMRF's data — if not, it reloads the precision values and triggers refactorization. This makes it safe for multiple WorkspaceGMRFs to share the same workspace (at the cost of a refactorization when switching between them).

Fields

  • mean: The unconstrained mean vector.

  • precision: This GMRF's precision matrix (snapshot, owned by this instance).

  • workspace: Shared factorization engine.

  • constraints: nothing or ConstraintInfo{T} with precomputed constraint quantities.

  • version: Workspace version tag — used by ensure_loaded! to detect stale state.

source
GaussianMarkovRandomFields.ConstraintInfo Type
julia
ConstraintInfo{T}

Precomputed constraint quantities for a constrained WorkspaceGMRF. Stores the constraint matrix A, the vector e, and derived quantities (Ã^T = Q⁻¹A^T, L_c = chol(A Ã^T), the constrained mean, and the log correction).

source

WorkspacePool

GaussianMarkovRandomFields.WorkspacePool Type
julia
WorkspacePool{T, B}

A task-safe pool of GMRFWorkspace instances for use across multiple Julia tasks. Uses a Channel-based checkout/checkin pattern (not threadid()-indexed, since Julia tasks can migrate between threads).

Each workspace in the pool owns its own backend factorization, so checkout/checkin is the right primitive for sharing across concurrent tasks.

Concurrent factorization

Whether tasks can actually factorize in parallel depends on the backend. CHOLMODBackend (the default) goes through CHOLMOD, which holds a global lock — concurrent factorizations serialize gracefully but do not run in parallel. CliqueTreesBackend is pure-Julia and thread-safe, so a pool of CliqueTreesBackend workspaces does parallelize numeric factorization. Build the pool with the desired backend if parallel factorization matters.

Usage

julia
pool = WorkspacePool(Q; size=Threads.nthreads())

# RAII pattern (recommended):
with_workspace(pool) do ws
    update_precision!(ws, Q_new)
    x = workspace_solve(ws, b)
end

# Manual checkout/checkin:
ws = checkout(pool)
try
    # use ws...
finally
    checkin(pool, ws)
end
source
GaussianMarkovRandomFields.checkout Function
julia
checkout(pool::WorkspacePool) -> GMRFWorkspace

Take a workspace from the pool. Blocks if all workspaces are currently checked out.

source
GaussianMarkovRandomFields.checkin Function
julia
checkin(pool::WorkspacePool, ws::GMRFWorkspace)

Return a workspace to the pool.

source
GaussianMarkovRandomFields.with_workspace Function
julia
with_workspace(f, pool::AbstractLatentWorkspacePool)

Execute f(ws) with a workspace checked out from the pool. The workspace is guaranteed to be returned to the pool when f completes, even if it throws.

This is the protocol-level default: any AbstractLatentWorkspacePool subtype that implements checkout and checkin inherits this behavior automatically. Subtypes may override to add logging, metrics, or alternate resource semantics.

Example

julia
result = with_workspace(pool) do ws
    update_precision!(ws, Q_new)
    workspace_solve(ws, b)
end
source

Backends

GaussianMarkovRandomFields.WorkspaceBackend Type
julia
WorkspaceBackend

Abstract type for factorization backends used by GMRFWorkspace.

Each backend must implement the following (a minimal illustrative sketch follows the list):

  • refactorize!(b, Q::Symmetric) — numeric-only refactorization reusing the symbolic analysis

  • backend_solve(b, rhs) — solve Q x = rhs

  • compute_logdet(b) — log determinant of Q

  • compute_selinv!(b) — compute and cache selected inverse internally

  • get_selinv(b) — return cached selected inverse (type is backend-specific)

  • get_selinv_diag(b) — return diagonal of Q⁻¹ as Vector

  • backend_backward_solve(b, x) — compute L^T \ x for sampling
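
A deliberately naive dense sketch of this surface — useless at scale (dense factor, full inverse), but it exercises every required method. It assumes the protocol functions can be imported from GaussianMarkovRandomFields and extended:

julia
using LinearAlgebra
import GaussianMarkovRandomFields: WorkspaceBackend, refactorize!, backend_solve,
    compute_logdet, compute_selinv!, get_selinv, get_selinv_diag, backend_backward_solve

mutable struct DenseBackend{T} <: WorkspaceBackend
    F::Cholesky{T, Matrix{T}}         # numeric factor, rebuilt on refactorize!
    Sinv::Union{Nothing, Matrix{T}}   # cached inverse ("selected" = full, since dense)
end
DenseBackend(Q::AbstractMatrix{T}) where {T} =
    DenseBackend{T}(cholesky(Symmetric(Matrix(Q))), nothing)

function refactorize!(b::DenseBackend, Q::Symmetric)
    b.F = cholesky(Symmetric(Matrix(Q)))   # dense: there is no symbolic phase to reuse
    b.Sinv = nothing                       # invalidate the cached inverse
    return b
end
backend_solve(b::DenseBackend, rhs) = b.F \ rhs
compute_logdet(b::DenseBackend) = logdet(b.F)
compute_selinv!(b::DenseBackend) = (b.Sinv = inv(b.F); b)
get_selinv(b::DenseBackend) = (b.Sinv === nothing && compute_selinv!(b); b.Sinv)
get_selinv_diag(b::DenseBackend) = diag(get_selinv(b))
backend_backward_solve(b::DenseBackend, x) = b.F.U \ x   # F.U == Lᵀ, so this solves Lᵀ \ x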

source
GaussianMarkovRandomFields.CHOLMODBackend Type
julia
CHOLMODBackend{T} <: WorkspaceBackend

CHOLMOD-based factorization backend. Owns a CHOLMOD.Factor whose symbolic factorization (permutation, elimination tree, supernodes) is computed once and reused across numeric refactorizations.

Caches the selected inverse as a SparseMatrixCSC after compute_selinv!.

source
GaussianMarkovRandomFields.CliqueTreesBackend Type
julia
CliqueTreesBackend{T, I, F} <: WorkspaceBackend

Pure-Julia sparse Cholesky backend via CliqueTrees.jl. Thread-safe — no global state, so multiple backends can be used concurrently from different tasks.

Stores two ChordalCholesky objects sharing the same symbolic factorization:

  • factor: holds the numeric Cholesky factor (for solve, logdet, backward solve)

  • selinv_factor: holds the selected inverse after compute_selinv!

The selinv result stays cached in selinv_factor until the next refactorize!.

Also stores a lower-triangle buffer (Q_lower) and a precomputed index map from the full symmetric Q's nzval to Q_lower's nzval for zero-scan updates.

source

See Also