
Graphical Lasso

The graphical lasso learns a GMRF from data by estimating a sparse precision matrix. Given a data matrix X, the algorithm solves

max_{Ω ≻ 0}  log det Ω − tr(SΩ) − λ ‖Ω‖₁

where S is the sample covariance and λ is a sparsity-inducing penalty.

The implementation follows the approach of [4], which combines soft-thresholding of the sample covariance with a maximum-determinant positive-definite matrix completion via chordal graph techniques (using CliqueTrees.jl).
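
The soft-thresholding ingredient of that approach can be illustrated in plain Julia. This is a sketch only: `soft_threshold` is a hypothetical helper written here for illustration, and the package performs this step internally, together with the chordal matrix-completion step, inside `graphical_lasso`.

```julia
using LinearAlgebra, Statistics

# Elementwise soft-thresholding: shrink each entry toward zero by λ.
# (Hypothetical helper for illustration; not part of the package API.)
soft_threshold(x, λ) = sign(x) * max(abs(x) - λ, zero(x))

X = randn(200, 5)            # 200 samples, 5 variables
S = cov(X)                   # sample covariance
λ = 0.1
T = soft_threshold.(S, λ)    # shrink off-diagonal entries toward zero
T[diagind(T)] = diag(S)      # the diagonal is typically left unpenalized
```

Entries of S whose magnitude falls below λ are set exactly to zero, which is what induces sparsity in the estimated precision matrix.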

Restricted Graphical Lasso

Instead of a scalar threshold λ, you can pass a sparse matrix whose sparsity pattern defines which entries to penalize, and whose values define per-entry thresholds. This is useful when the conditional independence structure is partially known.
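
Such a per-entry threshold matrix can be assembled with the SparseArrays standard library. The 4-variable setup and the values below are hypothetical, and the final call is only sketched, not executed:

```julia
using SparseArrays

# Hypothetical 4-variable problem where only the (1,3) and (2,4) partial
# correlations are uncertain and should be penalized; entries outside the
# sparsity pattern are left unpenalized (structure assumed known there).
p = 4
Λ = spzeros(p, p)
Λ[1, 3] = Λ[3, 1] = 0.2   # per-entry threshold for the (1,3) pair
Λ[2, 4] = Λ[4, 2] = 0.2   # per-entry threshold for the (2,4) pair

# Passing Λ in place of a scalar λ would run the restricted variant:
# graphical_lasso(X, Λ)   (sketched, not executed here)
```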

API Reference

GaussianMarkovRandomFields.graphical_lasso Function
```julia
graphical_lasso(X::AbstractMatrix, threshold; blocksize::Int=256, shift=zero(T), alg=nothing)
```

Learn a GMRF from data by solving the graphical lasso problem:

maximize  log det Ω - tr(ΣΩ) - λ ‖Ω‖₁
such that  Ω is positive definite

where Σ is the sample covariance and Ω is the GMRF's precision matrix.

threshold can be a scalar λ for uniform penalization, or a SparseMatrixCSC for per-entry penalties within a given sparsity pattern (restricted graphical lasso).

shift adds a constant to the diagonal of the sample covariance; this regularization improves conditioning, and can help convergence when the sample covariance is singular or nearly so (e.g. with fewer samples than variables).
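
The effect of the diagonal shift can be seen on a small stdlib-only example. This sketch applies the shift to the sample covariance directly, which is assumed here to mirror what the `shift` keyword does internally:

```julia
using LinearAlgebra, Statistics

# With fewer samples than variables the sample covariance is rank-deficient,
# so it is not strictly positive definite; a small diagonal shift fixes that.
X = randn(5, 10)           # 5 samples, 10 variables: cov(X) is singular
S = cov(X)
shift = 1e-2
S_reg = S + shift * I      # regularized covariance

eigmin(Symmetric(S))       # smallest eigenvalue is (numerically) zero
isposdef(Symmetric(S_reg)) # strictly positive definite after the shift
```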
