Tutorial
Introduction
In this tutorial, we're going to explore how to use SelectedInversion.jl to compute the selected inverse of two different sparse symmetric positive definite matrices.
Problem setup
We're going to grab two matrices from the SuiteSparse matrix collection. Refer to the website for more information on these matrices and where they came from.
Feel free to skip ahead, as this part is not related directly to SelectedInversion.jl.
using MatrixMarket, SuiteSparseMatrixCollection
mat_names = ["494_bus", "parabolic_fem"]
ssmc = ssmc_db()
mats_df = ssmc[ssmc.name .∈ Ref(mat_names), :]
paths = fetch_ssmc(mats_df, format = "MM")
paths = [joinpath(path, "$(mats_df.name[i]).mtx") for (i, path) in enumerate(paths)]
A = MatrixMarket.mmread(paths[1]) # 494_bus
B = MatrixMarket.mmread(paths[2]) # parabolic_fem
size(A), size(B)
((494, 494), (525825, 525825))
SelInv
We're going to start by tackling the smaller of the two matrices.
A
494×494 SparseMatrixCSC{Float64, Int64} with 1666 stored entries:
⎡⢟⣵⡀⠈⠡⠀⠀⠀⠀⠊⠀⡀⠊⠂⠀⠁⠐⠀⠀⠀⠀⠈⢂⠀⠀⠀⢀⠀⠀⠐⠀⠀⠀⠔⠚⠄⠀⠀⠀⠀⎤
⎢⡀⠈⠑⣤⠀⠀⠈⠀⡁⠀⠀⠡⡈⢈⡑⠊⡀⠐⠀⠂⡀⡄⠠⡀⠈⠁⡄⠀⠀⠀⠂⠀⠄⠤⠁⠐⠀⠀⠀⠀⎥
⎢⠁⠂⠀⠀⠑⢄⡘⡄⠀⠁⠂⢀⠀⠁⠀⠄⠈⡀⠂⠁⠀⠘⠈⠀⠈⣁⠨⠐⡀⠂⠂⠨⠈⠠⠄⡌⠈⡀⠀⠀⎥
⎢⠀⠀⠂⠀⠒⠬⠻⣦⠀⠂⠁⠐⠀⠀⠂⠁⠀⠁⠀⠄⠂⠁⠂⠂⠀⠀⡄⠀⠐⠠⠀⡖⠀⢈⠂⠂⠐⠀⠀⠀⎥
⎢⡠⠀⠁⠈⠄⠀⠠⠀⠱⣦⡄⠀⠉⠀⠡⡂⠀⠈⠉⠐⠐⡆⠀⢅⠌⠀⠐⢄⠠⠀⠔⠁⠀⠠⢈⠀⠀⠀⠀⠀⎥
⎢⠀⠠⠄⡀⠈⢀⢁⠀⠀⠉⢻⣶⠀⠀⢈⠀⠄⠀⠁⢀⠀⠁⠀⠀⠀⠀⠀⡄⠀⠀⠀⠁⠀⠀⠀⠁⠀⠀⡀⡀⎥
⎢⠪⠀⡂⢈⠄⠀⠀⠀⠃⠀⠀⠀⠛⣤⠐⠀⠁⠈⠠⠢⠂⠀⠀⠀⠀⠀⡀⢄⠠⠐⠀⠚⡂⠤⠌⠈⠀⠠⠀⠀⎥
⎢⠄⠀⡱⠈⠀⠄⠌⠀⠡⠢⠂⠐⠐⠀⠑⣤⠅⠄⠑⠀⠂⠀⠀⠁⠀⠀⠀⠁⠀⠀⠤⠀⠄⠠⠀⠀⠀⡁⠀⠀⎥
⎢⠐⠀⢀⠈⠂⠠⠄⠀⡀⠀⠀⠁⡁⠀⠁⠅⠱⣦⢀⡈⠀⢀⡚⡈⠀⠀⠀⠀⠐⢑⠀⠄⠀⠀⢀⡤⡠⡄⠀⠀⎥
⎢⠀⠀⠠⠀⠌⠀⠀⠄⢃⠀⠁⢀⠠⡂⠑⠀⡀⠰⠑⢄⢀⠀⠈⠠⣤⠢⢠⠈⠀⠀⠁⠂⠍⠨⢀⠂⠈⠀⢂⠀⎥
⎢⡀⠀⠀⠬⣀⠀⠌⠀⠰⠤⠄⠀⠈⠀⠈⠀⠀⢀⠀⠐⠻⢆⡀⠀⠀⡀⠀⠆⠀⢁⡄⠂⢂⢀⠒⠠⠀⠐⠀⠀⎥
⎢⠈⠐⠀⠢⠂⠀⠨⠀⠄⢄⠀⠀⠀⠀⠄⠀⡚⠨⠂⡀⠀⠈⠻⣦⡀⠁⠀⠈⠀⠀⠁⠈⠈⢀⠈⢀⢀⠀⠈⠀⎥
⎢⠀⠀⠆⠀⠆⢠⠀⠀⠂⠁⠀⠀⠀⠀⠀⠀⠀⠀⠠⡛⠀⠠⠄⠈⠿⣧⡁⠀⠀⠈⠂⠄⡐⠀⠜⠀⠀⠀⠀⠘⎥
⎢⠀⠐⠀⠉⢂⠂⠀⠉⠐⢄⠀⠤⠀⢌⠄⠀⠀⠀⡀⠒⠠⠄⡀⠀⠁⠈⠵⣧⡀⠀⡀⠰⠀⢀⠤⠁⠀⠀⠀⠀⎥
⎢⢀⠀⠀⠀⠠⠈⠐⡀⠀⠂⠀⠀⢀⠂⠀⠀⢔⢀⠀⠀⠄⢀⠀⠀⡀⠀⠀⠈⠱⣦⡀⠊⠃⠐⡀⠄⠀⠀⠀⠀⎥
⎢⠀⠀⠈⠀⡈⡀⢠⠤⠔⠁⠄⠀⣠⠀⠀⠃⠀⠄⠡⠀⠠⠉⡁⠀⠈⠄⢀⡈⡠⠈⡑⢌⢈⠐⠐⡄⢀⡀⠀⠀⎥
⎢⢀⠄⠀⡅⠂⡀⡀⢀⠀⡀⠀⠀⠈⡌⠀⡁⠀⠀⡃⡁⠈⢐⠂⢀⠐⠈⠀⢀⢉⠀⢂⠐⠑⣤⡂⡌⠀⠀⠀⠀⎥
⎢⠚⠄⢁⠀⡀⠥⠨⠀⠂⠐⠄⠀⡂⠁⠀⠀⠀⡴⠠⠐⠘⡀⠂⢀⠒⠁⠄⠃⠀⠌⠐⠤⡈⠬⠱⢆⣀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠂⠠⠐⠀⠀⠀⠀⠀⠀⡀⠄⠠⠀⠮⠂⠀⢀⠀⠀⠐⠀⠀⠀⠀⠀⠀⠀⠰⠀⠀⠀⠘⠱⢆⣀⠀⎥
⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠨⠀⠀⠀⠀⠀⠀⠈⠐⠀⠀⠂⠀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⠱⣦⎦
Let's compute the selected inverse of A.
using SelectedInversion
Z, p = selinv(A)
Z
494×494 LinearAlgebra.Symmetric{Float64, SparseMatrixCSC{Float64, Int64}}:
0.571508 0.555408 ⋅ ⋅ … ⋅ ⋅ ⋅
0.555408 0.555408 0.532709 ⋅ ⋅ ⋅ ⋅
⋅ 0.532709 0.532709 0.308615 ⋅ ⋅ ⋅
⋅ ⋅ 0.308615 0.308618 ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ … ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.156681
⋮ ⋱
⋅ ⋅ ⋅ ⋅ … 0.144949 0.142319 0.138329
⋅ ⋅ ⋅ ⋅ 0.142853 0.145692 0.156411
⋅ ⋅ ⋅ ⋅ 0.165404 0.164882 0.157833
⋅ ⋅ ⋅ ⋅ 0.165435 0.163997 0.158009
⋅ ⋅ ⋅ 0.225619 0.164101 0.163763 0.156825
⋅ ⋅ ⋅ ⋅ … 0.161498 0.160675 0.15752
⋅ ⋅ ⋅ ⋅ 0.170756 0.162512 0.155864
⋅ ⋅ ⋅ ⋅ 0.162512 0.227046 0.16517
⋅ ⋅ ⋅ ⋅ 0.155864 0.16517 0.199368
Now the nonzero entries of Z correspond to entries in inv(A).
p is a permutation vector. Sparse Cholesky factorizations reorder the rows and columns of a matrix to reduce fill-in in the Cholesky factor. selinv computes its entries according to this permutation.
Concretely, this means that if we want to get a specific entry of the inverse, we need to apply the correct permutation first.
To test this, we're going to compute the dense full inverse. This is still feasible for such a small matrix, but not recommended in general.
A_inv = inv(Array(A))
494×494 Matrix{Float64}:
0.000454823 0.000455516 0.000455591 … 0.000455513 0.000455513
0.000455516 0.359149 0.168952 0.167124 0.167124
0.000455591 0.168952 0.763335 0.166584 0.166584
0.000455516 0.174329 0.168952 0.167124 0.167124
0.000455568 0.16706 0.181584 0.165555 0.165555
0.000455593 0.168953 0.280338 … 0.166584 0.166584
0.000455497 0.160492 0.160623 0.161281 0.161281
0.000455516 0.174329 0.168952 0.167124 0.167124
0.000455506 0.164428 0.166083 0.165823 0.165823
0.000455514 0.168785 0.170365 0.166959 0.166959
⋮ ⋱
0.000455516 0.165192 0.164654 … 0.163712 0.163712
0.000455493 0.159101 0.158863 0.15906 0.15906
0.000455513 0.167124 0.166584 0.178322 0.178252
0.000455513 0.167124 0.166584 0.178322 0.178251
0.000455513 0.167124 0.166584 0.178322 0.178251
0.000455513 0.167124 0.166584 … 0.178322 0.178251
0.000455513 0.167124 0.166584 0.178322 0.178251
0.000455513 0.167124 0.166584 0.182867 0.173707
0.000455513 0.167124 0.166584 0.173707 0.182867
Compare the values of Z and A_inv at some arbitrary index. They're not going to match:
A_inv[42, 172], Z[42, 172]
(0.6254934620417878, 0.0)
But if we permute Z first, they do match:
A_inv[42, 172], Z[invperm(p), invperm(p)][42, 172]
(0.6254934620417878, 0.6254934620414534)
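Note that materializing the fully depermuted matrix just to read one entry is wasteful. Since indexing a permuted matrix at (i, j) is the same as indexing the original matrix at the permuted indices, you can permute just the two indices instead. Here's a minimal self-contained sketch of that identity, using a random matrix and a made-up permutation rather than an actual selected inverse:

```julia
# Sketch: for any matrix Z and permutation vector p, a single entry of the
# depermuted matrix Z[q, q] (with q = invperm(p)) can be read by permuting
# only the two requested indices -- no need to materialize Z[q, q].
Z = rand(5, 5)
p = [3, 1, 5, 2, 4]   # an arbitrary permutation vector (illustrative)
q = invperm(p)

i, j = 2, 4
Z[q, q][i, j] == Z[q[i], q[j]]   # true
```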
If your use case calls for this kind of depermuted access, you can make life easier with the depermute keyword:
Z, _ = selinv(A; depermute=true)
Z
494×494 SparseMatrixCSC{Float64, Int64} with 2334 stored entries:
⎡⢟⣵⡀⡈⠡⠀⠀⠀⢁⠊⠀⡀⠊⣂⡄⠁⠘⠈⡀⢀⠀⠨⣃⠀⠀⠁⢀⢀⠀⠐⠀⠀⢠⢔⠚⠄⠀⠀⠀⠀⎤
⎢⡀⠨⠑⣤⠀⠐⠨⠀⡁⠀⠀⠡⡈⢈⡑⠊⡀⠰⠄⠂⡀⡄⠠⣀⠈⠁⡄⠀⠀⠀⠂⠀⠄⠤⠁⠑⠀⠀⠀⠀⎥
⎢⠁⠂⢀⠀⠑⢄⣘⡄⠀⠑⠂⢀⠀⡁⡂⠄⠈⡈⢂⠁⠀⢘⠈⠁⠈⣡⡩⢐⡀⢂⡂⣨⠌⢰⢄⣌⢈⡀⠀⠀⎥
⎢⠀⠀⠂⠂⠒⠼⠻⣦⠀⠊⠁⠐⠂⠂⠂⠁⠐⠁⠠⠆⠂⠓⠂⠂⠂⠂⡆⠐⠐⠰⠀⡖⠀⢸⠖⠒⠐⠀⠀⠀⎥
⎢⡡⠐⠁⠈⢄⠀⡠⠀⠱⣦⡄⠀⠉⠈⠡⡂⠀⠉⠉⠐⠐⡆⠀⢥⠌⠁⠑⢤⠠⠀⠔⠁⠀⠰⢈⠀⠀⠀⠀⠀⎥
⎢⠀⠠⠄⡀⠈⢀⢁⠀⠀⠉⢻⣶⢀⡀⢈⠀⠄⠀⢁⢀⠀⠁⠀⠈⠀⡀⡀⡄⠀⢀⠀⠁⠀⠀⢀⠁⠀⠀⡀⡀⎥
⎢⠪⢠⡂⢈⠄⠠⠨⠀⡃⠀⠀⠰⣛⣼⠖⠐⠑⢈⠠⠲⠂⠠⠄⠙⠂⠄⡄⢤⠠⠰⠂⠚⡂⠤⠬⠈⠠⠢⠄⠀⎥
⎢⠄⠉⡱⠈⠈⠌⠌⠀⠡⠢⠂⠐⢘⠁⠑⣤⠍⠄⠑⠁⠢⠈⠁⠁⠁⠁⠁⠉⠀⠈⠤⠈⠌⠨⠨⠀⠈⡁⠀⠀⎥
⎢⡒⠀⢀⡈⡂⠠⠔⠀⡄⠀⠀⠁⡑⢀⠃⠅⢱⣶⢀⡊⠀⢐⡚⣉⠒⡂⡂⠀⠐⢑⡀⠌⠂⢀⢐⡤⣰⡄⠀⠀⎥
⎢⠀⢈⠠⠁⠌⠐⠠⠆⢃⠀⠁⢐⢠⡂⠕⠀⡠⠰⠑⢄⢀⠀⠉⠠⣤⠢⢠⠈⠀⠰⠅⢢⠍⠨⢀⠆⠈⠀⢂⠀⎥
⎢⡀⡀⠀⠬⣀⢀⢬⠀⠰⠤⠄⠀⠈⡀⡈⠂⢀⢀⠀⠐⠻⢆⡀⠤⣀⡀⡀⢆⠂⢁⡆⢂⢒⢸⢒⠠⢀⠐⠐⠀⎥
⎢⠉⠘⠀⢢⠆⠀⠨⠀⠄⣄⡀⠀⣄⠁⠅⠀⡞⢨⠃⡀⠀⡌⠻⣦⡀⠁⠄⠈⠀⠀⠁⢈⠈⢈⠈⢁⢀⡀⠈⠀⎥
⎢⠄⠀⠆⠀⠆⣠⠨⠀⠆⠁⠀⠠⠈⠄⠅⠀⠸⠠⠠⡛⠀⠸⠄⠈⠿⣧⡅⠀⠀⠨⠃⠄⡐⢠⠼⡆⠨⠀⠀⠙⎥
⎢⠀⢐⠀⠉⢃⢊⢈⠉⠑⣄⠀⠬⠀⣍⡅⠀⠈⠈⡀⠒⠠⢌⡀⠁⠁⠉⠵⣧⡀⠈⡀⠰⠀⢨⢬⠁⢈⠀⠀⠀⎥
⎢⢀⠀⠀⠀⠠⢈⢐⡀⠀⠂⠀⢀⢀⡂⡀⠀⢔⢀⢀⡀⠌⢀⠀⠀⡀⡀⡀⠈⠱⣦⡁⢊⠃⠸⣈⠄⢀⠀⠀⠀⎥
⎢⠀⠀⠈⠀⡈⣨⢠⠤⠔⠁⠄⠀⣨⠀⡀⠃⡀⠌⠡⣁⠨⢉⡁⢀⠉⠄⢀⡈⡡⢈⡑⢌⢈⠐⠐⡄⢈⡁⠀⠀⎥
⎢⢀⢖⠀⡅⢂⣁⣀⣀⢀⡀⠀⠀⠈⡌⡂⡁⠈⢀⡃⡁⣘⣐⡂⢀⠐⣈⡀⣀⣉⡀⢂⠐⠑⣤⣂⡌⠀⠀⠀⠀⎥
⎢⠚⠄⢅⠀⡀⢵⢸⠁⠂⠐⠄⠐⡂⠃⠂⠂⠐⡴⠠⠔⠘⡐⠆⢀⠲⠧⠆⠓⠂⠜⠐⠤⡈⠼⠵⢇⣐⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠂⠰⠐⠀⠀⠀⠀⠀⠠⡂⠆⠠⠐⠾⠂⠀⢀⠐⠀⠰⠂⠂⠂⠐⠀⠐⠆⠰⠀⠀⠐⠘⠻⢆⣀⠀⎥
⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠨⠀⠁⠀⠀⠀⠀⠈⠐⠐⠀⠂⠀⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⠱⣦⎦
Now the nonzero entries of Z directly give you the corresponding entries of inv(A).
A_inv[42, 172], Z[42, 172]
(0.6254934620417878, 0.6254934620414534)
Supernodal setting
Now, let's tackle the bigger of the two matrices.
B
525825×525825 SparseMatrixCSC{Float64, Int64} with 3674625 stored entries:
⎡⠑⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠐⠦⣔⡦⣤⣤⣄⣀⡻⠿⢷⣒⣙⠛⠛⠛⎤
⎢⠀⠀⠑⢄⠀⠀⠀⠀⠀⠀⢦⡀⠀⠐⠲⠤⣄⣀⠀⠀⠀⠀⠀⠀⠐⠦⣄⡉⠉⠛⠛⠛⠃⠀⠀⠀⠈⠉⠙⠒⎥
⎢⠀⠀⠀⠀⠑⢄⠀⠀⠀⠀⠀⠙⢦⡀⠀⠀⠀⠈⠉⠓⠲⠤⣄⣀⠀⠀⠀⠉⠓⠦⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠑⢄⠀⠀⠀⠀⠀⠙⠲⣄⡘⠒⠒⠦⠤⣄⣀⣈⠉⠀⠀⠀⠀⠀⠀⠉⠓⠦⣄⡀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠑⢄⠀⠀⠀⠀⠀⠀⠉⠓⠮⢭⣷⣶⣶⣾⣭⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⠦⣄⡀⎥
⎢⠀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠻⢆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠙⠲⢤⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⎥
⎢⠀⠀⢀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠻⣦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡀⠀⠀⠈⠙⠲⢤⣀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠘⡆⠀⠀⠘⢦⠀⠀⠀⠀⠀⠈⠑⣤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠳⡄⠀⠀⠀⠀⠀⠈⠳⢤⡀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⢹⡀⠀⢲⠈⢧⠀⠀⠀⠀⠀⠀⠈⠻⢆⠀⠀⠀⠀⠀⠀⠀⠀⠙⣆⠀⠀⠀⠀⢦⠀⠀⠙⢦⡀⠀⠀⎥
⎢⠀⠀⠀⠀⢧⠀⠸⡄⡎⣇⠀⠀⠀⠀⠀⠀⠀⠀⠻⣦⡀⠀⠀⠀⠀⠀⠀⠈⢧⡀⠀⠀⠈⢧⠀⠀⡆⠙⣆⠀⎥
⎢⠀⠀⠀⠀⠘⡆⠀⢧⢹⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠑⣤⡀⠀⠀⠀⠀⠀⠀⠳⡄⠀⠀⠈⣇⠀⢹⠸⣼⡆⎥
⎢⠀⠀⠀⠀⠀⢹⡀⢸⣸⣿⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠻⢆⠀⠀⠀⠀⠀⠀⠙⣆⠀⠀⠘⣆⠈⣇⣿⣷⎥
⎢⠰⡄⠰⡄⠀⠀⠃⠀⠃⠛⣇⠀⠀⠰⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠛⣤⠀⠀⠀⠀⠀⠈⠧⣄⡀⠘⠀⠘⠘⠛⎥
⎢⠰⡽⡄⠹⡄⠀⠀⠀⠀⠀⠘⣆⠀⠀⠀⠉⠳⢤⡀⠀⠀⠀⠀⠀⠀⠀⠛⣤⠀⠀⠀⠀⢦⡀⠉⠓⢦⣀⠀⠀⎥
⎢⠀⣿⣧⠀⠹⡄⠀⠀⠀⠀⠀⠘⣆⠀⠀⠀⠀⠀⠉⠳⢤⡀⠀⠀⠀⠀⠀⠀⠛⣤⠀⠀⠀⠳⡄⠀⢧⢨⣳⡀⎥
⎢⠀⢹⣿⠀⠀⠹⡄⠀⠀⠀⠀⠀⠘⣆⠀⠀⠀⠀⠀⠀⠀⠉⠳⢤⡀⠀⠀⠀⠀⠀⠛⣤⠀⠀⠙⣆⠈⣇⢿⣷⎥
⎢⣿⡎⠉⠀⠀⠀⠹⡄⠀⠀⠀⠀⠀⠘⢦⡀⠈⠓⠦⣄⡀⠀⠀⠀⠉⢧⠈⠳⢤⡀⠀⠀⠑⢄⠀⠈⣗⢮⣌⠉⎥
⎢⢹⢳⠀⠀⠀⠀⠀⠹⡄⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠉⠙⠲⢤⣀⠈⢧⠀⠀⠉⠳⢤⡀⠀⠛⢄⠘⢮⢿⣷⎥
⎢⣷⠘⡆⠀⠀⠀⠀⠀⠹⡄⠀⠀⠀⠀⠀⠀⠈⠳⣌⠉⣓⡒⠦⢤⣀⠀⠈⢳⡉⣓⠦⢤⡹⣝⡲⣄⣑⣼⢿⣦⎥
⎣⣿⠀⢳⠀⠀⠀⠀⠀⠀⠹⡄⠀⠀⠀⠀⠀⠀⠀⠈⠙⠲⠿⢿⣿⣶⠀⠀⠀⠙⠺⢿⣷⡆⠙⢿⣷⠻⣷⣿⣿⎦
We could directly apply selinv to B. But if we have access to a Cholesky factorization of B, we can pass that to selinv instead, which is faster because selinv would otherwise compute a Cholesky factorization internally anyway.
So just to prove a point, let's first compute a Cholesky factorization:
using LinearAlgebra
C = cholesky(B)
SparseArrays.CHOLMOD.Factor{Float64, Int64}
type: LLt
method: supernodal
maxnnz: 0
nnz: 41962442
success: true
As we can see, this is a supernodal Cholesky factorization. Supernodal factorizations chunk contiguous columns with an identical sparsity pattern. Computations may then leverage BLAS for these chunks, which can speed things up quite a lot.
SelInv also uses the supernodal structure internally. As a result, the type of the returned Z is now different. Let's compute the selected inverse.
Z, p = selinv(C; depermute=true)
Z
525825×525825 SupernodalMatrix:
43.4286 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0
0.0 21.7203 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 43.4276 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 43.4286 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 13.1161 16.7205
0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
⋮ ⋱
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 21.7203 14.4534 0.0 0.0
0.0 0.0 0.0 0.0 14.4534 21.7203 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 21.7203 14.4534
0.0 0.0 0.0 0.0 0.0 0.0 14.4534 21.7203
Z is now a SupernodalMatrix, which is a custom type defined in SelectedInversion.jl. It's a subtype of AbstractMatrix, so you can index into it as you would expect. Let's check the value of some arbitrary entry.
e5 = zeros(size(B, 2))
e5[5] = 1.
(B \ e5)[end], Z[end, 5]
(16.720493403782914, 16.720493403782914)
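The check above works because solving against a unit vector extracts one column of the inverse: B \ e5 is the fifth column of inv(B), so its last entry should match Z[end, 5]. As a small self-contained illustration of this trick (plain sparse arithmetic, no SelectedInversion.jl involved):

```julia
using LinearAlgebra, SparseArrays

# Solving A * x = e_j yields the j-th column of inv(A), so individual
# entries of the inverse can be validated without forming it densely.
A = sparse([4.0 1.0 0.0; 1.0 4.0 1.0; 0.0 1.0 4.0])  # toy SPD matrix
ej = zeros(3); ej[3] = 1.0
col = A \ ej                      # third column of inv(A)
col[1] ≈ inv(Array(A))[1, 3]      # true
```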
The diagonal might be particularly relevant to some applications:
diag(Z)
525825-element Vector{Float64}:
43.428594188883665
21.720312275144835
43.42757082606358
43.42859418888121
21.720312275144153
43.42757082606434
21.808399205659505
11.017148901424683
21.89769199236264
10.954514208803069
⋮
21.720464508486046
21.72043068182586
21.720401136624524
21.72037585161697
21.720354808609045
21.720337992459942
21.72032539107257
21.720316995382166
21.720312799349735
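One common use of this diagonal (an illustration added here, not something specific to this tutorial's matrices): if the matrix is the precision matrix of a Gaussian distribution, the diagonal of its inverse holds the marginal variances. A toy dense example:

```julia
using LinearAlgebra

# Toy example: for a Gaussian with precision matrix Q, the marginal
# variances are the diagonal of inv(Q) -- exactly what diag(Z) delivers
# without ever forming the dense inverse.
Q = [2.0 -1.0; -1.0 2.0]
diag(inv(Q))   # ≈ [2/3, 2/3]
```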
It's also possible to convert Z into a sparse matrix. But this is fairly slow and eats up a lot of memory. As such, it should be avoided unless it's truly necessary.
using SparseArrays
sparse(Z)
525825×525825 SparseMatrixCSC{Float64, Int64} with 83399059 stored entries:
⎡⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎤
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣗⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣟⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣟⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣟⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣯⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⎥
⎢⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⎥
⎣⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢿⢿⣿⢿⣿⣿⣿⣿⣿⣿⣿⢿⣿⢿⣿⣿⣿⣿⡿⣿⢿⣿⠿⣿⣿⣿⎦
Conclusion
SelectedInversion.jl lets you compute the selected inverse of a sparse symmetric positive definite matrix efficiently. Where applicable, it makes use of supernodal factorizations and thus scales to matrices with more than a million columns.
As of now, it does not support unsymmetric matrices and does not explicitly make use of parallelization along the elimination tree. If you're interested in helping develop these features, feel free to open an issue or a pull request on GitHub.
This page was generated using Literate.jl.