In graph theory, the Laplacian matrix (also known as the harmonic matrix, Laplace matrix, or discrete Laplacian) is a matrix representation of a graph.[1]
The Laplacian matrix is also a discretization of the Laplace operator: in other words, the scaling limit of the Laplacian matrix is the Laplace operator. It has many applications in machine learning and physics.
Definition
If G is a simple graph with n vertices, A is its adjacency matrix, and D is its degree matrix, then the Laplacian matrix is[1]
{\displaystyle L=D-A}
{\displaystyle L_{i,j}:={\begin{cases}\deg(v_{i})&{\mbox{if}}\ i=j\\-1&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}\end{cases}}}
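As a quick illustration of the definition, here is a minimal MATLAB sketch that builds the Laplacian of a 3-vertex path graph; the graph and the variable names A, D, L are illustrative choices, not taken from the article.

% Laplacian of the path graph 1-2-3 (illustrative example)
A = [0 1 0;
     1 0 1;
     0 1 0];           % adjacency matrix
D = diag(sum(A, 2));    % degree matrix: vertex degrees on the diagonal
L = D - A               % Laplacian; here L = [1 -1 0; -1 2 -1; 0 -1 1]

Every row of L sums to zero, so the all-ones vector is always an eigenvector of L with eigenvalue 0.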
Motivation
How does this relate to the Laplace operator? If f is a function on the vertices of a weighted graph G, then[2]
{\displaystyle E(f)=\sum w(uv)(f(u)-f(v))^{2}/2=f^{t}Lf}
Here w is the edge-weight function, u and v are vertices, and f = (f(1), ..., f(n)) is an n-dimensional vector; the sum runs over ordered pairs of adjacent vertices, which is why the factor 1/2 appears. The functional E(f) above is also called the Dirichlet functional (Dirichlet energy).[3]
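The identity E(f) = f^t L f can be checked numerically. A minimal sketch for the unweighted case (w ≡ 1), reusing A and L from the path-graph sketch above; f is an arbitrary illustrative vector.

% Compare f'*L*f with the sum over edges of (f(u) - f(v))^2.
% Each undirected edge is counted once here, so no factor 1/2 is needed.
f = [3; -1; 2];                    % arbitrary vertex function
[u, v] = find(triu(A));            % list each edge exactly once
E_edges = sum((f(u) - f(v)).^2);   % sum of squared differences over edges
E_quad  = f' * L * f;              % quadratic form f^t L f
% E_edges and E_quad both equal 25 for this f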
Incidence matrix
Moreover, if K is the incidence matrix, with one row for each edge e = (i, j) (each edge given an arbitrary orientation) and one column for each vertex v, then[2]
{\displaystyle K_{ev}=\left\{{\begin{array}{rl}1,&{\text{if }}v=i\\-1,&{\text{if }}v=j\\0,&{\text{otherwise}}.\end{array}}\right.}
{\displaystyle L=K^{t}K}
Kf is the graph gradient of f. Moreover, writing M for the incidence matrix K, each eigenvalue λ_i of L with unit eigenvector v_i satisfies
{\displaystyle {\begin{aligned}\lambda _{i}&=\mathbf {v} _{i}^{\textsf {T}}L\mathbf {v} _{i}\\&=\mathbf {v} _{i}^{\textsf {T}}M^{\textsf {T}}M\mathbf {v} _{i}\\&=\left(M\mathbf {v} _{i}\right)^{\textsf {T}}\left(M\mathbf {v} _{i}\right)\\\end{aligned}}}
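Because each eigenvalue λ_i equals the squared norm of K v_i, every eigenvalue of L is non-negative. The identity L = K^t K itself can be checked with a short sketch, again on the path graph from above; the orientation assigned to each edge is arbitrary and does not affect the product.

% Build an oriented incidence matrix K (rows = edges, columns = vertices)
% and verify that K'*K equals the Laplacian L from the sketch above.
[u, v] = find(triu(A));      % edge list, each edge once
m = numel(u);                % number of edges
n = size(A, 1);              % number of vertices
K = zeros(m, n);
for e = 1:m
    K(e, u(e)) =  1;         % +1 at one endpoint of edge e
    K(e, v(e)) = -1;         % -1 at the other endpoint
end
isequal(K' * K, L)           % logical 1: L = K^t K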
Example
For a labelled, undirected graph on 6 vertices (the graph drawing from the original table is not reproduced here), the degree matrix, adjacency matrix, and Laplacian matrix are as follows.
Degree matrix:
{\textstyle \left({\begin{array}{rrrrrr}2&0&0&0&0&0\\0&3&0&0&0&0\\0&0&2&0&0&0\\0&0&0&3&0&0\\0&0&0&0&3&0\\0&0&0&0&0&1\\\end{array}}\right)}
Adjacency matrix:
{\textstyle \left({\begin{array}{rrrrrr}0&1&0&0&1&0\\1&0&1&0&1&0\\0&1&0&1&0&0\\0&0&1&0&1&1\\1&1&0&1&0&0\\0&0&0&1&0&0\\\end{array}}\right)}
Laplacian matrix:
{\textstyle \left({\begin{array}{rrrrrr}2&-1&0&0&-1&0\\-1&3&-1&0&-1&0\\0&-1&2&-1&0&0\\0&0&-1&3&-1&-1\\-1&-1&0&-1&3&0\\0&0&0&-1&0&1\\\end{array}}\right)}
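The example can be reproduced directly; in the following sketch the variable names A6, D6, L6 are just labels for this check.

% The 6-vertex example: adjacency matrix, degree matrix, Laplacian.
A6 = [0 1 0 0 1 0;
      1 0 1 0 1 0;
      0 1 0 1 0 0;
      0 0 1 0 1 1;
      1 1 0 1 0 0;
      0 0 0 1 0 0];
D6 = diag(sum(A6, 2));   % diag(2, 3, 2, 3, 3, 1), the degree matrix above
L6 = D6 - A6;            % matches the Laplacian matrix above
min(eig(L6))             % approximately 0: the graph is connected, so the
                         % smallest eigenvalue is exactly 0 (up to rounding)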
Other forms
Symmetric normalized Laplacian
{\displaystyle L^{\text{sym}}:=D^{-{\frac {1}{2}}}LD^{-{\frac {1}{2}}}=I-D^{-{\frac {1}{2}}}AD^{-{\frac {1}{2}}}}
{\displaystyle L_{i,j}^{\text{sym}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\sqrt {\deg(v_{i})\deg(v_{j})}}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}
Note that for an eigenvalue λ of L^sym with eigenvector g, writing g = D^{1/2}f,[4]
{\displaystyle \lambda \ =\ {\frac {\langle g,L^{\text{sym}}g\rangle }{\langle g,g\rangle }}\ =\ {\frac {\left\langle g,D^{-{\frac {1}{2}}}LD^{-{\frac {1}{2}}}g\right\rangle }{\langle g,g\rangle }}\ =\ {\frac {\langle f,Lf\rangle }{\left\langle D^{\frac {1}{2}}f,D^{\frac {1}{2}}f\right\rangle }}\ =\ {\frac {\sum _{u\sim v}(f(u)-f(v))^{2}}{\sum _{v}f(v)^{2}d_{v}}}\ \geq \ 0,}
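A minimal sketch of the symmetric normalized Laplacian for the 6-vertex example above; it assumes no isolated vertices (every degree is positive), as the definition requires.

% L_sym = D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}
Dm12 = diag(1 ./ sqrt(diag(D6)));   % D^{-1/2}; valid because all degrees > 0
Lsym = Dm12 * L6 * Dm12;
Lsym2 = eye(6) - Dm12 * A6 * Dm12;  % equivalent form
norm(Lsym - Lsym2)                  % approximately 0
eig(Lsym)                           % non-negative, as the note above shows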
Random-walk normalized Laplacian
{\displaystyle L^{\text{rw}}:=D^{-1}L=I-D^{-1}A}
{\displaystyle L_{i,j}^{\text{rw}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\deg(v_{i})}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}
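A corresponding sketch for the random-walk normalized form, under the same assumption of no isolated vertices.

% L_rw = D^{-1} L = I - D^{-1} A
Lrw  = D6 \ L6;              % D^{-1} L, computed by solving D6 * Lrw = L6
Lrw2 = eye(6) - D6 \ A6;     % equivalent form
norm(Lrw - Lrw2)             % approximately 0
% L_rw = D^{-1/2} L_sym D^{1/2}, so L_rw and L_sym have the same eigenvalues.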
For example, a discrete version of Newton's law of cooling can be written with the Laplacian matrix. If φ_i is the temperature at vertex v_i, A is the adjacency matrix, and k is a constant of proportionality, then[5]
{\displaystyle {\begin{aligned}{\frac {d\phi _{i}}{dt}}&=-k\sum _{j}A_{ij}\left(\phi _{i}-\phi _{j}\right)\\&=-k\left(\phi _{i}\sum _{j}A_{ij}-\sum _{j}A_{ij}\phi _{j}\right)\\&=-k\left(\phi _{i}\ \deg(v_{i})-\sum _{j}A_{ij}\phi _{j}\right)\\&=-k\sum _{j}\left(\delta _{ij}\ \deg(v_{i})-A_{ij}\right)\phi _{j}\\&=-k\sum _{j}\left(\ell _{ij}\right)\phi _{j}.\end{aligned}}}
In matrix-vector notation this becomes
{\displaystyle {\begin{aligned}{\frac {d\phi }{dt}}&=-k(D-A)\phi \\&=-kL\phi \end{aligned}}}
{\displaystyle {\frac {d\phi }{dt}}+kL\phi =0}
Expanding φ in terms of the eigenvectors v_i of L (with eigenvalues λ_i), so that φ = Σ_i c_i v_i, the solution is obtained from
{\displaystyle {\begin{aligned}{\frac {d\left(\sum _{i}c_{i}\mathbf {v} _{i}\right)}{dt}}+kL\left(\sum _{i}c_{i}\mathbf {v} _{i}\right)&=0\\\sum _{i}\left[{\frac {dc_{i}}{dt}}\mathbf {v} _{i}+kc_{i}L\mathbf {v} _{i}\right]&=0\\\sum _{i}\left[{\frac {dc_{i}}{dt}}\mathbf {v} _{i}+kc_{i}\lambda _{i}\mathbf {v} _{i}\right]&=0\\{\frac {dc_{i}}{dt}}+k\lambda _{i}c_{i}&=0\\\end{aligned}}}
{\displaystyle c_{i}(t)=c_{i}(0)e^{-k\lambda _{i}t}}
where
{\displaystyle c_{i}(0)=\left\langle \phi (0),\mathbf {v} _{i}\right\rangle }
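Equivalently, the solution can be written with the matrix exponential, φ(t) = e^{-kLt} φ(0). The sketch below compares that form with the eigenvector expansion derived above, using the 6-vertex example; the values of k and phi0 are illustrative.

% Solve d(phi)/dt = -k*L*phi two ways and compare.
k    = 1.0;                        % illustrative constant
phi0 = [5; 0; 0; 0; 0; 10];        % illustrative initial condition
t    = 0.5;

phi_a = expm(-k * L6 * t) * phi0;  % matrix-exponential solution

[V, E] = eig(L6);                  % L6 is symmetric, so V is orthogonal
c0    = V' * phi0;                 % c_i(0) = <phi(0), v_i>
phi_b = V * (c0 .* exp(-k * diag(E) * t));   % sum_i c_i(0) e^{-k*lambda_i*t} v_i

norm(phi_a - phi_b)                % approximately 0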
Equilibrium behavior
As t → ∞, each exponential factor either decays to zero or persists:
{\displaystyle \lim _{t\to \infty }e^{-k\lambda _{i}t}=\left\{{\begin{array}{rlr}0&{\text{if}}&\lambda _{i}>0\\1&{\text{if}}&\lambda _{i}=0\end{array}}\right\}}
Only the components with λ_i = 0 survive. For a connected graph the eigenvalue 0 is simple, so
{\displaystyle \lim _{t\to \infty }\phi (t)=\left\langle \phi (0),\mathbf {v^{1}} \right\rangle \mathbf {v^{1}} }
where the unit eigenvector for eigenvalue 0 is
{\displaystyle \mathbf {v^{1}} ={\frac {1}{\sqrt {N}}}[1,1,...,1]}
It follows that every vertex approaches the average of the initial values:
{\displaystyle \lim _{t\to \infty }\phi _{j}(t)={\frac {1}{N}}\sum _{i=1}^{N}\phi _{i}(0)}
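As a check of this equilibrium behavior, for the connected 6-vertex example the long-time solution should approach the average of the initial values at every vertex; a sketch, reusing k, phi0, and L6 from the earlier sketch.

% For large t only the lambda = 0 mode survives.
phi_late = expm(-k * L6 * 100) * phi0;   % t = 100 is effectively t -> infinity here
mean(phi0)                               % 2.5 for the illustrative phi0 above
phi_late'                                % every entry is approximately 2.5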
MATLAB code
N = 20;                     % The number of pixels along a dimension of the image
A = zeros(N, N);            % The image
Adj = zeros(N*N, N*N);      % The adjacency matrix

% Use 8 neighbors, and fill in the adjacency matrix
dx = [-1, 0, 1, -1, 1, -1, 0, 1];
dy = [-1, -1, -1, 0, 0, 1, 1, 1];
for x = 1:N
    for y = 1:N
        index = (x - 1)*N + y;
        for ne = 1:length(dx)
            newx = x + dx(ne);
            newy = y + dy(ne);
            if newx > 0 && newx <= N && newy > 0 && newy <= N
                index2 = (newx - 1)*N + newy;
                Adj(index, index2) = 1;
            end
        end
    end
end

%%% BELOW IS THE KEY CODE THAT COMPUTES THE SOLUTION TO THE DIFFERENTIAL EQUATION
Deg = diag(sum(Adj, 2));    % Compute the degree matrix
L = Deg - Adj;              % Compute the Laplacian matrix from the degree and adjacency matrices
[V, D] = eig(L);            % Compute the eigenvalues/vectors of the Laplacian matrix
D = diag(D);

% Initial condition (place a few large positive values around and
% make everything else zero)
C0 = zeros(N, N);
C0(2:5, 2:5) = 5;
C0(10:15, 10:15) = 10;
C0(2:5, 8:13) = 7;
C0 = C0(:);

C0V = V' * C0;              % Transform the initial condition into the coordinate
                            % system of the eigenvectors
for t = 0:0.05:5
    % Loop through times and decay each initial component
    Phi = C0V .* exp(-D*t); % Exponential decay for each component
    Phi = V * Phi;          % Transform back from the eigenvector coordinate system
    Phi = reshape(Phi, N, N);
    % Display the results and write to GIF file
    imagesc(Phi);
    caxis([0, 10]);
    title(sprintf('Diffusion t = %3f', t));
    frame = getframe(1);
    im = frame2im(frame);
    [imind, cm] = rgb2ind(im, 256);
    if t == 0
        imwrite(imind, cm, 'out.gif', 'gif', 'Loopcount', inf, 'DelayTime', 0.1);
    else
        imwrite(imind, cm, 'out.gif', 'gif', 'WriteMode', 'append', 'DelayTime', 0.1);
    end
end
GIF: the resulting discrete diffusion process, computed with the Laplacian matrix.
References
^ 1.0 1.1 Weisstein, Eric W. (ed.). Laplacian Matrix. At MathWorld - A Wolfram Web Resource. Wolfram Research, Inc. Retrieved 2020-02-14. Archived from the original on 2019-12-23.
^ 2.0 2.1 Muni Sreenivas Pydi (ముని శ్రీనివాస్ పైడి). Answer to "What's the intuition behind a Laplacian matrix?". www.quora.com. Retrieved 2020-02-14.
^ 3.0 3.1 Shuman, David I.; Narang, Sunil K.; Frossard, Pascal; Ortega, Antonio; Vandergheynst, Pierre. The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and Other Irregular Domains. IEEE Signal Processing Magazine. May 2013, 30 (3): 83–98. ISSN 1053-5888. doi:10.1109/MSP.2012.2235192. Retrieved 2020-02-14. Archived from the original on 2020-01-11.
^ Chung, Fan. Spectral Graph Theory. American Mathematical Society. 1997 [1992]. ISBN 978-0821803158. Retrieved 2020-02-14. Archived from the original on 2020-02-14.
^ Newman, Mark. Networks: An Introduction. Oxford University Press. 2010. ISBN 978-0199206650.
Further reading
T. Sunada. Chapter 1. Analysis on combinatorial graphs. Discrete geometric analysis. In P. Exner, J. P. Keating, P. Kuchment, T. Sunada, A. Teplyaev (eds.), Proceedings of Symposia in Pure Mathematics 77. 2008: 51–86. ISBN 978-0-8218-4471-7.
B. Bollobás, Modern Graph Theory , Springer-Verlag (1998, corrected ed. 2013), ISBN 0-387-98488-7 , Chapters II.3 (Vector Spaces and Matrices Associated with Graphs), VIII.2 (The Adjacency Matrix and the Laplacian), IX.2 (Electrical Networks and Random Walks).