# Coco notes — Algebraic graph calculus

*by Phil Tadros*

$\newcommand{\1}{\mathbf{1}}$

$\newcommand{\R}{\mathbf{R}}$

We describe a graph-theoretic analogue of vector calculus. The linear operators of vector calculus (gradient, divergence, Laplacian) correspond to matrices naturally associated to graphs (incidence matrix, adjacency matrix). This analogy is useful for formalizing the discretization of some problems in image and surface processing that are usually stated in a continuous setting.

## 1. Reminder of vector calculus

Vector calculus deals with functions and vector fields defined in $\R^3$.

### 1.1. Functions and vector fields

A *function* (or *scalar field*) is a map $u:\R^3\to\R$.

A *vector field* is a map $\mathbf{v}:\R^3\to\R^3$.

Vector fields are written in bold.

Let us fix some usual names for the coordinates.

The coordinates of a point in $\R^3$ are written as $(x,y,z)$.

If $\mathbf{v}$ is a vector field, then $\mathbf{v}=(a,b,c)$, where $a$, $b$ and $c$ are three scalar fields called the components of $\mathbf{v}$.

We denote the partial derivatives of a function by subindices; for example $a_y:=\frac{\partial a}{\partial y}$.

### 1.2. Differential operators

The *gradient* of a function $u$ is the vector field $\nabla u$ defined by

\[
\nabla u = \left(
u_x , u_y , u_z
\right)
\]

The *divergence* of a vector field $\mathbf{u}=(a,b,c)$ is the scalar field $\mathrm{div}(\mathbf{u})$ defined by

\[
\mathrm{div}(\mathbf{u}) =
a_x + b_y + c_z
\]

The *curl* of a vector field $\mathbf{u}=(a,b,c)$ is another vector field $\mathrm{curl}(\mathbf{u})$ defined by

\[
\mathrm{curl}(\mathbf{u}) =
\left(
c_y - b_z ,\ a_z - c_x ,\ b_x - a_y
\right)
\]

Finally, the *Laplacian* of a scalar field $u$ is the scalar field $\Delta u$ defined by

\[
\Delta u = u_{xx} + u_{yy} + u_{zz}.
\]

Notice that, except for the curl, all these operations can be defined in $\R^N$. The curl, however, is specific to three dimensions. There is a similar operator in two dimensions, which we also call the curl; it computes a scalar field $\mathrm{curl}(\mathbf{u})$ from a vector field $\mathbf{u}=(a,b):\R^2\to\R^2$:

\[
\mathrm{curl}(\mathbf{u}) = b_x - a_y
\]

Notice that it is the last component of the 3D curl.

The curl can also be defined in dimension 7. Let $\mathbf{u}=(u^1,\ldots,u^7)$ be a vector field in $\R^7$; then

\[
\def\curlco#1#2#3#4#5#6{
{u^{#1}}_{#2}-{u^{#2}}_{#1}+
{u^{#3}}_{#4}-{u^{#4}}_{#3}+
{u^{#5}}_{#6}-{u^{#6}}_{#5}
}
\mathrm{curl}(\mathbf{u}) =
\left(
\begin{matrix}
\curlco{2}{4}{3}{7}{5}{6}\\
\curlco{3}{5}{4}{1}{6}{7}\\
\curlco{4}{6}{5}{2}{7}{1}\\
\curlco{5}{7}{6}{3}{1}{2}\\
\curlco{6}{1}{7}{4}{2}{3}\\
\curlco{7}{2}{1}{5}{3}{4}\\
\curlco{1}{3}{2}{6}{4}{5}
\end{matrix}
\right)
\]

where a subindex $i$ denotes a partial derivative in the $i$-th dimension of $\R^7$. Analogously, we can define the 6-dimensional curl by taking the last component (resulting in a scalar field).

### 1.3. Differential identities and properties

The most important identity is $\Delta u = \mathrm{div}(\mathrm{grad}(u))$, which can also be used as the definition of $\Delta$.

Other identities involving the curl are $\mathrm{curl}(\nabla u)=0$ and $\mathrm{div}(\mathrm{curl}(\mathbf{u}))=0$.
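For instance, the first of these identities is just the equality of mixed partial derivatives; spelling out the definitions above componentwise:

\[
\mathrm{curl}(\nabla u)
= \mathrm{curl}(u_x, u_y, u_z)
= \left( u_{zy} - u_{yz} ,\ u_{xz} - u_{zx} ,\ u_{yx} - u_{xy} \right)
= (0,0,0).
\]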

The functions $u$ such that $\nabla u=0$ on $\R^3$ are the constants.

The vector fields $\mathbf{v}$ such that $\mathrm{curl}(\mathbf{v})=0$ are called *conservative*, *irrotational* or *integrable*. They are of the form $\mathbf{v}=\nabla u$ for some function $u$, called the *potential* of $\mathbf{v}$.

The vector fields $\mathbf{v}$ such that $\mathrm{div}(\mathbf{v})=0$ are called *divergence-free*, *volume-preserving*, *solenoidal* or *incompressible*. They are of the form $\mathbf{v}=\mathrm{curl}(\mathbf{u})$ for some vector field $\mathbf{u}$, called the *vector potential* of $\mathbf{v}$.

The scalar fields $u$ such that $\Delta u=0$ are called *harmonic functions*.

The following identities are immediate applications of the product rule for derivatives:

\[ \nabla(fg) = f\nabla g + g\nabla f \]

\[ \mathrm{div}(f\mathbf{g}) = f\,\mathrm{div}(\mathbf{g}) + \mathbf{g}\cdot\nabla f \]

### 1.4. Integral calculus

The divergence theorem:

\[
\int_\Omega \mathrm{div}(\mathbf{g}) =
\int_{\partial\Omega}\mathbf{g}\cdot\mathbf{ds}
\]

Combining the divergence theorem with the product rule, we obtain the integration by parts formula:

\[
\int_{\partial\Omega} f\mathbf{g}\cdot\mathbf{ds} =
\int_\Omega
f\,\mathrm{div}(\mathbf{g})
+
\int_\Omega
\mathbf{g}\cdot\nabla f
\]

Thus, if at least one of the two functions vanishes on the boundary of $\Omega$,

\[
0=
\int_\Omega
f\,\mathrm{div}(\mathbf{g})
+
\int_\Omega
\mathbf{g}\cdot\nabla f
\]
or, in another notation,

\[
\left\langle
f, \mathrm{div}(\mathbf{g})
\right\rangle
=
\left\langle
-\nabla f, \mathbf{g}
\right\rangle
\]

so that the operators $\mathrm{div}$ and $-\nabla$ are adjoint to each other. Integrating by parts twice, we obtain that the operator $\Delta$ is self-adjoint.

## 2. Graphs and their matrices

A *graph* is a pair $G=(V,E)$, where $V$ is a set called the *vertices* of $G$, and $E$ is a subset of $V\times V$ called the *edges* of $G$.

We always assume that the set $V$ is finite, and that its elements are numbered from $1$ to $n$. Thus, the set $E$ is also finite (its cardinal is at most $n^2$) and we assume that the elements of $E$ are numbered from $1$ to $m$.

$\displaystyle\begin{matrix}
V = \{1,2,3,4,5,6\}\\
E = \{ \{1,2\},\{1,3\},\{2,4\},\{3,4\},\{4,5\},\{5,6\},\{4,6\} \}
\end{matrix}$

### 2.1. The adjacency list

Given a graph of $n$ vertices and $m$ edges, the adjacency list is a matrix of $m$ rows and $2$ columns that contains the pairs of vertices connected by each edge. The entries of this matrix are integers in the set $\{1,\ldots,n\}$. Thus, if the $k$-th row is $(i,j)$, this means that edge $k$ connects vertex $i$ to vertex $j$.

$
\textrm{adjacency list} =
\begin{pmatrix}
1 & 2 \\
1 & 3 \\
2 & 4 \\
3 & 4 \\
4 & 5 \\
5 & 6 \\
4 & 6
\end{pmatrix}
$

The adjacency list is a very efficient representation for sparse graphs (where the number of edges is proportional to the number of vertices). However, it is not very interesting from the algebraic point of view. In the following we will see three other matrices that have a very rich algebraic interpretation.

### 2.2. The adjacency matrix $A$

Given a graph of $n$ vertices and $m$ edges, the adjacency matrix is a square matrix $A=a_{ij}$ of size $n\times n$. The entries of $A$ are zeros and ones, with $a_{ij}=1$ if there is an edge from $i$ to $j$ and $a_{ij}=0$ otherwise.

$
A =
\begin{array}{lllllll}
V\backslash V
& 1 & 2 & 3 & 4 & 5 & 6 \\
\hline
1 & 0 & 1 & 1 & 0 & 0 & 0 \\
2 & 1 & 0 & 0 & 1 & 0 & 0 \\
3 & 1 & 0 & 0 & 1 & 0 & 0 \\
4 & 0 & 1 & 1 & 0 & 1 & 1 \\
5 & 0 & 0 & 0 & 1 & 0 & 1 \\
6 & 0 & 0 & 0 & 1 & 1 & 0
\end{array}
$

Notice that this matrix has somewhat less information than the adjacency list, because the ordering of the edges is lost. Thus, there is a unique way to compute the adjacency matrix from the list, but $m!$ different ways to get the list from the matrix. We can choose an arbitrary canonical ordering of the edges (for example, the lexicographic order).
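As a sanity check, the adjacency matrix of the example graph can be computed from its adjacency list in a few lines. This is a minimal sketch in plain Python (an assumption of this sketch: vertices are renumbered $0,\ldots,5$ instead of the $1,\ldots,6$ used in the text):

```python
# Adjacency list of the example graph (0-based: vertex k of the text is k-1 here)
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6

# Adjacency matrix: a_ij = 1 iff there is an edge between i and j;
# the example graph is undirected, so we set both a_ij and a_ji
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = 1
    A[j][i] = 1

print([sum(row) for row in A])  # the row sums are the vertex degrees
# → [2, 2, 2, 4, 2, 2]
```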

### 2.3. The Laplacian matrix $L$

Let $A$ be the adjacency matrix of a graph $G$. If we sum the values of all the elements of the $i$-th row, we obtain the number of edges going out of vertex $i$ (called the degree of the vertex). Let us put the vector with all the degrees in the diagonal of a matrix $D$; in octave/matlab notation, $\mathtt{D=diag(sum(A))}$.

The Laplacian matrix of $G$ is defined as

\[
L = A - \mathtt{diag}(\mathtt{sum}(A))
\]

In the typical case where $A$ is symmetric with zeros on the diagonal, the matrix $L$ is the same as $A$, but with minus the degree of each vertex on the diagonal entries.

$
L =
\begin{array}{lllllll}
V\backslash V
& 1 & 2 & 3 & 4 & 5 & 6 \\
\hline
1 &-2 & 1 & 1 & 0 & 0 & 0 \\
2 & 1 &-2 & 0 & 1 & 0 & 0 \\
3 & 1 & 0 &-2 & 1 & 0 & 0 \\
4 & 0 & 1 & 1 &-4 & 1 & 1 \\
5 & 0 & 0 & 0 & 1 &-2 & 1 \\
6 & 0 & 0 & 0 & 1 & 1 &-2
\end{array}
$
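In octave/matlab the construction above is literally $\mathtt{L=A-diag(sum(A))}$; the same computation can be sketched in plain Python on the example graph (0-based indices assumed):

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1

# L = A - diag(sum(A)): subtract each vertex's degree on the diagonal
degree = [sum(row) for row in A]
L = [[A[i][j] - (degree[i] if i == j else 0) for j in range(n)]
     for i in range(n)]

print(L[3])  # row of vertex 4 in the text's numbering
# → [0, 1, 1, -4, 1, 1]
```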

### 2.4. The incidence matrix $B$

Given a graph of $n$ vertices and $m$ edges, the incidence matrix is a rectangular matrix $B=b_{ij}$ of $m$ rows and $n$ columns. The entries of $B$ are zeros, ones and minus ones, given by the edges of the graph: if the $k$-th edge goes from $i$ to $j$, then the $k$-th row has the values $-1$ and $1$ at positions $i$ and $j$ respectively, and zeros everywhere else.

$
B =
\begin{array}{lllllll}
E\backslash V
& 1 & 2 & 3 & 4 & 5 & 6 \\
\hline
1 &-1 & 1 & 0 & 0 & 0 & 0 \\
2 &-1 & 0 & 1 & 0 & 0 & 0 \\
3 & 0 &-1 & 0 & 1 & 0 & 0 \\
4 & 0 & 0 &-1 & 1 & 0 & 0 \\
5 & 0 & 0 & 0 &-1 & 1 & 0 \\
6 & 0 & 0 & 0 & 0 &-1 & 1 \\
7 & 0 & 0 & 0 &-1 & 0 & 1
\end{array}
$

Notice that the incidence matrix contains the same information as the adjacency list (including the order of the edges).

There is an interesting relationship between the incidence matrix and the Laplacian matrix, which can be checked algebraically:

\[
L = -B^TB
\]

This identity is the discrete analogue of $\Delta=\mathrm{div\ grad}$, as we will explain below.
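The identity $L=-B^TB$ can also be checked numerically on the example graph; a minimal sketch in plain Python (0-based indices, edges oriented as in the adjacency list):

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n, m = 6, len(edges)

# Incidence matrix: row k has -1 at the source and +1 at the target of edge k
B = [[0] * n for _ in range(m)]
for k, (i, j) in enumerate(edges):
    B[k][i], B[k][j] = -1, 1

# Laplacian via the adjacency matrix: L = A - diag(sum(A))
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1
deg = [sum(row) for row in A]
L = [[A[i][j] - (deg[i] if i == j else 0) for j in range(n)] for i in range(n)]

# -B^T B, computed entrywise
minusBtB = [[-sum(B[k][i] * B[k][j] for k in range(m)) for j in range(n)]
            for i in range(n)]

print(minusBtB == L)  # → True
```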

### 2.5. The unsigned incidence matrix $C$

The incidence matrix $B$ defined above is signed: on each row there are two non-zero entries whose values are $-1$ and $1$, so the sum of each row is zero.

We can write the matrix $B$ as $B=B_1-B_0$, where the matrices $B_0$ and $B_1$ have only zeros and ones, with a single non-zero entry per row.

It will be useful later to consider the *unsigned incidence matrix* $C$, defined as $C=\frac{1}{2}(B_0 + B_1)$ or, equivalently, $C=\frac{1}{2}|B|$. The rows of the matrix $C$ sum to one.

The following relations are immediate to verify:

\[
A = 2C^TC-B^TB/2
\]

\[
\mathrm{deg} = 2C^TC+B^TB/2
\]

where $\mathrm{deg}$ is the $n\times n$ diagonal matrix whose values are the degrees of each vertex.
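Both relations can be verified numerically on the example graph; a sketch in plain Python (0-based indices; entries become floats since $C$ has entries $\frac{1}{2}$):

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n, m = 6, len(edges)

B = [[0] * n for _ in range(m)]
for k, (i, j) in enumerate(edges):
    B[k][i], B[k][j] = -1, 1
C = [[abs(B[k][a]) / 2 for a in range(n)] for k in range(m)]  # C = |B| / 2

def gram(M):
    # M^T M for an m x n matrix M, returned as an n x n list of lists
    return [[sum(M[k][i] * M[k][j] for k in range(m)) for j in range(n)]
            for i in range(n)]

CtC, BtB = gram(C), gram(B)
A = [[2 * CtC[i][j] - BtB[i][j] / 2 for j in range(n)] for i in range(n)]
deg = [[2 * CtC[i][j] + BtB[i][j] / 2 for j in range(n)] for i in range(n)]

print(A[3])                           # → [0.0, 1.0, 1.0, 0.0, 1.0, 1.0]
print([deg[a][a] for a in range(n)])  # → [2.0, 2.0, 2.0, 4.0, 2.0, 2.0]
```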

## 3. Vector calculus on graphs

Most of the constructions that we have described in the vector calculus reminder above have a direct correspondence in the case of graphs.

### 3.1. Analogies

The correspondence between vector calculus and graph theory is laid out in the following table. The main idea is that scalar fields correspond to functions defined on vertices, and vector fields correspond to functions defined on edges.

| Vector calculus | Graph theory |
|---|---|
| Base space | Graph vertices $V$ |
| Tangent space | Graph edges $E$ |
| $u:\Omega\to\R$ | $u:V\to\R$ |
| $\mathbf{v}:\Omega\to\R^3$ | $\mathbf{v}:E\to\R$ |
| Laplacian operator $\Delta$ | Laplacian matrix $L\in\mathcal{M}_{n,n}(\R)$ |
| gradient operator $\nabla$ | incidence matrix $B\in\mathcal{M}_{m,n}(\R)$ |
| divergence operator $\mathrm{div}$ | matrix $-B^T\in\mathcal{M}_{n,m}(\R)$ |
| $\Delta=\mathrm{div\ grad}$ | $L=-B^TB$ |
| scalar field $u$ | $u\in\R^n$ |
| vector field $\mathbf{v}$ | $\mathbf{v}\in\R^m$ |
| vector field $\nabla u$ | $Bu\in\R^m$ |
| scalar field $\Delta u$ | $Lu\in\R^n$ |
| scalar field $\mathrm{div}(\mathbf{v})$ | $-B^T\mathbf{v}\in\R^n$ |
| directional derivative $\nabla u(a)\cdot(b-a)$ | $\nabla u(a,b)$ |
| $\Omega\subseteq\R^3$ | $\Omega\subseteq V$ |
| $\partial\Omega\subseteq\R^3$ | $\partial\Omega\subseteq E$, defined as $\partial\Omega=E\cap(\Omega\times\Omega^c)$ |
| $\displaystyle\int_\Omega\mathrm{div}(\mathbf{v}) = \int_{\partial\Omega}\mathbf{v}\cdot\mathbf{ds}$ | $\displaystyle\sum_{a\in\Omega}\mathrm{div}(\mathbf{v})(a) = \sum_{e\in\partial\Omega}\mathbf{v}(e)$ |
| Elliptic PDE $\Delta u = f$ | Linear system $Lu=f$ |
| Parabolic PDE $u_t = \Delta u$ | First-order linear ODE system $u_t=Lu$ |
| $\textrm{div}(D\nabla u),\qquad D:\Omega\to\mathcal{M}_{3,3}(\R)$ | $-B^TDBu,\qquad D\in\mathcal{M}_{m,m}$ |
| $g\Delta u,\qquad g:\Omega\to\R$ | $GLu,\qquad G\in\mathcal{M}_{n,n}$ |
| pointwise product $uv$ | Hadamard product $f\odot g$ |
| pointwise product $u\mathbf{v}$ | Hadamard product $Cf\odot g$ |
| $\nabla(fg)=f\nabla g + g\nabla f$ | $B(f\odot g)=Cf\odot Bg + Cg\odot Bf$ |
| (nothing) | unsigned incidence matrix $C\in\mathcal{M}_{m,n}(\R)$ |

The $\mathrm{curl}$ operator cannot be defined on general graphs, but it can be defined on *planar* graphs, where it satisfies similar identities.

### 3.2. The graph Laplacian

The simplest operator of vector calculus is the Laplacian, which transforms scalar fields into scalar fields. It is the simplest because no vector fields are involved, only scalar fields.

Correspondingly, the simplest operator for graphs is also the Laplacian, which transforms functions defined on vertices into functions defined on vertices. It is the simplest because no functions defined on edges are involved. Once we have chosen an ordering of the vertices, a scalar field is simply a vector $u\in\R^n$, and the Laplacian operator is defined by a square matrix of size $n\times n$.

Let $G=(V,E)$ be a graph and $u:V\to\R$ a scalar field. The **Laplacian** of $u$, denoted $\Delta u$, is the scalar field $\Delta u:V\to\R$ defined by

\[
\Delta u(a) := \sum_{(a,b)\in E} u(b)-u(a)
\]

Notice that the sum is performed for a fixed vertex $a$, while $b$ runs through all the neighbors of $a$ in the graph.

*(Figure: the scalar field $u$ and the scalar field $\Delta u$.)*

Directly from the definition, we can deduce several properties of the Laplacian:

- The sum of all the values of $\Delta u$ is always zero
- If $u(a)$ is a local maximum, then $\Delta u(a)<0$
- If $u(a)$ is a local minimum, then $\Delta u(a)>0$
- If $u$ is constant, then $\Delta u$ is zero

If we fix an ordering of the vertices, then the scalar fields $u$ and $\Delta u$ are two vectors in $\R^n$, and the linear map $u\mapsto\Delta u$ is given by the matrix $L=A-\mathtt{diag}(\mathtt{sum}(A))$. This follows directly by splitting the definition of $\Delta$ into two sums:

\[
\Delta u(a)
=
\sum_{(a,b)\in E}
u(b)
-
\sum_{(a,b)\in E}
u(a)
=
-
u(a)\,\mathrm{degree}(a)
+\sum_{(a,b)\in E}
u(b)
\]

The Laplacian has a nice interpretation. If we regard the values of $u$ as a quantity distributed on the vertices of the graph, the condition $\Delta u = 0$ says that the quantity is distributed evenly, or in equilibrium: the amount of quantity at each vertex equals the average amount over its neighbours. In particular, if $u$ is constant then $\Delta u = 0$.

Notice that the matrix $L$ is always singular: a constant vector is an eigenvector of eigenvalue 0. If the graph has $k$ connected components, then there are null vectors which are constant on each connected component, and the matrix $L$ has rank $n-k$.
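The listed properties are easy to observe numerically by applying the definition of $\Delta u$ directly; a minimal sketch in plain Python on the example graph (0-based indices; the graph is treated as undirected, so neighbors are collected from both edge orientations):

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
neighbors = [[] for _ in range(n)]
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

def laplacian(u):
    # Delta u(a) = sum over the neighbors b of a of (u(b) - u(a))
    return [sum(u[b] - u[a] for b in neighbors[a]) for a in range(n)]

u = [0.0, 1.0, 2.0, 5.0, 3.0, 4.0]
Lu = laplacian(u)

assert abs(sum(Lu)) < 1e-12               # the values of Delta u sum to zero
assert laplacian([7.0] * n) == [0.0] * n  # constant fields have zero Laplacian
assert Lu[3] < 0                          # u has a local maximum at vertex 3
```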

### 3.3. Graph gradient and graph divergence

Recall that scalar fields are functions defined on vertices and vector fields are functions defined on edges. Thus, the gradient transforms a function defined on vertices into a function defined on edges. There is a very natural way of doing so: the value at each edge is the difference between the values at its two endpoints.

*(Figure: the scalar field $u$ and the vector field $\nabla u$.)*

More formally, let $G=(V,E)$ be a graph and $u:V\to\R$ a scalar field. The **gradient** of $u$ is the vector field $\nabla u:E\to\R$ defined by

\[
\nabla u(a,b) := u(b) - u(a)
\qquad \mathrm{for}\ (a,b)\in E
\]

The matrix of this linear map is the incidence matrix $B$ of the graph. Think of the gradient $\nabla u(a,b)$ as the directional derivative of $u$ at point $a$ in the direction of the vector from $a$ to $b$.

Now let $\mathbf{v}:E\to\R$ be a vector field. The **divergence** of $\mathbf{v}$ is the scalar field $\mathrm{div}(\mathbf{v}):V\to\R$ defined by

\[
\mathrm{div}(\mathbf{v})(a)
:=
\sum_{(a,b)\in E}\mathbf{v}(a,b)
-\sum_{(b,a)\in E}\mathbf{v}(b,a)
\qquad \mathrm{for}\ a\in V
\]

The matrix of this linear map is minus the transposed incidence matrix of the graph, $-B^T$.

Notice that the identity $\Delta=\mathrm{div\ grad}$ is trivial from the definitions above, since both sides are exactly $\sum_{(a,b)\in E}u(b)-u(a)$. Thus, $L=-B^TB$.
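A sketch of the graph gradient and divergence in plain Python (0-based indices on the example graph), checking that $\mathrm{div}(\mathrm{grad}(u))$, computed edge by edge, is the Laplacian:

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6

def grad(u):
    # (grad u)(a, b) = u(b) - u(a) on each edge; as a matrix, this is B u
    return [u[j] - u[i] for (i, j) in edges]

def div(v):
    # outgoing values minus incoming values at each vertex; as a matrix, -B^T v
    d = [0.0] * n
    for k, (i, j) in enumerate(edges):
        d[i] += v[k]  # edge k leaves vertex i
        d[j] -= v[k]  # edge k enters vertex j
    return d

u = [0.0, 1.0, 2.0, 5.0, 3.0, 4.0]
print(div(grad(u)))  # equals Delta u
# → [3.0, 3.0, 1.0, -10.0, 3.0, 0.0]
```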

### 3.4. Graph curl

We do not need curls for our application, but let us say a few words about them.

These graph-theoretic analogues are easier to understand when we use differential geometry instead of plain vector calculus. In that case, the discrete analogue of $k$-forms are functions defined over the $k$-cliques of the graph. The exterior derivative is then readily constructed for all values of $k$, and it contains the gradient, curl and divergence as particular cases. The particularity of three-dimensional manifolds comes from the fact that in that case 1-forms and 2-forms have the same dimension and can both be interpreted as "vector fields"; thus the curl operator is defined from the exterior derivative $d:\Omega^1\to\Omega^2$. In the case of graphs, we cannot in general identify functions defined on edges with functions defined on triangles, except in one particular case: when the graph is a triangulation. In that case, there is a construction that allows one to define the curl of a vector field as a vector field, by traversing the two triangles on either side of an edge. The identity $\mathrm{curl\ grad}=0$ is then the sum of 6 values that cancel pairwise, and so on. See the beautiful papers of Oliver Knill for a comprehensive coverage of this.

### 3.5. Graph subsets and their boundaries

It is often necessary to deal with subsets of graphs (for example, when we want to interpolate a function which is known only over some vertices). In order to do algebra with them, we model subsets as diagonal operators that contain the indicator function of the subset as the diagonal entries. This model is used both for subsets of vertices and for subsets of edges.

**Notations**:

Let $X=\{1,\ldots,n\}$ (or any finite ordered set) and $Y\subseteq X$. Let $a$ be a vector of length $n$ and $A$ a matrix of size $n\times n$. We use the following, somewhat ambiguous, abuses of notation:

$\mathrm{diag}(A)\in\R^n$: the vector with the elements of the diagonal of $A$

$\mathrm{diag}(a)\in\R^{n\times n}$: the diagonal matrix whose diagonal is $a$

$\1_Y\in\R^{n}$: the indicator vector of the subset $Y$

$Y=\mathrm{diag}(\1_Y)\in\R^{n\times n}$: the diagonal operator of $Y$

This last notation is very convenient in image processing, because it represents point-wise multiplication by a binary image as a linear operator (with the same name as the binary image). The $\mathrm{diag}$ operator has the same semantics as that of octave/matlab.

Let $G=(V,E)$ be a graph with $n$ vertices and $m$ edges, and let $\Omega\subseteq V$. To avoid introducing new letters, we also denote by $\Omega=\omega_{ij}$ the $n\times n$ matrix that contains the indicator function of this set in its diagonal: $\omega_{ii}=1$ if $i\in\Omega$ and $\omega_{ii}=0$ otherwise. Notice that the matrix $I-\Omega$ corresponds to the complementary set $\Omega^c$.

We define the **boundary** of a subset of vertices $\Omega\subseteq V$ as the subset of edges $\partial\Omega\subseteq E$ that go from $\Omega$ to $\Omega^c$. Notice that $\partial\Omega=E\cap(\Omega\times\Omega^c)$ in set notation. Since $\partial\Omega$ is a subset of edges, it corresponds to a diagonal matrix, also named $\partial\Omega$, of size $m\times m$.

In matrix notation we have

\[\partial\Omega=\mathrm{diag}(B\,\mathrm{diag}(\Omega))\]

where $B$ is the incidence matrix of the graph. We can also write $\displaystyle\1_{\partial\Omega}=B\1_\Omega$.
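In code, the boundary indicator is convenient to compute from $B\1_\Omega$: with the signed incidence matrix, $B\1_\Omega$ takes the values $\pm1$ exactly on the crossing edges, so taking absolute values recovers the indicator of $\partial\Omega$. A sketch in plain Python (0-based indices; the subset $\Omega$ is chosen arbitrarily for illustration):

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6

omega = {3, 4, 5}  # the subset of vertices {4, 5, 6} in the text's numbering
ind_omega = [1 if a in omega else 0 for a in range(n)]

# (B 1_omega)(k) = 1_omega(j) - 1_omega(i) for the k-th edge (i, j)
crossing = [ind_omega[j] - ind_omega[i] for (i, j) in edges]
boundary = [abs(c) for c in crossing]  # indicator of the boundary edges

print(boundary)  # → [0, 0, 1, 1, 0, 0, 0]: edges {2,4} and {3,4} cross the boundary
```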

### 3.6. Equations on graphs

Now that we have described the differential and boundary operators in matrix form, it is immediate to write the discrete analogues of several linear PDE. This is very beautiful, because the analytic properties of the corresponding PDE are recovered by elementary linear algebra.

**3.6.1.**

**Laplace** equation on the whole graph:

\[
Lu=0
\]

If the graph is connected, the matrix $L$ has rank $n-1$, thus its kernel is one-dimensional, corresponding to the constant solutions $u=c$.

**3.6.2.**

**Poisson** equation on the whole graph, with data $f:V\to\R$:

\[
Lu=f
\]

Since $L$ is singular, this system has a solution only when $f$ is orthogonal to the kernel of $L$ (for a connected graph, when the values of $f$ sum to zero); the solution is then unique up to an additive constant.

**3.6.3.**

Laplace equation on a subset $\Omega\subseteq V$, with **Dirichlet boundary conditions** $f:\Omega^c\to\R$:

\[
\Omega Lu + (I-\Omega)(u-f)=0
\]

Notice that this is written as an $n\times n$ linear system, but it has a diagonal part corresponding to the values of $u$ outside of $\Omega$. Notice also that the values of $f$ at the vertices that have no neighbors in $\Omega$ appear only in the diagonal part. The values of $f$ inside $\Omega$ do not appear at all (they are cancelled out).
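One way to solve this system is by simple relaxation: fix $u=f$ outside $\Omega$ and repeatedly replace each interior value by the average of its neighbors. This Gauss-Seidel-style iteration is an assumption of this sketch, not part of the text; it converges when each component of $\Omega$ touches boundary data. A sketch in plain Python on the example graph:

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
neighbors = [[] for _ in range(n)]
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

omega = [1, 3]                      # interior vertices where Delta u = 0 is imposed
f = [0.0, 0.0, 1.0, 0.0, 2.0, 3.0]  # Dirichlet data; only values outside omega matter

u = f[:]                            # start from the data and relax the interior
for _ in range(200):
    for a in omega:                 # u(a) <- mean of the neighbors of a
        u[a] = sum(u[b] for b in neighbors[a]) / len(neighbors[a])

# the residual of Delta u on omega is now (numerically) zero
res = max(abs(sum(u[b] - u[a] for b in neighbors[a])) for a in omega)
assert res < 1e-10
```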

**3.6.4.**

Laplace equation on a subset $\Omega\subseteq V$, with **Neumann boundary conditions** $g:\partial\Omega\to\R$:

\[
\Omega Lu + (\partial\Omega)(\nabla u - g)=0
\]

Or equivalently, by expanding the boundary and gradient operators,

\[
\left[\Omega L + \mathrm{diag}(B\,\mathrm{diag}(\Omega))B\right]u =\mathrm{diag}(B\,\mathrm{diag}(\Omega))\, g
\]

or, in another notation,

\[
\left(\mathrm{diag}(\1_\Omega) L + \mathrm{diag}(B\1_\Omega)B\right)u
=\mathrm{diag}(B\1_\Omega)\, g
\]

**3.6.5.**

**Heat equation** on the whole graph with initial condition $u_0:V\to\R$:

\[
\begin{cases}
u_t & =Lu \\
u(0) & = u_0
\end{cases}
\]

This is a system of $n$ first-order linear differential equations with constant coefficients. It has a closed-form solution using the matrix exponential: $u=e^{tL}u_0$.
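The solution can also be approximated by explicit Euler steps $u \leftarrow u + \tau Lu$; a sketch in plain Python on the example graph (the step $\tau$ and the initial condition are chosen here for illustration; $\tau$ must be small enough for stability):

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
neighbors = [[] for _ in range(n)]
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

u = [6.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # initial condition u_0: all mass at one vertex
tau = 0.1                           # time step, small enough for stability here

for _ in range(1000):
    Lu = [sum(u[b] - u[a] for b in neighbors[a]) for a in range(n)]
    u = [u[a] + tau * Lu[a] for a in range(n)]

# the total mass is conserved, and u diffuses toward the constant field
print(round(sum(u), 6))  # → 6.0
```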

**3.6.6.**

Heat equation with **source term** $f:V\to\R$ and initial condition $u_0:V\to\R$:

\[
\begin{cases}
u_t & =Lu+f \\
u(0) & = u_0
\end{cases}
\]

It likewise has a closed-form solution, $u=e^{tL}(u_0+L^{-1}f)-L^{-1}f$. Notice that $L^{-1}f$ only makes sense when $f$ is orthogonal to the kernel of $L$ (in particular, $f$ must not have a constant component).

**3.6.7.**

Other combinations are possible, and easy to derive from the simpler cases: Poisson and heat equations on subsets with various boundary conditions, and so on.

### 3.7. Riemannian graph geometry

The **isotropic** case of "anisotropic" diffusion in image processing is modelled by terms of the form $g\Delta u$, where $g$ is a positive real-valued function on $\Omega$. In the case of graphs, the function $g$ corresponds to a scalar field $g:V\to\R$, which we associate to a diagonal $n\times n$ matrix $\tilde g$ with the values of $g$. These terms then become $\tilde gLu$ in the corresponding discrete model.

Truly **anisotropic** diffusion comes from terms of the form $\mathrm{div}(D\nabla u)$, where the diffusivity $D$ is a field of positive-definite symmetric matrices defined over $\Omega$. In the case of graphs, we have a matrix $\tilde D$, which is also diagonal, but now of size $m\times m$. These terms then become $\mathrm{div}(\tilde D\nabla u)$ in the discrete model or, in matrix form, $-B^T\tilde DBu$.
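A sketch of the anisotropic term $-B^T\tilde DBu$ in plain Python (0-based indices on the example graph; the edge-wise diffusivities are hypothetical illustration values). Setting all diffusivities to $1$ recovers $Lu$:

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6

def aniso(u, d):
    # -B^T diag(d) B u: weight the gradient on each edge by its diffusivity d[k]
    out = [0.0] * n
    for k, (i, j) in enumerate(edges):
        flow = d[k] * (u[j] - u[i])  # diffusivity times (grad u)(i, j)
        out[i] += flow
        out[j] -= flow
    return out

u = [0.0, 1.0, 2.0, 5.0, 3.0, 4.0]
print(aniso(u, [1.0] * len(edges)))  # unit diffusivity gives Lu
# → [3.0, 3.0, 1.0, -10.0, 3.0, 0.0]
```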

### 3.8. Algebraic graph integral calculus

Integral calculus can be readily generalized to graphs. Integrals are replaced by sums over a finite domain, and the various identities of integral calculus (e.g., the divergence theorem) become immediate matrix identities.

Let $G=(V,E)$ be a graph with $V=\{1,\ldots,n\}$ and $E=\{1,\ldots,m\}$. Let $\Omega\subseteq V$ and let $f:V\to\R$ be a scalar field. The **integral** of $f$ over $\Omega$ is defined as

\[
\int_\Omega f=\sum_{p\in \Omega}f(p)
\]
In matrix notation we have

\(
\int_\Omega f := \mathrm{sum}(\Omega f).
\)

Notice that here $f$ is a vector of length $n$, $\Omega$ is an $n\times n$ matrix, and we are computing the sum of all the components of the vector $\Omega f$ to obtain a single number.

Notice that $f$ must be defined everywhere, but only the values inside $\Omega$ are used; thus, we could have defined $f$ only inside $\Omega$.

An **interface** inside a graph is defined as a set of edges $S\subseteq E$. Given a vector field $\mathbf{v}:E\to\R$, we define the **flow** of $\mathbf{v}$ through $S$ as

\[
\int_S \mathbf{v\cdot ds} := \sum_{e\in S}\mathbf{v}(e)
\]

or, in matrix notation, $\int_S \mathbf{v\cdot ds}=\mathrm{sum}(\tilde S\mathbf{v})$, where $\tilde S$ is the diagonal matrix containing the indicator function of $S$.

An interesting particular case happens when $S$ is the boundary of some region $\Omega$. We have seen above that the matrix $\tilde S$ is then equal to $\mathrm{diag}(B\,\mathrm{diag}(\Omega))$. This observation leads to the **graph divergence theorem**, which says that

\[
\int_{\partial\Omega} \mathbf{v\cdot ds} =\int_\Omega\mathrm{div}(\mathbf{v})
\]

or, in matrix notation,

\[
\1_\Omega\cdot(B^T\mathbf{v})
=
(B\1_\Omega)\cdot\mathbf{v}
\]

which is exactly the same thing, written differently.
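This matrix identity can be checked in a few lines; a minimal sketch in plain Python on the example graph, with an arbitrary vector field $\mathbf{v}$ and an arbitrary subset $\Omega$ chosen for illustration:

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
n, m = 6, len(edges)

v = [1.0, -2.0, 0.5, 3.0, -1.0, 2.0, 4.0]  # an arbitrary vector field on the edges
omega = [1, 1, 0, 1, 0, 0]                 # indicator vector of a vertex subset

# B^T v on vertices, and B 1_omega on edges
Btv = [0.0] * n
for k, (i, j) in enumerate(edges):
    Btv[i] -= v[k]
    Btv[j] += v[k]
Bo = [omega[j] - omega[i] for (i, j) in edges]

lhs = sum(omega[a] * Btv[a] for a in range(n))  # 1_omega . (B^T v)
rhs = sum(Bo[k] * v[k] for k in range(m))       # (B 1_omega) . v
assert lhs == rhs
```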