Recurrent Graph Neural Networks
Giovanni Pellegrini¹ ² ³
¹ SML Lab, University of Trento, Italy
² TIM
³ EIT DIGITAL
01 GNNs so far
[Figure: node features x_A, x_B, x_E combined by a neural network]
Neural networks over graphs need a permutation-invariant aggregation:
● Sum
● Average
● Max
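The aggregation options listed above can be sketched in plain Python (illustrative code, not from the slides; the function name `aggregate` is an assumption). Each mode combines a set of neighbour feature vectors into one vector, and the result does not depend on the order of the neighbours:

```python
# Sketch (not from the slides): permutation-invariant aggregation of
# neighbour features, as used in GNN message passing.

def aggregate(features, mode="sum"):
    """Aggregate a list of neighbour feature vectors into one vector."""
    if mode == "sum":
        return [sum(col) for col in zip(*features)]
    if mode == "average":
        return [sum(col) / len(col) for col in zip(*features)]
    if mode == "max":
        return [max(col) for col in zip(*features)]
    raise ValueError(mode)

# Neighbour features of a node (x_A, x_B, x_E in the slide's example).
x_A, x_B, x_E = [1.0, 2.0], [3.0, 0.0], [2.0, 4.0]

# Permutation invariance: reordering the neighbours changes nothing.
assert aggregate([x_A, x_B, x_E], "sum") == aggregate([x_E, x_A, x_B], "sum")
print(aggregate([x_A, x_B, x_E], "average"))  # [2.0, 2.0]
```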
02 Recurrent VS Convolutional
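No equations survive on these slides, but the distinction the section title draws can be sketched as a toy scalar illustration (`update` and the weights are hypothetical): a recurrent GNN reuses the same parameters at every propagation step, while a convolutional GNN stacks layers with distinct parameters.

```python
# Sketch (illustrative, not from the slides): the structural difference
# between recurrent and convolutional GNN updates on a single node state.

def update(x, w):
    """One propagation step with parameter w (a stand-in for a GNN layer)."""
    return w * x + 1.0

def recurrent(x, w, T):
    # Recurrent: the SAME parameters w are reused at every timestep.
    for _ in range(T):
        x = update(x, w)
    return x

def convolutional(x, ws):
    # Convolutional: each layer has its OWN parameters.
    for w in ws:
        x = update(x, w)
    return x

print(recurrent(0.0, 0.5, 3))               # 3 steps, shared weight -> 1.75
print(convolutional(0.0, [0.5, 0.3, 0.1]))  # 3 layers, distinct weights -> 1.13
```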
TABLE OF CONTENTS
01 GNNs so far
02 Recurrent VS Convolutional
03 The Graph Neural Network Model
04 Gated Graph NN
05 Gated Graph Sequence NN
06 PyG Tutorial
03 Graph Neural Network Model¹
● Pioneering work on Graph Neural Networks
● Diffusion mechanism (convolution)
● General framework for graph processing
¹ Scarselli et al., The graph neural network model, IEEE Transactions on Neural Networks, 2009
Each node v gets a state x_v and an output o_v:

x_v = f_w( l_v , l_co[v] , x_ne[v] , l_ne[v] )
o_v = g_w( x_v , l_v )

co[v] = edges connected to v
ne[v] = neighbours of v
l = node / edge labels
w = learnable parameters
f_w = transition function
g_w = output function
Repeated forward passes of the transition function create an encoding network.
[Image taken from the original publication]
Goal: converge to a unique solution for x_v and o_v.

If the transition function f_w is a contraction mapping, there exists a unique fixed-point solution (Banach fixed-point theorem).

If f_w is a neural network, a penalty based on the norm of the Jacobian of f_w is added to the loss function to encourage the contraction property.
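A minimal numeric illustration of the contraction-mapping argument (toy example, not from the slides): iterating a map whose Lipschitz constant is below 1 converges to the same fixed point from any initialisation, which is what the GNN model relies on for the states x_v.

```python
# Sketch: iterating a contraction mapping converges to its unique
# fixed point regardless of the starting state.

def f(x):
    # Contraction: |f(a) - f(b)| = 0.5 * |a - b| (Lipschitz constant < 1).
    return 0.5 * x + 1.0

def fixed_point(x0, tol=1e-9):
    x = x0
    while abs(f(x) - x) > tol:
        x = f(x)
    return x

# Any initialisation converges to the same fixed point x* = 2.
print(round(fixed_point(0.0), 6), round(fixed_point(100.0), 6))  # 2.0 2.0
```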
Gradients are computed with the Almeida-Pineda algorithm: backpropagation is carried out directly at the fixed point, so the intermediate states of the forward iteration do not need to be stored.
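The idea behind the Almeida-Pineda algorithm can be sketched on a scalar toy problem (a hypothetical example, not the slides' formulation): the gradient of the fixed point with respect to the parameters solves its own fixed-point equation, so the forward trajectory never needs to be stored.

```python
# Scalar sketch of the Almeida-Pineda idea.
# Forward: iterate x = f(x, w) to a fixed point x*.
# Backward: dx*/dw solves its own fixed-point equation
#   z = df/dx(x*, w) * z + df/dw(x*, w),
# so no intermediate forward states are stored.

A = 0.5  # df/dx, constant here; |A| < 1 makes f a contraction

def f(x, w):
    return A * x + w  # analytic fixed point: x* = w / (1 - A)

def forward(w, tol=1e-12):
    x = 0.0
    while abs(f(x, w) - x) > tol:
        x = f(x, w)
    return x

def backward(tol=1e-12):
    # df/dw = 1 for this f; iterate the adjoint equation z = A*z + 1.
    z = 0.0
    while abs(A * z + 1.0 - z) > tol:
        z = A * z + 1.0
    return z  # converges to 1 / (1 - A)

w = 1.5
print(forward(w))   # approx. x* = 3.0
print(backward())   # approx. dx*/dw = 2.0
```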
04 Gated Graph Neural Network²
● Adaptation of the GNN Model (GNNM)
● Uses a GRU (Gated Recurrent Unit) as the transition function
● Iterates for a fixed number of timesteps T (instead of until convergence)
● Uses BPTT (Backpropagation Through Time) to compute gradients
² Li et al., Gated graph sequence neural networks, ICLR, 2016
Node annotations x_v: node embeddings with additional information.

Propagation Model:

h_v^(1) = [ x_v^T , 0 ]^T     (node state h_v initialised from the node annotation x_v, padded with zeros)
a_v^(t) = A_v^T [ h_1^(t-1) , ... , h_N^(t-1) ]^T + b
h_v^(t) = GRU( a_v^(t) , h_v^(t-1) )

(A_v is the submatrix of the graph propagation matrix A relevant to node v.)
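A toy sketch of one propagation step (scalar states, made-up weights; `gru` and `propagate` are illustrative names, not a real library API): each node aggregates its neighbours' states into a message, then updates its own state with a GRU cell.

```python
import math

# Toy sketch (scalar states, hypothetical weights) of one GGNN
# propagation step: aggregate neighbour states into a message a_v,
# then update the node state with a GRU cell.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru(a, h, Wz=1.0, Uz=1.0, Wr=1.0, Ur=1.0, W=1.0, U=1.0):
    """Minimal scalar GRU: new state from message a and old state h."""
    z = sigmoid(Wz * a + Uz * h)              # update gate
    r = sigmoid(Wr * a + Ur * h)              # reset gate
    h_tilde = math.tanh(W * a + U * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde

def propagate(h, adj):
    """One propagation step over all nodes; adj[v] lists v's neighbours."""
    messages = {v: sum(h[u] for u in adj[v]) for v in h}
    return {v: gru(messages[v], h[v]) for v in h}

# Tiny undirected graph: A - B - C.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
h = {"A": 0.1, "B": 0.2, "C": 0.3}  # initial node states (annotations)
h = propagate(h, adj)
print(h)
```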
05 Gated Graph Sequence Neural Network

What if we want to produce sequences of output values o^(1), ..., o^(K)?

GGSNN: multiple GGNNs operate in sequence to produce the outputs:
F_X^(k): computes X^(k+1) from X^(k)
F_o^(k): computes o^(k) from X^(k)

The GGSNN thus implements both the transition and the output function!
05 Gated Graph Sequence Neural Network
k = length of the sequence
t = timesteps
37
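The GGSNN loop can be sketched with placeholder transition and output models (stand-ins, not the paper's trained networks): at each output step, F_o reads the current annotations X(k) to produce o(k), then F_X produces the next annotations X(k+1).

```python
# Toy sketch of the GGSNN loop (placeholder functions, not the paper's
# trained networks).

def F_o(X):
    # Output model (stand-in): read out the sum of the annotations.
    return sum(X)

def F_X(X):
    # Transition model (stand-in): produce the next annotations.
    return [0.5 * x for x in X]

def ggsnn(X, K):
    outputs = []
    for _ in range(K):          # K = length of the output sequence
        outputs.append(F_o(X))  # o(k)   from X(k)
        X = F_X(X)              # X(k+1) from X(k)
    return outputs

print(ggsnn([1.0, 2.0, 3.0], K=3))  # [6.0, 3.0, 1.5]
```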
06 PyG tutorial
Let’s go to the Jupyter notebook…