An Introduction to Graph Convolutions from Scratch: Understanding the Function of Graph Neural Networks (GNN)

Introduction to graph neural networks and graph convolutions:

Graphs are a versatile and effective way of representing data with inherent structure. This tutorial will provide an overview of graph neural networks and graph convolutions, breaking down these concepts for beginners in the field.

Transition from images to graphs:

Images are highly structured data: their pixels form a meaningful arrangement, changing that arrangement fundamentally alters the image's meaning, and images exhibit a strong notion of locality. An image can in fact be viewed as a special graph, a regular grid in which each pixel is connected to its neighboring pixels.

Structure and signals in graphs:

In an image, the structure is the grid-like arrangement of pixels and the signal is their varying intensities. Graph neural networks decouple these two ingredients, which makes them powerful and versatile enough to model data with far less regular structure than an image.

Decomposing features (signal) and structure:

Just as Transformers decompose natural language into a signal (token embeddings) and a structure (the relations between tokens), graphs require these two representations in order to be modeled effectively. Formally, the entities of interest (words, pixels, and so on) become the N nodes of the graph, their connectivity and structure are defined by an N × N adjacency matrix A, and the signal attached to the nodes is collected in a feature matrix X, with one feature vector per node.
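
To make this concrete, here is a minimal sketch in PyTorch (the 4-node graph and its feature values are made up purely for illustration):

```python
import torch

# A hypothetical 4-node graph. A is the binary, symmetric adjacency
# matrix encoding the structure: A[i, j] = 1 if nodes i and j are connected.
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])

# X holds the signal: one feature vector per node (here, 2 features each).
X = torch.tensor([[0.5, 1.0],
                  [2.0, 0.1],
                  [0.3, 0.3],
                  [1.5, 0.7]])

print(A.shape, X.shape)  # torch.Size([4, 4]) torch.Size([4, 2])
```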

Real-world signals that we can model with graphs:

Once these two representations are defined, we can model almost anything with graphs. For example, brain graphs from functional medical imaging, social networks, point clouds, and even molecules and proteins can all be modeled with the right node features and connectivity structures.

The basic maths for processing graph-structured data:

The degree of a node is a key and practical feature: it is simply the number of nodes it is connected to. Collecting all node degrees on the diagonal of a matrix yields the degree matrix D, which is fundamental in graph theory and is also used to compute the graph Laplacian.
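
Reusing the toy graph from above, the degree matrix takes a couple of lines to compute (a sketch, not code from the original article):

```python
import torch

A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])

# The degree of node i is the number of its neighbors: the sum of row i of A.
degrees = A.sum(dim=1)      # tensor([2., 2., 3., 1.])

# The degree matrix D places these degrees on the diagonal.
D = torch.diag(degrees)
```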

The graph Laplacian:

The graph Laplacian combines node degrees and connectivity: in its basic form it is defined as L = D - A, so the diagonal holds the node degrees and the off-diagonal entries are -1 wherever two nodes are connected. In practice, there are several variations of the graph Laplacian that can be used in graph neural networks, most notably the symmetrically normalized one, but the basic idea behind using it is the same.
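
A minimal sketch of both variants, assuming the toy graph has no isolated nodes (a zero degree would make the normalization blow up):

```python
import torch

A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
D = torch.diag(A.sum(dim=1))

# Unnormalized graph Laplacian: L = D - A.
L = D - A

# Symmetrically normalized variant: L_norm = I - D^(-1/2) A D^(-1/2).
D_inv_sqrt = torch.diag(A.sum(dim=1).pow(-0.5))
L_norm = torch.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt
```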

The background theory of spectral graph convolutional networks:

Spectral graph convolutions rely on a polynomial expansion of the graph Laplacian, much like a Taylor series approximates a function with a sum of its derivatives at a single point. Truncating the expansion at order K approximates the filter with powers of the Laplacian up to L^K, and because the k-th power of the Laplacian mixes information from k-hop neighborhoods, K controls how far information travels. A recurrence over the expansion terms means the K-th power never has to be formed by explicit square-matrix multiplications.
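
One common realization of this idea is the Chebyshev expansion used in ChebNet (Defferrard et al., 2016). The sketch below assumes a Laplacian already rescaled so that its eigenvalues lie in [-1, 1]; the function name is ours, not from the original article:

```python
import torch

def chebyshev_terms(L_scaled: torch.Tensor, X: torch.Tensor, K: int):
    """Compute [T_0(L~) X, ..., T_K(L~) X] with the Chebyshev recurrence
    T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x), where T_0(x) = 1 and T_1(x) = x.

    Each additional order costs one multiplication with L~, so the K-th
    power of the Laplacian is never formed explicitly.
    """
    terms = [X]                       # T_0(L~) X = X
    if K >= 1:
        terms.append(L_scaled @ X)    # T_1(L~) X = L~ X
    for _ in range(2, K + 1):
        terms.append(2 * (L_scaled @ terms[-1]) - terms[-2])
    return terms
```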

How graph convolution layers are formed:

Basic graph convolutions are built on the idea that a convolution in the vertex domain is equivalent to a multiplication in the graph spectral domain. Instead of the raw binary adjacency matrix, modern implementations typically use a normalized operator, such as the normalized Laplacian or the renormalized adjacency matrix with self-loops, which keeps the feature scale stable as layers are stacked.
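
A minimal sketch of such a layer, following the renormalization trick popularized by Kipf & Welling (2017); the names GCNLayer and normalize_adjacency are ours, not from the original article:

```python
import torch
import torch.nn as nn

def normalize_adjacency(A: torch.Tensor) -> torch.Tensor:
    """Renormalization trick: add self-loops, then symmetrically
    normalize by the resulting degree matrix."""
    A_hat = A + torch.eye(A.shape[0])
    D_inv_sqrt = torch.diag(A_hat.sum(dim=1).pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

class GCNLayer(nn.Module):
    """A minimal 1-hop GCN layer: Y = ReLU(A_norm @ X @ W)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=False)

    def forward(self, A_norm: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        # A_norm @ (...) aggregates each node's 1-hop neighborhood
        # (including the node itself, thanks to the self-loops).
        return torch.relu(A_norm @ self.linear(X))
```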

Implementing a 1-hop GCN layer in PyTorch:

We applied a simple 1-hop GCN layer to a small graph dataset to demonstrate how effective graph neural networks can be on such data. Our implementation achieved good results; with some hyperparameter tuning and other improvements, the performance could likely be pushed further.
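
Since the article's exact code is not reproduced here, a toy forward pass with the layer sketched above (random features, purely illustrative) might look like this:

```python
import torch

# Reuses GCNLayer and normalize_adjacency from the sketch above.
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
X = torch.randn(4, 2)                    # 4 nodes, 2 input features each

layer = GCNLayer(in_features=2, out_features=8)
out = layer(normalize_adjacency(A), X)   # shape: (4, 8)
```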

Practical issues when dealing with graphs:

Graphs in a dataset typically have varying numbers of nodes, which makes batching graph data difficult. One solution uses block-diagonal matrices: stacking the adjacency matrices of the individual graphs along the diagonal produces one bigger graph whose non-connected components are the original graphs, so an entire batch can be processed in a single forward pass.
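
A minimal sketch of this trick using torch.block_diag (graphs and features are random, just to show the shapes):

```python
import torch

# Two small graphs with 3 and 2 nodes respectively.
A1 = torch.tensor([[0., 1., 0.],
                   [1., 0., 1.],
                   [0., 1., 0.]])
A2 = torch.tensor([[0., 1.],
                   [1., 0.]])

# Stacking the adjacency matrices along the diagonal yields one 5-node
# graph whose non-connected components are the original graphs.
A_batch = torch.block_diag(A1, A2)        # shape: (5, 5)

# Node features are simply concatenated along the node dimension.
X1, X2 = torch.randn(3, 4), torch.randn(2, 4)
X_batch = torch.cat([X1, X2], dim=0)      # shape: (5, 4)
```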

Conclusion:

This tutorial offers a thorough introduction to graphs for readers new to the field, with a range of perspectives and illustrative examples. To explore further and deepen your understanding of graph neural networks, consider working with PyTorch Geometric.

Deep Learning in Production Book 📖:

The tutorial concludes with an invitation for readers to learn more about building, training, deploying, and maintaining deep learning models using the “Deep Learning in Production” book.
