Aishwarya Pothula

Large Scale Learnable Graph Convolutional Networks

Convolutional neural networks (CNNs) have achieved great success on grid-like data such as images and text. However, they face challenges in learning from generic data such as graphs. In many real-world applications, such as social, citation, and biological networks, the data are naturally represented as graphs. Until now, though, grid-like convolution operations have not been possible on graph data, mainly because of two challenges.


In a CNN, trainable local filters enable the automatic extraction of high-level features, but computing with these filters requires a fixed number of ordered units in the receptive field. In generic graphs, the number of neighbouring units is neither fixed nor ordered. As a solution, the paper enables grid-like convolution operations through a Learnable Graph Convolutional Layer (LGCL). For each node, the LGCL selects the k largest feature values from its neighbourhood and ranks them, transforming the graph data into a grid-like 1-D structure on which regular convolution operations can be performed.
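As a rough illustration of this selection step, the sketch below builds the (k+1) x C grid for a single node. It is a minimal sketch, not the paper's exact implementation: the function name `k_largest_neighbor_grid` and the zero-padding for nodes with fewer than k neighbours are assumptions made for illustration.

```python
import numpy as np

def k_largest_neighbor_grid(x, adj, node, k):
    """Build a (k+1) x C grid for one node, in the spirit of the LGCL idea.

    x    : (N, C) node feature matrix
    adj  : (N, N) binary adjacency matrix
    node : index of the target node
    k    : number of largest values to keep per feature
    """
    neighbors = np.flatnonzero(adj[node])   # indices of neighboring nodes
    feats = x[neighbors]                    # (num_neighbors, C)
    # Pad with zeros when the node has fewer than k neighbors (assumed policy).
    if feats.shape[0] < k:
        pad = np.zeros((k - feats.shape[0], x.shape[1]))
        feats = np.vstack([feats, pad])
    # For each feature column independently, sort descending and keep the top k.
    topk = -np.sort(-feats, axis=0)[:k]     # (k, C)
    # Prepend the node's own features, giving a (k+1) x C grid
    # over which an ordinary 1-D convolution can slide.
    return np.vstack([x[node][None, :], topk])
```

Because each column is ranked independently, the resulting grid has the fixed size and ordering that a 1-D convolutional filter expects, regardless of how many neighbours the node actually has.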





During training, the inputs to the network are the feature vectors of all nodes together with the adjacency matrix of the whole graph, both of which become very large for large graphs. As a result, prior models work properly only on small-scale graphs. To address this issue, the paper proposes a sub-graph training method that reduces the excessive memory and computational resource requirements prior graph convolution methods suffer from, as sketched below.
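The idea can be sketched as follows: sample a few seed nodes at random, expand them breadth-first until a sub-graph of the desired size is reached, and train only on that sub-graph. The helper below is a minimal sketch under those assumptions; the function name, the `n_init` seed count, and the exact expansion policy are illustrative rather than the paper's precise algorithm.

```python
import numpy as np

def sample_subgraph(adj, size, n_init=3, rng=None):
    """Grow a training sub-graph by breadth-first expansion from random seeds.

    adj    : (N, N) binary adjacency matrix of the full graph
    size   : target number of nodes in the sub-graph
    n_init : number of randomly chosen seed nodes (illustrative default)
    """
    rng = rng or np.random.default_rng()
    n = adj.shape[0]
    chosen = set(rng.choice(n, size=n_init, replace=False).tolist())
    frontier = set(chosen)
    while len(chosen) < size and frontier:
        # Collect neighbors of the current frontier not yet in the sub-graph.
        candidates = set()
        for node in frontier:
            candidates.update(np.flatnonzero(adj[node]).tolist())
        candidates -= chosen
        if not candidates:
            break  # this connected region is exhausted
        # Add at most the remaining budget of new nodes.
        take = min(len(candidates), size - len(chosen))
        frontier = set(rng.choice(sorted(candidates), size=take,
                                  replace=False).tolist())
        chosen |= frontier
    nodes = sorted(chosen)
    # Return node indices and the induced adjacency sub-matrix.
    return nodes, adj[np.ix_(nodes, nodes)]
```

Each training step then only needs the features and adjacency of the sampled sub-graph, so memory no longer scales with the full graph.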


The proposed model is applied to node classification on the Cora, Citeseer, and Pubmed citation network datasets and on a protein-protein interaction (PPI) dataset. Experiments are performed in both transductive and inductive settings. Under the transductive setting, the unlabeled testing data are available during training, so the model sees the graph structure that contains the testing nodes. Under the inductive setting, in contrast, testing nodes are not available during training; there are separate training, validation, and testing graphs. The PPI dataset is used for inductive learning, whereas the remaining datasets are used for transductive learning.
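The difference between the two settings can be made concrete with masks. The snippet below is a minimal sketch; the graph size and split sizes are illustrative assumptions, not the datasets' exact statistics.

```python
import numpy as np

num_nodes = 2708                        # illustrative graph size (assumed)
labels = np.random.randint(0, 7, num_nodes)

# Transductive setting: a single graph; masks select which labels the
# loss may use, but every node's features and edges stay visible.
train_mask = np.zeros(num_nodes, dtype=bool)
train_mask[:140] = True                 # labeled training nodes (assumed split)
test_mask = ~train_mask                 # unlabeled at training time
train_labels = labels[train_mask]       # only these supervise training

# Inductive setting: the test graph is held out entirely, so training,
# validation, and testing each use disjoint graphs (as with PPI).
```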


The experimental results indicate that the proposed method consistently outperforms prior methods in both transductive and inductive settings. The sub-graph training strategy additionally brings a substantial improvement in training speed at only a negligible cost in performance. Future work could apply the approach to data of other modalities.

