Conference Paper

Can graph neural networks go „online“? An analysis of pretraining and inference

Citation

Galke, L., Vagliano, I., & Scherp, A. (2019). Can graph neural networks go „online“? An analysis of pretraining and inference. In Proceedings of the Representation Learning on Graphs and Manifolds: ICLR2019 Workshop.


Cite as: https://hdl.handle.net/21.11116/0000-0009-F97C-4
Abstract
Large-scale graph data in real-world applications is often not static but dynamic, i.e., new nodes and edges appear over time. Current graph convolution approaches are promising, especially when all of the graph's nodes and edges are available during training. When unseen nodes and edges are inserted after training, it has not yet been evaluated whether up-training or re-training from scratch is preferable. We construct an experimental setup in which we insert previously unseen nodes and edges after training and conduct a limited number of inference epochs. In this setup, we compare adapting pretrained graph neural networks against retraining from scratch. Our results show that pretrained models yield high accuracy scores on the unseen nodes and that pretraining is preferable over retraining from scratch. Our experiments represent a first step towards evaluating and developing truly online variants of graph neural networks.
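
To make the setup described in the abstract concrete, below is a minimal sketch (not the authors' code) of the pretraining-versus-retraining comparison: a small two-layer GCN in plain PyTorch is pretrained on the "old" part of a graph, previously unseen nodes and edges are then inserted, and the pretrained model is up-trained for a few epochs and compared against a model retrained from scratch with the same small budget. The synthetic data, model sizes, and epoch counts are illustrative assumptions, not values from the paper.

import torch
import torch.nn.functional as F

def normalized_adjacency(edge_index, num_nodes):
    # Symmetrically normalized adjacency of A + I (dense, for illustration only).
    adj = torch.zeros(num_nodes, num_nodes)
    adj[edge_index[0], edge_index[1]] = 1.0
    adj[edge_index[1], edge_index[0]] = 1.0      # treat edges as undirected
    adj += torch.eye(num_nodes)                  # add self-loops
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class GCN(torch.nn.Module):
    # Two-layer graph convolutional network for node classification.
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hidden_dim)
        self.lin2 = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)

def train(model, x, adj_norm, y, mask, epochs, lr=0.01):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x, adj_norm)[mask], y[mask])
        loss.backward()
        opt.step()
    return model

# Synthetic data standing in for the graph before and after the update.
torch.manual_seed(0)
n_old, n_new, in_dim, n_cls = 80, 20, 16, 3
n_total = n_old + n_new
x = torch.randn(n_total, in_dim)
y = torch.randint(0, n_cls, (n_total,))
edge_index = torch.randint(0, n_total, (2, 400))     # random edges, illustrative only

old_mask = torch.zeros(n_total, dtype=torch.bool)
old_mask[:n_old] = True
new_mask = ~old_mask

# Edges among old nodes only (the graph as it looked before the update).
old_edges = edge_index[:, old_mask[edge_index[0]] & old_mask[edge_index[1]]]
adj_old = normalized_adjacency(old_edges, n_total)
adj_full = normalized_adjacency(edge_index, n_total)

# 1) Pretrain on the old graph.
pretrained = train(GCN(in_dim, 32, n_cls), x, adj_old, y, old_mask, epochs=200)

# 2) Insert the unseen nodes/edges and up-train briefly ("inference epochs") ...
uptrained = train(pretrained, x, adj_full, y, old_mask, epochs=5)

# 3) ... versus retraining from scratch with the same small budget.
scratch = train(GCN(in_dim, 32, n_cls), x, adj_full, y, old_mask, epochs=5)

for name, model in [("up-trained", uptrained), ("from scratch", scratch)]:
    pred = model(x, adj_full).argmax(dim=1)
    acc = (pred[new_mask] == y[new_mask]).float().mean().item()
    print(f"{name}: accuracy on unseen nodes = {acc:.2f}")

On the random synthetic labels used here the absolute accuracy numbers are meaningless; the sketch only illustrates the protocol: pretrain, insert unseen nodes and edges, up-train for a few epochs, and compare against retraining from scratch under the same budget.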