
Released

Conference Paper

Evaluation Metrics for Graph Generative Models: Problems, Pitfalls, and Practical Solutions

Citation

O'Bray*, L., Horn*, M., Rieck*, B., & Borgwardt*, K. (2022). Evaluation Metrics for Graph Generative Models: Problems, Pitfalls, and Practical Solutions. In Tenth International Conference on Learning Representations (ICLR 2022). doi:10.48550/arXiv.2106.01098. (* = equal contribution)


Cite as: https://hdl.handle.net/21.11116/0000-000C-EC5A-6
Abstract
Graph generative models are a highly active branch of machine learning. Given the steady development of new models of ever-increasing complexity, it is necessary to provide a principled way to evaluate and compare them. In this paper, we enumerate the desirable criteria for such a comparison metric and provide an overview of the status quo of graph generative model comparison in use today, which predominantly relies on the maximum mean discrepancy (MMD). We perform a systematic evaluation of MMD in the context of graph generative model comparison, highlighting some of the challenges and pitfalls researchers may inadvertently encounter. After conducting a thorough analysis of the behaviour of MMD on synthetically generated perturbed graphs as well as on recently proposed graph generative models, we provide a suitable procedure to mitigate these challenges and pitfalls. We aggregate our findings into a list of practical recommendations for researchers to use when evaluating graph generative models.
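
For orientation, the quantity under study is the MMD between a set of reference graphs and a set of generated graphs, estimated from kernel values on per-graph descriptors. The sketch below is a minimal, hypothetical Python illustration, not the paper's evaluation protocol: the degree-histogram descriptor, the Gaussian kernel, its bandwidth, and the toy random-graph generator are all assumptions made here for concreteness, and sensitivity to exactly such choices is among the pitfalls the paper analyses.

    import numpy as np

    def degree_histogram(adjacency, n_bins=20):
        # Illustrative graph descriptor (an assumption): normalised degree histogram.
        degrees = adjacency.sum(axis=1)
        hist, _ = np.histogram(degrees, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    def gaussian_kernel(x, y, sigma=1.0):
        # RBF kernel on descriptor vectors; the bandwidth sigma is an assumed choice.
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    def mmd_squared(set_a, set_b, kernel=gaussian_kernel):
        # Biased estimator of squared MMD between two sets of descriptor vectors:
        # mean within-set kernel values minus twice the mean cross-set kernel value.
        k_aa = np.mean([kernel(x, y) for x in set_a for y in set_a])
        k_bb = np.mean([kernel(x, y) for x in set_b for y in set_b])
        k_ab = np.mean([kernel(x, y) for x in set_a for y in set_b])
        return k_aa + k_bb - 2.0 * k_ab

    rng = np.random.default_rng(0)

    def random_graph(edge_threshold, n_nodes=10):
        # Hypothetical toy generator: symmetric random adjacency matrix, no self-loops.
        a = rng.random((n_nodes, n_nodes))
        adj = ((a + a.T) / 2.0 < edge_threshold).astype(int)
        np.fill_diagonal(adj, 0)
        return adj

    # Compare a "reference" distribution against a sparser "generated" one.
    reference = [degree_histogram(random_graph(0.5)) for _ in range(20)]
    generated = [degree_histogram(random_graph(0.3)) for _ in range(20)]
    print(f"Estimated squared MMD: {mmd_squared(reference, generated):.4f}")

Note that the value returned by such an estimator can change qualitatively with the descriptor, kernel, and bandwidth, which is why the abstract stresses the need for a principled procedure when using MMD to rank graph generative models.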