  Structure-Aware Transformer for Graph Representation Learning

Chen, D., O’Bray, L., & Borgwardt, K. (2022). Structure-Aware Transformer for Graph Representation Learning. Proceedings of the 39th International Conference on Machine Learning (ICML 2022), 162, 3469-3489.

Files

Locators

Description:
-
OA-Status:
Not specified
Description:
ICML Presentation
OA-Status:
Not specified

Creators

Creators:
Chen, Dexiong, Author
O’Bray, Leslie, Author
Borgwardt, Karsten¹, Author
Affiliations:
¹ ETH Zürich

Content

Free keywords: -
 Abstract: The Transformer architecture has gained growing attention in graph representation learning recently, as it naturally overcomes several limitations of graph neural networks (GNNs) by avoiding their strict structural inductive biases and instead only encoding the graph structure via positional encoding. Here, we show that the node representations generated by the Transformer with positional encoding do not necessarily capture structural similarity between them. To address this issue, we propose the Structure-Aware Transformer, a class of simple and flexible graph Transformers built upon a new self-attention mechanism. This new self-attention incorporates structural information into the original self-attention by extracting a subgraph representation rooted at each node before computing the attention. We propose several methods for automatically generating the subgraph representation and show theoretically that the resulting representations are at least as expressive as the subgraph representations. Empirically, our method achieves state-of-the-art performance on five graph prediction benchmarks. Our structure-aware framework can leverage any existing GNN to extract the subgraph representation, and we show that it systematically improves performance relative to the base GNN model, successfully combining the advantages of GNNs and Transformers. Our code is available at https://github.com/BorgwardtLab/SAT.
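The attention mechanism the abstract describes can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation (the paper uses learned GNN subgraph extractors and projection matrices; see the linked repository): here the subgraph extractor is replaced by a simple k-hop feature average, and queries/keys are computed from the rooted-subgraph representations rather than the raw node features, so attention weights reflect structural similarity.

```python
import numpy as np

def khop_subgraph_features(A, X, k=2):
    # Stand-in for the paper's GNN subgraph extractor: represent each node
    # by the mean of node features over its k-hop rooted subgraph.
    A_hat = A + np.eye(len(A))                    # add self-loops
    reach = np.linalg.matrix_power(A_hat, k) > 0  # k-hop reachability mask
    return (reach @ X) / reach.sum(axis=1, keepdims=True)

def structure_aware_attention(A, X, k=2):
    # Queries/keys come from subgraph representations S, values from X,
    # so attention weights capture structural similarity between nodes.
    S = khop_subgraph_features(A, X, k)
    scores = (S @ S.T) / np.sqrt(X.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    w /= w.sum(axis=1, keepdims=True)             # row-stochastic weights
    return w @ X                                  # attended node features
```

Because the weights are row-stochastic, the output for each node is a convex combination of node features, weighted by how similar the rooted subgraphs are; plugging in a trained GNN in place of `khop_subgraph_features` recovers the spirit of the paper's design.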

Details

Language(s):
 Dates: 2022-06-28 / 2022
 Publication Status: Issued
 Pages: 3469-3489
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: -
 Degree: -

Source 1

Title: Proceedings of the 39th International Conference on Machine Learning (ICML 2022)
Source Genre: Conference proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: 162
Sequence Number: -
Start / End Page: 3469 - 3489
Identifier: -