awesome-graph-transformer

Papers about graph transformers.

This repository contains a list of papers on Graph Transformers, categorized by the techniques they use.

We will try to keep this list up to date. If you find any errors or missing papers, please don't hesitate to open an issue or a pull request.

Structural Encoding / Positional Encoding for Graph Transformers

Spectral Positional Encoding

  1. Rethinking Graph Transformers with Spectral Attention. NeurIPS 2021. [paper]
  2. A Generalization of Transformer Networks to Graphs. AAAI workshop 2021. [paper]
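Both papers above derive node positional encodings from eigenvectors of the graph Laplacian. A minimal sketch of that idea (illustrative only, not taken from either paper's code):

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return k non-trivial eigenvectors of the normalized Laplacian as node PEs."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)  # columns sorted by ascending eigenvalue
    # Drop the trivial constant eigenvector; eigenvector signs are arbitrary,
    # so implementations typically randomize them during training.
    return eigvecs[:, 1 : k + 1]

# Usage on a 4-cycle: each node gets a 2-dim positional encoding that is
# added to (or concatenated with) its input features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(laplacian_pe(A, k=2))  # shape (4, 2)
```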

Other Structure-aware Encoding

  1. Do Transformers Really Perform Bad for Graph Representation? NeurIPS 2021. [paper]
  2. Graph Neural Networks with Learnable Structural and Positional Representations. ICLR 2022. [paper]
  3. GRPE: Relative Positional Encoding for Graph Transformer. ICLR 2022 MLDD Workshop. [paper]
  4. Global Self-Attention as a Replacement for Graph Convolution. KDD 2022. [paper]
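A recurring idea in this category (e.g. in the first paper above) is to bias the attention logits directly with structural information such as shortest-path distances. A hedged, single-layer sketch, assuming a precomputed distance matrix clamped to a maximum bucket; all names are illustrative:

```python
import torch
import torch.nn as nn

class SPDBiasedAttention(nn.Module):
    """Self-attention with a learned scalar bias per shortest-path distance."""

    def __init__(self, dim: int, num_heads: int, max_dist: int = 16):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        # One learnable bias per (distance bucket, head); the last bucket can
        # also cover disconnected pairs in this simplified version.
        self.dist_bias = nn.Embedding(max_dist + 1, num_heads)

    def forward(self, x: torch.Tensor, spd: torch.LongTensor) -> torch.Tensor:
        # x: (n, dim) node features; spd: (n, n) clamped shortest-path distances
        n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        k = k.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        v = v.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        logits = q @ k.transpose(-2, -1) / self.head_dim ** 0.5   # (h, n, n)
        logits = logits + self.dist_bias(spd).permute(2, 0, 1)    # add spatial bias
        out = logits.softmax(dim=-1) @ v                          # (h, n, head_dim)
        return out.transpose(0, 1).reshape(n, -1)
```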

Graph Neural Network as Structural Encoder

  1. GraphiT: Encoding Graph Structure in Transformers. arXiv 2021. [paper]
  2. Structure-Aware Transformer for Graph Representation Learning. ICML 2022. [paper]
  3. Recipe for a General, Powerful, Scalable Graph Transformer. arXiv 2022. [paper]
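The common recipe in this category is to run a message-passing GNN first, so that each token already encodes local structure, and then apply standard global attention. A minimal sketch of that two-stage layout (illustrative, not any listed paper's exact architecture):

```python
import torch
import torch.nn as nn

class GNNThenTransformer(nn.Module):
    """One mean-aggregation GNN layer followed by a global transformer encoder."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                               batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n, dim) node features; adj: (n, n) dense adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        x = x + torch.relu(self.msg(adj @ x / deg))       # structure-aware update
        return self.transformer(x.unsqueeze(0)).squeeze(0)  # global attention
```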

Scalability of Graph Transformers on Large Graphs

Transformers with Sampling

  1. A Self-Attention Network based Node Embedding Model. ECML-PKDD 2020. [paper]
  2. Heterogeneous Graph Transformer. WWW 2020. [paper]
  3. Gophormer: Ego-Graph Transformer for Node Classification. arXiv 2021. [paper]
  4. Coarformer: Transformer for Large Graph via Graph Coarsening. OpenReview 2021. [paper]
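These methods keep attention tractable by running the transformer on small sampled contexts rather than the whole graph. A toy sketch of per-node ego-graph sampling in that spirit (hypothetical helper; deduplication omitted for brevity):

```python
import random

def sample_ego_graph(adj_list: dict[int, list[int]], node: int,
                     fanout: int, hops: int) -> list[int]:
    """Return the target node plus up to `fanout` sampled neighbors per hop."""
    tokens, frontier = [node], [node]
    for _ in range(hops):
        next_frontier = []
        for u in frontier:
            nbrs = adj_list.get(u, [])
            next_frontier.extend(random.sample(nbrs, min(fanout, len(nbrs))))
        tokens.extend(next_frontier)
        frontier = next_frontier
    return tokens  # token sequence for one transformer forward pass

# Usage: each call yields a different sampled context for node 0.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
print(sample_ego_graph(adj, node=0, fanout=2, hops=2))
```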

Transformers with Adapted Attention

  1. From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers. arXiv 2022. [paper]
  2. Recipe for a General, Powerful, Scalable Graph Transformer. arXiv 2022. [paper]
  3. Deformable Graph Transformer. arXiv 2022. [paper]
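Papers in this category change the attention pattern itself. One simple instance of the idea is masking attention to each node's k-hop neighborhood, sketched below for a single head (illustrative only; the listed papers use more sophisticated schemes):

```python
import torch

def khop_masked_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                          adj: torch.Tensor, k_hops: int = 2) -> torch.Tensor:
    """Single-head attention restricted to k-hop neighbors; q, k, v: (n, d)."""
    # Reachability within k hops from powers of (A + I); self-loops keep every
    # row of the mask non-empty, so the softmax is always well defined.
    reach = torch.linalg.matrix_power(adj + torch.eye(adj.size(0)), k_hops) > 0
    logits = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    logits = logits.masked_fill(~reach, float("-inf"))  # attend only within k hops
    return logits.softmax(dim=-1) @ v
```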

Applications of Graph Transformers (Molecules, Texts)

  1. Modeling Graph Structure in Transformer for Better AMR-to-Text Generation. EMNLP 2019. [paper]
  2. Heterogeneous Graph Transformer for Graph-to-Sequence Learning. ACL 2020. [paper]
  3. Molecule Attention Transformer. arXiv 2020. [paper]
  4. Interpretable Rumor Detection in Microblogs by Attending to User Interactions. AAAI 2020. [paper]
  5. Graph Transformer for Graph-to-Sequence Learning. AAAI 2020. [paper]
  6. Self-Supervised Graph Transformer on Large-Scale Molecular Data. NeurIPS 2020. [paper]
  7. SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks. NeurIPS 2020. [paper]
  8. Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs. TextGraphs 2021. [paper]
  9. GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph. NeurIPS 2021. [paper]
  10. Systematic Generalization with Edge Transformers. NeurIPS 2021. [paper]
  11. Mesh Graphormer. ICCV 2021. [paper]
  12. Relative Molecule Self-Attention Transformer. arXiv 2021. [paper]
  13. Neighbour Interaction based Click-Through Rate Prediction via Graph-masked Transformer. arXiv 2022. [paper]
  14. Equivariant Transformers for Neural Network based Molecular Potentials. ICLR 2022. [paper]

Pre-training with Graph Transformers

  1. Self-Supervised Graph Transformer on Large-Scale Molecular Data. NeurIPS 2020. [paper]
  2. Graph-Bert: Only Attention is Needed for Learning Graph Representations. arXiv 2020. [paper]
  3. Graph Masked Autoencoders with Transformers. arXiv 2022. [paper]

Survey

  1. Transformer for Graphs: An Overview from Architecture Perspective. arXiv 2022. [paper]

Uncategorized

  1. Transformers Generalize DeepSets and Can be Extended to Graphs & Hypergraphs. NeurIPS 2021. [paper]
  2. Representing Long-Range Context for Graph Neural Networks with Global Attention. NeurIPS 2021. [paper]
  3. Universal Graph Transformer Self-Attention Networks. WWW 2022. [paper]
  4. Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification. IJCAI 2021. [paper]
