graphdeeplearning / graphtransformer

Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.

Home Page: https://arxiv.org/abs/2012.09699


Scaling of Laplacian pre-computation

JellePiepenbrock opened this issue

First, I would like to say that I think there are some very good ideas in the paper. Nice work! I have some questions, though:

Could you tell me what the largest graph is that you've used this approach on? Do you have any recommendations for computing the Laplacian eigenvector encodings on large graphs? As it's implemented now, with np.linalg.eig and the .toarray() call, the sparsity seems to be lost, which could cause problems.
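For reference, here is a minimal sketch (not the repo's code) of one possible workaround I had in mind, assuming a symmetric scipy.sparse adjacency matrix: computing only the pos_enc_dim smallest eigenpairs with scipy.sparse.linalg.eigsh keeps the Laplacian sparse instead of densifying it for np.linalg.eig.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def sparse_lap_pos_enc(adj, pos_enc_dim):
    """Hypothetical sparse Laplacian positional encoding.

    adj: symmetric scipy.sparse adjacency matrix of shape (N, N).
    pos_enc_dim: number of non-trivial eigenvectors to keep.
    """
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).flatten().clip(min=1)
    d_inv_sqrt = sp.diags(deg ** -0.5)
    # Symmetrically normalized Laplacian, kept sparse throughout.
    lap = sp.eye(n) - d_inv_sqrt @ adj @ d_inv_sqrt
    # Partial eigendecomposition: only the (pos_enc_dim + 1) smallest
    # eigenpairs are computed, instead of the full dense spectrum.
    eigval, eigvec = eigsh(lap, k=pos_enc_dim + 1, which='SM')
    order = eigval.argsort()
    # Drop the trivial (near-zero) eigenvector, keep the next pos_enc_dim.
    return eigvec[:, order[1:pos_enc_dim + 1]]
```

This is just a sketch; for very large graphs, which='SM' can converge slowly, and a shift-invert strategy (or an approximate method) might be needed instead.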

Hi @JellePiepenbrock,
The largest graph size (i.e., number of nodes in a single graph) considered in our experiments is 190 (for the SBM datasets).
Thank you for raising the issue of scaling the pre-computation -- something we shall focus on in future work.