Graph Node-Feature Convolution for Representation Learning in TensorFlow

Implementation of Graph Node-Feature Convolution for Representation Learning in TensorFlow

Graph Node-Feature Convolution for Representation Learning

Graph convolutional network (GCN) is an emerging neural network approach. It learns a new representation of a node by aggregating the feature vectors of all its neighbors, without considering whether the neighbors or features are useful or not. Recent methods have improved on this by sampling a fixed-size set of neighbors, or by assigning different weights to different neighbors in the aggregation process, but features within a feature vector are still treated equally. In this paper, a new convolution operation is introduced on regular-size feature maps, constructed via sampling from the features of a fixed node bandwidth, to get the first-level node representation, which is then passed to a standard GCN to learn the second-level node representation. [1]

Li Zhang, Heda Song, Haiping Lu, 2018, Graph Node-Feature Convolution for Representation Learning

For the official implementation, you can visit the report.
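
To make the two-level idea described above concrete, here is a minimal NumPy sketch (not the repository's TensorFlow code, and not the authors' exact operators): for each node a fixed number of neighbors is sampled, their feature vectors are stacked into a regular-size feature map, a learned kernel over that map yields the first-level representation, and a standard GCN propagation then yields the second-level representation. The shapes, the kernel, and the simple symmetric normalization are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N, D, K, H = 6, 8, 3, 4            # nodes, features, sampled neighbors, hidden units
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, A.T)             # make the toy graph undirected
np.fill_diagonal(A, 1.0)           # add self-loops so every node has a neighbor
X = rng.standard_normal((N, D))    # node feature matrix

def sample_neighbors(adj, k, rng):
    """Sample a fixed-size set of k neighbors for every node (with replacement)."""
    idx = np.empty((adj.shape[0], k), dtype=int)
    for i in range(adj.shape[0]):
        nbrs = np.nonzero(adj[i])[0]
        idx[i] = rng.choice(nbrs, size=k, replace=True)
    return idx

# First level: stack sampled neighbor features into a regular-size (K x D)
# "feature map" per node and apply a learned kernel over that map.
nbr_idx = sample_neighbors(A, K, rng)            # (N, K)
feature_maps = X[nbr_idx]                        # (N, K, D)
W_nfc = 0.1 * rng.standard_normal((K, D, H))     # illustrative kernel
first_level = np.maximum(np.einsum("nkd,kdh->nh", feature_maps, W_nfc), 0.0)

# Second level: standard GCN propagation, H' = D^-1/2 A D^-1/2 H W.
deg = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(deg, deg))
W_gcn = 0.1 * rng.standard_normal((H, H))
second_level = np.maximum(A_norm @ first_level @ W_gcn, 0.0)

print(second_level.shape)                        # (6, 4)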

Requirements

  • TensorFlow 1.x
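
A quick sanity check before training (a small convenience snippet, not part of the repository) to confirm that the installed TensorFlow is a 1.x release:

import tensorflow as tf

# The code targets the TF 1.x graph/session API, so fail early on a 2.x install.
assert tf.__version__.startswith("1."), f"Expected TensorFlow 1.x, got {tf.__version__}"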

Training

python train.py

You can also try it out in Colab if you don't want to install the requirements locally!

Note: Because of random initialization, your training results may not be exactly the same as those reported in the paper!

Data

In order to use your own data, you have to provide

  • an N by N adjacency matrix (N is the number of nodes),
  • an N by D feature matrix (D is the number of features per node), and
  • an N by E binary label matrix (E is the number of classes). [2]

Have a look at the load_data() function in utils.py for an example.
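
For illustration only (a toy construction, not the pickled Planetoid format that load_data() actually parses; whether sparse or dense containers are expected depends on that function), the three objects listed above could look like this:

import numpy as np
import scipy.sparse as sp

N, D, E = 4, 3, 2   # nodes, features per node, classes

# N x N binary adjacency matrix (symmetric for an undirected graph)
adj = sp.csr_matrix(np.array([[0, 1, 1, 0],
                              [1, 0, 0, 1],
                              [1, 0, 0, 1],
                              [0, 1, 1, 0]], dtype=np.float32))

# N x D feature matrix, one row of features per node
features = sp.lil_matrix(np.eye(N, D, dtype=np.float32))

# N x E binary (one-hot) label matrix
labels = np.zeros((N, E), dtype=np.float32)
labels[np.arange(N), [0, 1, 1, 0]] = 1.0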

In this example, we load citation network data (Cora, Citeseer or Pubmed). The original datasets can be found here: http://www.cs.umd.edu/~sen/lbc-proj/LBC.html. In our version (see data folder) we use dataset splits provided by https://github.com/kimiyoung/planetoid (Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov, Revisiting Semi-Supervised Learning with Graph Embeddings, ICML 2016).

You can specify a dataset by editing train.py, as sketched below.
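
If train.py follows the FLAGS pattern of the reference GCN code it builds on [2], the dataset choice is usually a single string flag near the top of the script. The excerpt below is a hypothetical illustration; check train.py for the actual flag names and defaults.

import tensorflow as tf

# Hypothetical excerpt -- the real flag name and default in train.py may differ.
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('dataset', 'cora', 'Dataset string: cora, citeseer, or pubmed.')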

Reference

[1] Li Zhang, Heda Song, Haiping Lu. Graph Node-Feature Convolution for Representation Learning. arXiv:1812.00086, 2018.

[2] Thomas N. Kipf, Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. arXiv:1609.02907, 2016.

Citation

@article{zhang2018graph,
  title={Graph node-feature convolution for representation learning},
  author={Zhang, Li and Song, Heda and Lu, Haiping},
  journal={arXiv preprint arXiv:1812.00086},
  year={2018}
}
@article{kipf2016semi,
  title={Semi-supervised classification with graph convolutional networks},
  author={Kipf, Thomas N and Welling, Max},
  journal={arXiv preprint arXiv:1609.02907},
  year={2016}
}

About

Implementation of Graph Node-Feature Convolution for Representation Learning in TensorFlow

https://arxiv.org/abs/1812.00086

License: MIT License


Languages

Language: Python 100.0%