CRIPAC-DIG / GCA

[WWW 2021] Source code for "Graph Contrastive Learning with Adaptive Augmentation"

Questions about the evaluation.

nnnnnzy opened this issue · comments

What is the purpose of testing and recording accuracy every 100 epochs during training? Isn't pre-training supposed to be an unsupervised process? According to the DGI paper and its code implementation, DGI only performs gradient descent during training, until the loss stops decreasing (early stopping), at which point training ends.
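For reference, here is a minimal sketch of that loss-based stopping rule in PyTorch (this is my illustration, not the actual GCA or DGI code; `train_step`, `num_epochs`, and `patience` are placeholder names):

```python
import torch

def pretrain(model, train_step, num_epochs=1000, patience=20):
    """DGI-style unsupervised pre-training loop: one gradient step per
    epoch on the contrastive loss, early-stopped once the loss has not
    improved for `patience` consecutive epochs. `train_step(model)` is
    a placeholder for a single optimization step returning the loss."""
    best_loss, wait = float('inf'), 0
    for epoch in range(1, num_epochs + 1):
        loss = train_step(model)
        if loss < best_loss:
            best_loss, wait = loss, 0
            # checkpoint the encoder with the lowest unsupervised loss
            torch.save(model.state_dict(), 'best_model.pkl')
        else:
            wait += 1
            if wait >= patience:
                break  # loss stopped decreasing: end pre-training
    # only now is the frozen encoder handed to the linear evaluator
    model.load_state_dict(torch.load('best_model.pkl'))
    return model
```

No downstream labels are touched anywhere inside this loop, which is what makes the protocol fully unsupervised.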

I don't think the InfoNCE loss value is necessarily inversely related to the linear-evaluation result, but steering the training of graph contrastive learning by the linear-evaluation result amounts to fitting the dataset. So when should the training of graph contrastive learning end? And how can different graph contrastive learning methods be compared fairly?

Also, the final accuracy calculation in the code implementation only evaluates a single random split of the dataset; it does not evaluate multiple splits (or multiple runs of logistic regression, as DGI does) and report the average accuracy.
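What I mean is something like the following sketch (assuming `embeddings` and `labels` are NumPy arrays holding the frozen node embeddings and ground-truth classes; the split ratio and number of splits are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def evaluate(embeddings, labels, n_splits=20, train_ratio=0.1):
    """Multi-split linear evaluation: repeat the random train/test
    split and the logistic-regression fit several times, and report
    mean and standard deviation instead of one split's accuracy."""
    accs = []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            embeddings, labels,
            train_size=train_ratio, random_state=seed, stratify=labels)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        accs.append(clf.score(X_te, y_te))
    return np.mean(accs), np.std(accs)
```

Reporting the mean and standard deviation over splits would make the numbers comparable with DGI's protocol.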