jhljx / CTGCN

CTGCN: k-core based Temporal Graph Convolutional Network for Dynamic Graphs (accepted by IEEE TKDE in 2020) https://ieeexplore.ieee.org/document/9240056

Dimensions do not match in VGRNN

AlessandroFazio opened this issue

Hello, @jhljx. First of all, I want to thank you for the great work you put into this open-source project.

While trying the VGRNN code, I found an error at line 497 in baseline/VGRNN.py. When concatenating phi_x_t and h[-1] there is a mismatch: phi_x_t has size (num_nodes x hidden_dim) while h[-1] has size (input_features x hidden_dim), so joining them on dim=1 raises an error unless input_features == num_nodes. That only happens by luck, or when there are no node features and they are initialized as an identity matrix of size num_nodes x num_nodes, which is the base case you support.
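
To make the mismatch concrete, here is a minimal, self-contained reproduction (the tensor names follow the issue; the shapes are illustrative, not taken from the actual code):

```python
import torch

num_nodes, input_features, hidden_dim = 100, 16, 32

phi_x_t = torch.randn(num_nodes, hidden_dim)     # (num_nodes, hidden_dim)
h = torch.zeros(1, input_features, hidden_dim)   # h[-1] is (input_features, hidden_dim)

# torch.cat on dim=1 requires equal sizes on dim=0;
# here 100 != 16, so this raises a RuntimeError unless input_features == num_nodes.
torch.cat([phi_x_t, h[-1]], dim=1)
```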

Have you considered this case and am I doing something wrong, or is this simply not supported yet? Thanks in advance.

Hello @jhljx, looking deeper into the code I think I have found the issue. I have not tried these new changes yet, but this is likely the source of the error.

I looked at the original code in the VGRNN repository provided by the authors. In their VGRNN forward method they pass x_in, the edge_list, and the initial hidden state (=None), as you do. However, if you look at their prediction.py, they pass x_list into torch.stack(.), which outputs a 3d tensor of size (T, num_nodes, input_features). They then initialize h as a 3d tensor of size (rnn_num_layers, x.size(1), hidden_dim). Here is the point: x.size(1) is num_nodes, which is why h[k] for any scalar index k is a (num_nodes, hidden_dim) 2d tensor.
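
A sketch of that shape logic, as I read it from the authors' prediction.py (variable names are illustrative, not copied from their repository):

```python
import torch

T, num_nodes, input_features = 5, 100, 16
rnn_num_layers, hidden_dim = 1, 32

x_list = [torch.randn(num_nodes, input_features) for _ in range(T)]
x = torch.stack(x_list)   # (T, num_nodes, input_features)

h = torch.zeros(rnn_num_layers, x.size(1), hidden_dim)
# x.size(1) == num_nodes, so h[k] has shape (num_nodes, hidden_dim),
# which matches phi_x_t at the concatenation step
```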

In your code, however, h is initialized with size (rnn_num_layers, input_features, hidden_dim). When input_features == num_nodes, as in the no-node-features mode, everything works fine and this goes unnoticed.
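
If that reading is right, the fix is a one-line change to the h initialization (a sketch following the names in this discussion, not necessarily the exact code in baseline/VGRNN.py):

```python
# buggy: dim 1 follows the feature dimension
h = torch.zeros(rnn_num_layers, input_features, hidden_dim)
# fixed: dim 1 must follow the number of nodes, matching the authors' code
h = torch.zeros(rnn_num_layers, num_nodes, hidden_dim)
```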

I will let you know whether this effectively fixes the issue as soon as I can try the new code. I hope this helps improve the project.

commented

This bug has been fixed. Thanks for your feedback.