BUPT-GAMMA / OpenHGNN

This is an open-source toolkit for Heterogeneous Graph Neural Networks (OpenHGNN) based on DGL.

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [14328, 334]] is at version 1; expected version 0 instead

wwddd66 opened this issue · comments

commented

When I run the HGSL model, the acm4GTN dataset works normally, but when I switch to the dblp4GTN or imdb4GTN dataset, this error occurs (the following is from dblp4GTN):

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [14328, 334]] is at version 1; expected version 0 instead.

I set undirected_relations = author-paper,paper-conference for the dblp4GTN dataset in the config.ini file. I can see that the tensor of shape [14328, 334] is the node type "paper" in the variable h_dict, but I don't know why it fails. Could you add the dblp4GTN and imdb4GTN datasets for the HGSL model? And what is the difference between acm4GTN and the other *4GTN datasets?
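For anyone hitting the same traceback: this is a generic PyTorch autograd error, not specific to OpenHGNN. A minimal sketch (plain PyTorch, not HGSL code) of how it arises is below: sigmoid's backward pass needs its own saved output, so mutating that output in place bumps its version counter and invalidates the saved autograd state.

```python
import torch

x = torch.ones(4, requires_grad=True)
y = x.sigmoid()       # autograd saves y for sigmoid's backward pass
y.mul_(2)             # in-place edit: y is now "at version 1; expected version 0"

err = None
try:
    y.sum().backward()
except RuntimeError as e:
    err = e           # "...modified by an inplace operation..."
print("reproduced:", err is not None and "inplace operation" in str(err))

# The out-of-place form leaves the saved tensor untouched and trains fine:
x2 = torch.ones(4, requires_grad=True)
y2 = x2.sigmoid() * 2
y2.sum().backward()
print("grad ok:", x2.grad is not None)
```

To find which line in HGSL performs the offending in-place write on the [14328, 334] "paper" tensor, wrapping the training step in `torch.autograd.set_detect_anomaly(True)` makes the backward traceback point at the forward operation that saved the tensor.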

@lazishu2000 Please review and address this issue. Relevant discussions: #137

commented

@lazishu2000 Please review and address this issue. Relevant discussions: #137

Thanks for your reply! In the HGSL model, I even set the variable mp_emb_dim = 0 to skip the semantic-level module, so that dblp4GTN and imdb4GTN, which lack initial metapath embeddings from the metapath2vec model, could still be used. But the same error still occurs, so I suspect the problem lies in the datasets; please fix it as soon as possible.

The difference is that acm4GTN has metapath embeddings, which you can find in:
hg.nodes['paper'].data / hg.nodes['subject'].data / hg.nodes['author'].data
while dblp4GTN does not. Simply setting mp_emb_dim = 0 seems to cause other problems with model operation.
For now this model only supports acm4GTN for training. If you have any other questions, please feel free to contact us.
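The distinction the maintainer describes can be checked programmatically. A hedged sketch follows, where a plain dict of tensors stands in for DGL's per-type `hg.nodes[ntype].data`; the key names `'h'`, `'pap'` are illustrative, not guaranteed by OpenHGNN.

```python
import torch

FEATURE_KEYS = {"h"}  # ordinary node-feature keys (assumed name)

def metapath_emb_keys(ndata):
    """Return the names of any tensors beyond plain features, i.e.
    candidate metapath embeddings stored alongside the node data."""
    return sorted(k for k in ndata if k not in FEATURE_KEYS)

# acm4GTN-like: features plus a metapath embedding; dblp4GTN-like: features only.
acm_like = {"paper": {"h": torch.zeros(3, 8), "pap": torch.zeros(3, 16)}}
dblp_like = {"paper": {"h": torch.zeros(3, 8)}}

print(metapath_emb_keys(acm_like["paper"]))   # ['pap'] -> has metapath embeddings
print(metapath_emb_keys(dblp_like["paper"]))  # []      -> none, like dblp4GTN
```

With a real DGL heterograph, the same check would iterate over `hg.ntypes` and inspect `hg.nodes[ntype].data.keys()`.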

commented

The difference is that acm4GTN has metapath embeddings, which you can find in:
hg.nodes['paper'].data / hg.nodes['subject'].data / hg.nodes['author'].data
while dblp4GTN does not. Simply setting mp_emb_dim = 0 seems to cause other problems with model operation.
For now this model only supports acm4GTN for training. If you have any other questions, please feel free to contact us.

Thanks for your reply! First, I did not simply set the variable mp_emb_dim = 0. What I mean is that I commented out the code that uses the metapath embeddings, precisely to eliminate the difference you described between the datasets, namely whether they carry metapath embeddings. So what should I do if I want to use other datasets with this model? The HGSL paper uses three datasets: acm, dblp, and yelp, so I think the model should support datasets besides acm. Could you suggest some solutions?

Thanks for your question! As requested, we are going to add the dblp and yelp datasets for HGSL training, treating this as an extension of the HGSL model.
We will make it work as soon as possible, but the extension won't be our first priority. If you want results faster, adding metapath embeddings yourself, as we mentioned in #137, is a possible solution.
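A hedged sketch of that workaround: attach metapath embeddings to the node data yourself when the dataset ships without them. Here randomly initialized tensors stand in for real metapath2vec output, and the function, key names (`pap`, `psp`), and dimension are illustrative assumptions, not OpenHGNN API.

```python
import torch

def add_placeholder_mp_emb(ndata, keys=("pap", "psp"), dim=64, seed=0):
    """Attach a (num_nodes, dim) random tensor per metapath key.
    `ndata` is a dict of tensors standing in for hg.nodes[ntype].data."""
    g = torch.Generator().manual_seed(seed)
    num_nodes = next(iter(ndata.values())).shape[0]
    for k in keys:
        ndata[k] = torch.randn(num_nodes, dim, generator=g)
    return ndata

# dblp4GTN 'paper' features, shaped as in the traceback above:
paper = {"h": torch.zeros(14328, 334)}
add_placeholder_mp_emb(paper)
print(paper["pap"].shape)  # torch.Size([14328, 64])
```

Random placeholders only unblock the code path; for meaningful results the tensors should come from an actual metapath2vec run over the heterogeneous graph.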

We now support dblp4GTN and yelp4HGSL for the HGSL model!
Relevant commits:
1b31a37
eec0bdf
For your reference!

commented

We now support dblp4GTN and yelp4HGSL for the HGSL model! Relevant commits: 1b31a37 eec0bdf For your reference!

Thanks! The results are almost the same as those from the datasets I constructed from the HGSL source code.