Ask A Question
Williamdayu opened this issue
I've learned a lot from this project! But I ran into a problem.
Do I need to split train.csv, valid.csv, and test.csv by myself? I see that the original dataset does not contain the corresponding folders, and I get the following error:
FileNotFoundError: [Errno 2] File /data/uci/lp_data_0/2004-05_train.csv does not exist: '/data/uci/lp_data_0/2004-05_train.csv'
Thank you
So far, I can only see these folders:
0.input, 1.format, nodes_set
Hello there,
I think you can run the following command to generate the training data for the link prediction task:
python3 main.py --config=config/uci.json --task=link_pred
Hope this helps!
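For intuition, here is a minimal sketch (not CTGCN's actual generation code) of how per-snapshot splits such as 2004-05_train.csv could be produced from edge-list files; the folder paths, column format, separator, and split ratio are all assumptions, and the real task additionally samples negative (non-existing) edges:
import os
import pandas as pd
from sklearn.model_selection import train_test_split

input_dir = 'data/uci/1.format'    # assumed: one edge-list CSV per time snapshot
output_dir = 'data/uci/lp_data_0'  # folder the link prediction task later reads from
os.makedirs(output_dir, exist_ok=True)

for file_name in sorted(os.listdir(input_dir)):
    date = os.path.splitext(file_name)[0]  # e.g. '2004-05'
    edges = pd.read_csv(os.path.join(input_dir, file_name))
    # hold out 20% of the observed edges as positive test examples
    train_edges, test_edges = train_test_split(edges, test_size=0.2, random_state=0)
    train_edges.to_csv(os.path.join(output_dir, date + '_train.csv'), index=False)
    test_edges.to_csv(os.path.join(output_dir, date + '_test.csv'), index=False)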
I still have this problem.
I ran the following command:
python3 main.py --config=config/uci.json --task=link_pred
root@ss:/home/ss/CTGCN# python3 main.py --config=config/uci.json --task=link_pred
args: Namespace(config=['config/uci.json'], method=None, task='link_pred')
Start link prediction!
Current method is :CTGCN-C
method = CTGCN-C
Traceback (most recent call last):
File "main.py", line 129, in
main(sys.argv)
File "main.py", line 111, in main
link_prediction_task(args_dict)
File "main.py", line 65, in link_prediction_task
link_prediction(args)
File "/home/ss/CTGCN/evaluation/link_prediction.py", line 332, in link_prediction
link_predictor.link_prediction_all_method(method_list=method_list, worker=worker)
File "/home/ss/CTGCN/evaluation/link_prediction.py", line 254, in link_prediction_all_method
self.link_prediction_all_time(method)
File "/home/ss/CTGCN/evaluation/link_prediction.py", line 219, in link_prediction_all_time
train_edges = pd.read_csv(os.path.join(self.lp_edge_base_path, date + '_train.csv'), sep=self.file_sep).values
File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 448, in _read
parser = TextFileReader(fp_or_buf, **kwds)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 880, in init
self._make_engine(self.engine)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 1114, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py", line 1891, in init
self._reader = parsers.TextReader(src, **kwds)
File "pandas/_libs/parsers.pyx", line 374, in pandas._libs.parsers.TextReader.cinit
File "pandas/_libs/parsers.pyx", line 674, in pandas._libs.parsers.TextReader._setup_parser_source
FileNotFoundError: [Errno 2] File /data/uci/lp_data_0/2004-05_train.csv does not exist: '/data/uci/lp_data_0/2004-05_train.csv'
Sorry for the delayed response.
I think you need to modify the configuration file. On line 879 there is a parameter called "generate", which controls whether the training/test data for each task is generated. You can set this parameter to true.
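For illustration, the change is simply switching

"generate": false

to

"generate": true

in the link prediction part of config/uci.json (the exact nesting and surrounding keys depend on the configuration file; only the "generate" flag itself is described above).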
If you want to evaluate the GNN methods on the link prediction task or other tasks, you first need to generate node embeddings for each method, using the command shown in the README.md of https://github.com/jhljx/CTGCN:
python3 main.py --config=config/uci.json --task=embedding --method=CTGCN-C
Moreover, you can find the descriptions of the configuration parameters in the README.md of https://github.com/jhljx/CTGCN/tree/master/config.
When I run the link prediction task for the first time, I set the "generate" parameter to true, so the program generates training and test data for all compared graph embedding methods. When I run the link prediction task again afterwards, I set the "generate" parameter to false; the program then does not generate new data but reuses the previously generated training and test data.
Thank you! The problem has been solved. Your work is very meaningful; I'm looking forward to the formal publication of the paper.
Ok, I will close this issue.