cdancette / rubi.bootstrap.pytorch

NeurIPS 2019 Paper: RUBi : Reducing Unimodal Biases for Visual Question Answering

Unable to run the compare (evaluation) script.

jokieleung opened this issue · comments

commented

Hi, I have trained the baseline and RUBi models on the VQA-CP v2 dataset. When I tried to run python -m rubi.compare_vqa2_rubi_val -d logs/vqa2/rubi logs/vqa2/baseline, I got this error:

FileNotFoundError: [Errno 2] No such file or directory: 'logs/vqacp2/rubi/logs_val_oe.json'

I then checked the log directories of the RUBi and baseline networks, but I couldn't find the JSON log files for the 'open-ended' evaluation; the log.txt looked like this:

[W 2020-02-05 16:08:34] ...tstrap/views/plotly.py.76: Json log file 'logs_rubi_val_oe' not found in 'logs/vqacp2/rubi/logs_rubi_val_oe.json'
[I 2020-02-05 16:08:34] ...trap/engines/engine.py.252: Evaluating model on valset for epoch 0
[W 2020-02-05 16:08:34] ...tstrap/views/plotly.py.76: Json log file 'logs_train_oe' not found in 'logs/vqacp2/rubi/logs_train_oe.json'
[W 2020-02-05 16:08:34] ...tstrap/views/plotly.py.76: Json log file 'logs_q_train_oe' not found in 'logs/vqacp2/rubi/logs_q_train_oe.json'
[W 2020-02-05 16:08:34] ...tstrap/views/plotly.py.76: Json log file 'logs_rubi_train_oe' not found in 'logs/vqacp2/rubi/logs_rubi_train_oe.json'
[W 2020-02-05 16:08:34] ...tstrap/views/plotly.py.76: Json log file 'logs_q_val_oe' not found in 'logs/vqacp2/rubi/logs_q_val_oe.json'
[W 2020-02-05 16:08:34] ...tstrap/views/plotly.py.76: Json log file 'logs_val_oe' not found in 'logs/vqacp2/rubi/logs_val_oe.json'
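For reference, the missing files can be enumerated with a short script like this. The filenames come from the warnings above; the checking helper itself is just an illustrative sketch, not part of the repository:

```python
import os

# Open-ended log files the compare/plot scripts look for,
# as listed in the warnings above.
EXPECTED_OE_LOGS = [
    "logs_rubi_val_oe.json",
    "logs_train_oe.json",
    "logs_q_train_oe.json",
    "logs_rubi_train_oe.json",
    "logs_q_val_oe.json",
    "logs_val_oe.json",
]

def missing_oe_logs(log_dir):
    """Return the expected open-ended JSON logs absent from log_dir."""
    return [name for name in EXPECTED_OE_LOGS
            if not os.path.exists(os.path.join(log_dir, name))]

if __name__ == "__main__":
    print(missing_oe_logs("logs/vqacp2/rubi"))
```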

I would like to know how to generate these log files so I can evaluate the 'open-ended' accuracy.
Thank you very much.

commented

@jokieleung

The open-ended accuracy calculation relies on this external library: https://github.com/Cadene/block.bootstrap.pytorch/tree/master/block/external/VQA

block is supposed to be installed via the requirements file: https://github.com/cdancette/rubi.bootstrap.pytorch/blob/master/requirements.txt#L1

The parallelized computation of the open-ended accuracy is supposed to be triggered at the end of each epoch:
https://github.com/Cadene/block.bootstrap.pytorch/blob/master/block/models/metrics/vqa_accuracies.py#L174

If it is not, you can run the command by hand. First, set rm to 0 so that the result files are saved:
https://github.com/Cadene/block.bootstrap.pytorch/blob/master/block/models/metrics/compute_oe_accuracy.py#L77

Second, run the command displayed in the logs:
https://github.com/Cadene/block.bootstrap.pytorch/blob/master/block/models/metrics/vqa_accuracies.py#L177

It should be something like:

cd $CODE/rubi.bootstrap.pytorch
python -m block.models.metrics.compute_oe_accuracy ARGS
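The rm flag mentioned above behaves like a cleanup switch: when it is non-zero, intermediate result files are deleted after scoring. The sketch below illustrates that pattern only; it is not the library's actual code, and the helper name is made up:

```python
import os

def maybe_remove(path, rm):
    """Illustration of an rm-style flag: with rm truthy the result file
    is deleted after scoring; with rm=0 it is kept on disk for reuse
    (e.g. by a later comparison script)."""
    if rm:
        os.remove(path)
        return False  # file was deleted
    return True  # file kept for later inspection
```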

In a parallel universe, we took the time to code this properly.

commented

A quick fix:

cd $CODE/rubi.bootstrap.pytorch
cd ..
pip uninstall block.bootstrap.pytorch
git clone --recursive https://github.com/Cadene/block.bootstrap.pytorch.git
ln -s block.bootstrap.pytorch/block rubi.bootstrap.pytorch/
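To see what the quick fix produces, here is a self-contained sketch that reproduces the symlink layout in a scratch directory (the paths are stand-ins for the real checkouts, and the link target is written relative to the link's own directory so it resolves):

```shell
# Recreate the expected directory layout in a temp dir.
tmp=$(mktemp -d)
mkdir -p "$tmp/block.bootstrap.pytorch/block" "$tmp/rubi.bootstrap.pytorch"

# Link the block package into the rubi repo; the target is relative
# to the directory containing the link.
ln -s ../block.bootstrap.pytorch/block "$tmp/rubi.bootstrap.pytorch/block"

# The rubi repo should now see the block package at its top level.
test -d "$tmp/rubi.bootstrap.pytorch/block" && echo "symlink ok"
```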
commented

Ok, I got it. Thank you very much.