ignacio-rocco / ncnet

PyTorch code for Neighbourhood Consensus Networks


The PCK on pf-pascal dataset is 75.35

bunKiatIunn opened this issue · comments

Run:

    python train.py --ncons_kernel_sizes 5 5 5 --ncons_channels 16 16 1 --dataset_image_path datasets/pf-pascal --dataset_csv_path datasets/pf-pascal/image_pairs/

The PCK on the pf-pascal dataset is 75.35 (78.9 in the paper).
Are there any other important hyperparameters? Thank you.
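For context, PCK here is the percentage of correct keypoints: a transferred keypoint counts as correct if it lands within a tolerance radius of the ground truth. A minimal sketch (the function name and the `alpha * max(h, w)` threshold convention are assumptions; PF-PASCAL evaluations commonly use alpha = 0.1):

```python
import numpy as np

def pck(pred_kps, gt_kps, img_size, alpha=0.1):
    """Fraction of predicted keypoints within alpha * max(h, w)
    of their ground-truth locations.

    pred_kps, gt_kps: (N, 2) arrays of (x, y) coordinates
    img_size: (h, w) of the reference image
    alpha: tolerance factor (0.1 is a common choice for PF-PASCAL)
    """
    threshold = alpha * max(img_size)
    dists = np.linalg.norm(pred_kps - gt_kps, axis=1)
    return float((dists <= threshold).mean())
```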

Could you share your environment settings? PyTorch version? System version?
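A quick way to gather the environment details asked about above (a simple sketch; it only reports versions, nothing repo-specific):

```python
import platform

import torch

# Report the basics: Python, OS, PyTorch, and CUDA setup
print("Python:", platform.python_version())
print("OS:", platform.platform())
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)
    print("GPU:", torch.cuda.get_device_name(0))
```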

  • First, note that there are two stages in the training procedure described in the original paper:
    • Training stage
    • Finetune stage

I got results similar to yours in stage 1, but I can only reach 76.50% after the finetune stage. Since the hyperparameters for stage 2 are not given, I finetuned all blocks of the last residual layer as the paper describes.

@ignacio-rocco Would you mind sharing more training details with us? Thanks a lot!

  • Download the checkpoints provided by the author.
  • Load a checkpoint to recover the training settings, which are saved alongside the weights, e.g.,

    {'epoch': epoch,
     'args': args,
     'state_dict': model.state_dict(),
     'best_test_loss': best_test_loss,
     'optimizer': optimizer.state_dict(),
     'train_loss': train_loss,
     'test_loss': test_loss}

According to checkpoint['args'], fe_finetune_params is set to 1.
The PCK on pf-pascal I got is 77.63 (78.9 in the paper).
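Recovering those settings from a downloaded checkpoint can look like the following. This is a self-contained sketch: a tiny stand-in checkpoint is written first so the snippet runs without the real file; with the author's checkpoint, skip the save and point `torch.load` at the downloaded path (there, `checkpoint['args']` is likely an argparse Namespace, so inspect it with `vars(...)`).

```python
import torch

# Placeholder filename; replace with the author's released checkpoint
path = 'demo_checkpoint.pth.tar'

# Stand-in checkpoint mimicking the dict layout shown above
torch.save({'epoch': 5,
            'args': {'fe_finetune_params': 1},  # stand-in for the saved args
            'best_test_loss': 0.01}, path)

checkpoint = torch.load(path, map_location='cpu')  # no GPU required
print(checkpoint['args'])                          # training hyperparameters
print('stopped at epoch:', checkpoint['epoch'])
print('best test loss:', checkpoint['best_test_loss'])
```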

I use multi-GPU training (2 GPUs) with batch size 32 and get 77.3% for the first stage.
I found the results vary between runs, with performance ranging from 74.8 to 77.3.
I plan to add the following to reduce this noise:

    import random

    import numpy as np
    import torch

    seed = 0  # any fixed value

    # Seed every RNG that affects training
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    random.seed(seed)
    np.random.seed(seed)

    # Make cuDNN deterministic (may slow training down)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

I will report the performance and the log later.
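One more source of randomness worth pinning down: DataLoader worker processes seed NumPy and `random` independently of the main process, so data shuffling and augmentation can still vary between runs even with the global seeds above. A sketch following PyTorch's reproducibility recipe (the commented-out dataset and loader arguments are placeholders):

```python
import random

import numpy as np
import torch

def seed_worker(worker_id):
    # Derive each worker's NumPy/random seed from its torch seed,
    # so all RNGs inside the worker are pinned down
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

# Fixed generator controls the shuffling order across runs
g = torch.Generator()
g.manual_seed(0)

# loader = torch.utils.data.DataLoader(dataset, batch_size=16,
#                                      shuffle=True, num_workers=4,
#                                      worker_init_fn=seed_worker,
#                                      generator=g)
```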