Implementation for ACML 2021 paper A Two-Stage Training Framework with Feature-Label Matching Mechanism for Learning from Label Proportions
We use the code implemented by HobbitLong to train the feature extractor.
Part of the code is borrowed from kuangliu; thanks to him for sharing it.
- Download the pretrained feature-extractor weights from here
  (`100_512_1000.pth`: CIFAR100, batch size 512, 1000 training epochs; `10_512_1000`: CIFAR10).
  Or you can train them on your own following the guidance in SupContrast,
  making sure you use the SimCLR method instead of the Supervised Contrastive Learning method.
- Train and evaluate:

```
python main.py --learning_rate 1 --packet_size 128 --ckpt pretrain_feature_extractor --dataset cifar100 --thre 0.01 --thre2 0.01 --acc_save_path acc.pkl --epochs 200
```
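Once the weights are downloaded, they can be restored into the encoder before running `main.py`. The sketch below is a minimal illustration of that load step, not the repository's actual code: the `'model'` checkpoint key and the tiny stand-in encoder are assumptions, and the real backbone comes from SupContrast.

```python
# Minimal sketch (assumed names): restoring a pretrained feature-extractor
# checkpoint such as 100_512_1000.pth. The encoder here is a stand-in.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

# Simulate a downloaded checkpoint file, then restore it.
torch.save({'model': encoder.state_dict()}, '/tmp/100_512_1000.pth')
ckpt = torch.load('/tmp/100_512_1000.pth', map_location='cpu')
encoder.load_state_dict(ckpt['model'])

# CIFAR-sized input batch -> 128-d features.
features = encoder(torch.randn(4, 3, 32, 32))
print(features.shape)
```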
- As for "Ours without FLMm", you can simply comment out the following code in the function `train()` in `main.py` to get a similar result:
```python
nll_loss = nn.NLLLoss()
for r in range(int(1024 / opt.packet_size)):
    if not threshold_label[r] or not batch_features[r]:
        continue
    for lal in threshold_label[r]:
        curt_bag_feature = batch_features[r]
        f = torch.stack(curt_bag_feature).cuda()
        tar = torch.tensor([lal for kkk in range(f.shape[0])], dtype=torch.long).cuda()
        output = classifier(f.detach())  # [1024, 10]
        outputs = F.log_softmax(output, dim=-1)
        nll = 0.0005 * nll_loss(outputs, tar)
        optimizer.zero_grad()
        nll.backward()
        optimizer.step()