yitu-opensource / T2T-ViT

ICCV2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet


Fine-tuning

davidmoralesrodriguez opened this issue · comments

Hi, is it possible to fine-tune a model on a custom dataset using the pretrained ImageNet models?

Of course you can.
Simply do the following:

from timm.models import create_model

model = create_model(
    args.model,
    pretrained=args.pretrained,
    num_classes=args.num_classes,
    drop_rate=args.drop,
    drop_connect_rate=args.drop_connect,  # DEPRECATED, use drop_path
    drop_path_rate=args.drop_path,
    drop_block_rate=args.drop_block,
    global_pool=args.gp,
    bn_tf=args.bn_tf,
    bn_momentum=args.bn_momentum,
    bn_eps=args.bn_eps,
    checkpoint_path=args.initial_checkpoint,
    img_size=args.img_size)
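
For a concrete end-to-end picture, here is a minimal sketch of fine-tuning the model created above on a custom dataset. It assumes the 'model' and 'args' from the snippet; the dataset path, preprocessing, and plain SGD loop are illustrative choices, not the repo's official recipe:

import torch
from torchvision import datasets, transforms

# Assumes 'model' is the T2T-ViT instance returned by create_model(...) above,
# with num_classes already set to the number of classes in the custom dataset.
model = model.cuda()

# Common ImageNet-style preprocessing (illustrative; match it to your own recipe).
transform = transforms.Compose([
    transforms.Resize((args.img_size, args.img_size)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical ImageFolder layout: one sub-directory per class.
train_set = datasets.ImageFolder('path/to/custom_dataset/train', transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64,
                                           shuffle=True, num_workers=4)

# Plain SGD fine-tuning loop; learning rate, weight decay, and epoch count are illustrative.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, targets in train_loader:
        images, targets = images.cuda(), targets.cuda()
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()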

I will update the code to show how to fine-tune on other datasets such as CIFAR-10/100 soon.

We have released the code for fine-tuning our model on CIFAR-10/100 in 'transfer_learning.py'.
You can also load our model and its pretrained parameters with just a few lines; see the sketch below.
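
Here is a minimal sketch of that loading step, assuming a locally downloaded checkpoint file. The 'models' import, the checkpoint path and key names, and 'head.' as the classifier prefix are assumptions; 'transfer_learning.py' in the repo is the authoritative version:

import torch
from timm.models import create_model
# Importing the repo's model definitions registers the T2T-ViT variants with timm;
# the module name 'models' is an assumption about where the repo's code lives.
import models  # noqa: F401

# Build a T2T-ViT (e.g. t2t_vit_14) with a fresh head for the target dataset.
model = create_model('t2t_vit_14', num_classes=10, img_size=224)

# Load locally downloaded ImageNet-pretrained weights, skipping the classification
# head because its shape no longer matches the new number of classes.
checkpoint = torch.load('path/to/t2t_vit_14_pretrained.pth', map_location='cpu')
state_dict = checkpoint.get('state_dict_ema', checkpoint.get('state_dict', checkpoint))
state_dict = {k: v for k, v in state_dict.items() if not k.startswith('head.')}
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing keys:', missing)        # should list only the new head parameters
print('unexpected keys:', unexpected)  # should be empty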