microsoft / Semi-supervised-learning

A Unified Semi-Supervised Learning Codebase (NeurIPS'22)

Home Page: https://usb.readthedocs.io

RuntimeError: The size of tensor a (1025) must match the size of tensor b (257) at non-singleton dimension 1

ArnabPurk opened this issue

I am using the library to run semi-supervised models on my dataset. I used the custom-dataset code given in the Google Colab notebook and converted my dataset images to CSV. When I run the training code I get `RuntimeError: The size of tensor a (1025) must match the size of tensor b (257) at non-singleton dimension 1`. My code is below; please suggest possible solutions.
```python
import numpy as np
import pandas as pd
import torchvision.transforms as transforms

from semilearn import (BasicDataset, Trainer, get_algorithm, get_config,
                       get_data_loader, get_net_builder, split_ssl_data)

config = {
    'algorithm': 'fixmatch',
    'net': 'vit_tiny_patch2_32',
    'use_pretrain': True,
    'pretrain_path': 'https://github.com/microsoft/Semi-supervised-learning/releases/download/v.0.0.0/vit_tiny_patch2_32_mlp_im_1k_32.pth',

    # Optimization configs
    'epoch': 50,
    'num_train_iter': 5000,
    'num_eval_iter': 1000,
    'num_log_iter': 50,
    'optim': 'AdamW',
    'lr': 5e-4,
    'layer_decay': 0.5,
    'batch_size': 16,
    'eval_batch_size': 16,

    # Dataset configs
    'dataset': 'custom',
    'num_labels': 3500,
    'num_classes': 7,
    'img_size': 64,
    'crop_ratio': 0.875,
    'data_dir': '/kaggle/input/augmented-balanced/augmented_balanced_splitted',  # change this to your main folder path

    # Algorithm-specific configs
    'hard_label': True,
    'uratio': 2,
    'ulb_loss_ratio': 1.0,

    # Device configs
    'gpu': 0,
    'world_size': 1,
    'num_workers': 2,
    'distributed': False,
}
config = get_config(config)

# create model and specify algorithm
algorithm = get_algorithm(config, get_net_builder(config.net, from_name=False),
                          tb_log=None, logger=None)

# df holds the training CSV (read earlier in the notebook; path omitted in the issue)
image_data = df.drop("label", axis=1).values
data = image_data.reshape((-1, 64, 64, 3))
data = np.uint8(data)
target = df["label"]
lb_data, lb_target, ulb_data, ulb_target = split_ssl_data(
    config, data, target, config.num_classes, config.num_labels, include_lb_to_ulb=False)

train_transform = transforms.Compose([transforms.ToTensor(),
                                      transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])

train_strong_transform = transforms.Compose([transforms.ToTensor(),
                                             transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])

lb_dataset = BasicDataset(config.algorithm, lb_data, lb_target, config.num_classes,
                          train_transform, is_ulb=False)
ulb_dataset = BasicDataset(config.algorithm, ulb_data, ulb_target, config.num_classes,
                           train_transform, is_ulb=True, strong_transform=train_strong_transform)

# build the evaluation set
df2 = pd.read_csv("/kaggle/input/unnormalized-64x64/image_data_64x64_validation.csv")
df2['label'] = df2['label'].astype(int)
image_data2 = df2.drop("label", axis=1).values
eval_data = image_data2.reshape((-1, 64, 64, 3))
eval_data = np.uint8(eval_data)
eval_target = df2["label"]
eval_transform = transforms.Compose([transforms.ToTensor(),
                                     transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])

eval_dataset = BasicDataset(config.algorithm, eval_data, eval_target, config.num_classes,
                            eval_transform, is_ulb=False)

# define data loaders
train_lb_loader = get_data_loader(config, lb_dataset, config.batch_size)
train_ulb_loader = get_data_loader(config, ulb_dataset, int(config.batch_size * config.uratio))
eval_loader = get_data_loader(config, eval_dataset, config.eval_batch_size)

# training and evaluation
trainer = Trainer(config, algorithm)
trainer.fit(train_lb_loader, train_ulb_loader, eval_loader)
```
(screenshot of the error traceback attached)

I am facing the same problem. How did you solve it, please?


Hi. The bug is caused by a mismatch between the input image size and the image size the model accepts. The checkpoint `vit_tiny_patch2_32_mlp_im_1k_32.pth` can only take 32x32 images, but the config sets `'img_size': 64`. With a patch size of 2, a 64x64 input produces (64/2)^2 + 1 = 1025 tokens (patches plus the class token), while the pretrained positional embedding has (32/2)^2 + 1 = 257 entries; those are exactly the two sizes in the error message.
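Below is a minimal sketch of one possible fix, assuming you keep this checkpoint: declare `'img_size': 32` in the config dict and downscale the 64x64 CSV images to 32x32 inside every transform pipeline, so the patch grid matches the pretrained positional embeddings. The placement of `transforms.Resize` assumes `BasicDataset` hands the uint8 arrays to the transforms as PIL images; the alternative is to choose a network/checkpoint built for your native 64x64 resolution.

```python
import torchvision.transforms as transforms

# 1) In the config dict, before get_config(config) is called:
#    'img_size': 32,   # vit_tiny_patch2_32 and its checkpoint expect 32x32 inputs

# 2) Downscale the 64x64 images in every transform pipeline.
resize = transforms.Resize((32, 32))  # 64x64 -> 32x32

train_transform = transforms.Compose([
    resize,
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
train_strong_transform = transforms.Compose([
    resize,
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
eval_transform = transforms.Compose([
    resize,
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
```

Note also that the code in the question uses the same transform for the weak and strong branches; FixMatch normally relies on a genuinely stronger augmentation (e.g. RandAugment) for the unlabeled data.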