Is the dataset wrong?
youzi260 opened this issue
My dataset uses "MSCOCO/COCO-2015/annotations/captions_train2014.json". When I run the code, the following error occurs. Is the dataset wrong?
#########################################################################
Not using distributed mode
Creating dataset
Creating model
reshape position embedding from 196 to 256
_IncompatibleKeys(missing_keys=[], unexpected_keys=['head.weight', 'head.bias'])
Downloading: 100% 440M/440M [05:36<00:00, 1.31MB/s]
Traceback (most recent call last):
File "/home/whhhh/ZTTsar/TCL-main/Pretrain.py", line 204, in
main(args, config)
File "/home/whhhh/ZTTsar/TCL-main/Pretrain.py", line 121, in main
model = ALBEF(config=config, text_encoder=args.text_encoder, tokenizer=tokenizer, init_deit=True)
File "/home/whhhh/ZTTsar/TCL-main/models/model_pretrain.py", line 74, in init
[self.text_encoder,self.text_encoder_m],
File "/home/whhhh/anaconda3/envs/yolov7/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1265, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'ALBEF' object has no attribute 'text_encoder_m'
Process finished with exit code 1
Hi, according to the error message, this is not related to the data. Do you get the same error with other datasets?
BTW, are you fine-tuning a pre-trained checkpoint on the COCO dataset?
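For reference, the traceback means that self.text_encoder_m is accessed in the model's __init__ (while building model_pairs) before it has been assigned, so nn.Module.__getattr__ raises the AttributeError. Below is a minimal, self-contained sketch that reproduces the same error; it is illustrative only, not the TCL code (the Demo class and layer sizes are made up):

```python
# Minimal sketch reproducing the same AttributeError (illustrative only,
# not the actual TCL code; the class name and layer sizes are invented).
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self):
        super().__init__()
        self.text_encoder = nn.Linear(8, 8)
        # self.text_encoder_m is referenced here before it is assigned, so
        # nn.Module.__getattr__ raises:
        #   AttributeError: 'Demo' object has no attribute 'text_encoder_m'
        self.model_pairs = [[self.text_encoder, self.text_encoder_m]]
        self.text_encoder_m = nn.Linear(8, 8)  # momentum copy created too late

Demo()
```

If you have not modified models/model_pretrain.py, it is worth checking that the block creating the momentum encoders (text_encoder_m, etc.) still runs before model_pairs is built around line 74. Either way, the error is independent of which annotation JSON you use.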