To train on my own dataset
xxxpsyduck opened this issue · comments
Hi. I created an lmdb dataset from my own data by running create_lmdb_dataset.py. Then I ran the train command on it and got the following output:
CUDA_VISIBLE_DEVICES=0 python3 train.py --train_data result/train --valid_data result/test --Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction Attn
dataset_root: result/train
opt.select_data: ['MJ', 'ST']
opt.batch_ratio: ['0.5', '0.5']
dataset_root: result/train dataset: MJ
Traceback (most recent call last):
File "train.py", line 283, in <module>
train(opt)
File "train.py", line 26, in train
train_dataset = Batch_Balanced_Dataset(opt)
File "/home/mor-ai/Work/deep-text-recognition-benchmark/dataset.py", line 37, in __init__
_dataset = hierarchical_dataset(root=opt.train_data, opt=opt, select_data=[selected_d])
File "/home/mor-ai/Work/deep-text-recognition-benchmark/dataset.py", line 106, in hierarchical_dataset
concatenated_dataset = ConcatDataset(dataset_list)
File "/home/mor-ai/.local/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 187, in __init__
assert len(datasets) > 0, 'datasets should not be an empty iterable'
AssertionError: datasets should not be an empty iterable
Can you help me resolve this?
One more thing: my dataset has data in non-latin language. Do I need to make any modifications to train.py or any other files?
Hello,
- change
deep-text-recognition-benchmark/train.py
Lines 229 to 232 in 6dc16df
into
parser.add_argument('--select_data', type=str, default='/',
help='select training data (default is MJ-ST, which means MJ and ST used as training data)')
parser.add_argument('--batch_ratio', type=str, default='1',
help='assign ratio for each selected data in the batch')
- you should change opt.character to your own character list.
deep-text-recognition-benchmark/train.py
Line 239 in 6dc16df
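For concreteness, the opt.character change amounts to replacing the default alphabet string in the argparse definition. A minimal sketch (the default shown is the one shipped in train.py; the Cyrillic replacement is just a hypothetical example):

```python
import argparse

parser = argparse.ArgumentParser()
# train.py ships with the 36-character Latin alphabet as the default
parser.add_argument('--character', type=str,
                    default='0123456789abcdefghijklmnopqrstuvwxyz',
                    help='character label set')
opt = parser.parse_args([])

# for your own data, replace the default with your own character list,
# e.g. digits plus a Cyrillic alphabet (hypothetical example):
opt.character = '0123456789абвгдеёжзийклмнопрстуфхцчшщъыьэюя'
print('ё' in opt.character)
```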
(2020.8.20 updated) Please read the data filtering part carefully.
As a default setting, we filter out:
- length filtering: images whose len(label) > batch_max_length (default: 25)
- character filtering: images containing characters that are not in opt.character
Furthermore, as a default, we make the labels lowercase here.
To train the model for uppercase, comment out these lines (do not use the opt.sensitive option).
So, for your own dataset, you should modify opt.character and comment out these lines.
Alternatively, you can just disable the filtering part entirely (use the --data_filtering_off option), or modify this part and comment out these lines.
Instead of the character filtering part, you can simply use an [UNK] token with the modifications below (for the attention decoder case).
- Add the [UNK] token: change
deep-text-recognition-benchmark/utils.py
Line 108 in 3c2c89a
to
list_token = ['[GO]', '[s]', '[UNK]']
- Change
deep-text-recognition-benchmark/utils.py
Line 136 in 3c2c89a
to
text = [self.dict[char] if char in self.dict else self.dict['[UNK]'] for char in text]
If you do not use the character filtering part and use the [UNK] token, you should comment out these lines as well.
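Put together, the [UNK] changes to the attention label converter look roughly like this (a condensed sketch of the idea, not the full AttnLabelConverter class):

```python
class AttnLabelConverterSketch:
    """Condensed sketch of the [UNK] modification to AttnLabelConverter."""

    def __init__(self, character):
        # [GO] starts decoding, [s] ends it, [UNK] absorbs unseen characters
        list_token = ['[GO]', '[s]', '[UNK]']
        self.character = list_token + list(character)
        self.dict = {c: i for i, c in enumerate(self.character)}

    def encode(self, text):
        # characters missing from the dict map to [UNK] instead of
        # raising a KeyError
        return [self.dict[ch] if ch in self.dict else self.dict['[UNK]']
                for ch in text]

conv = AttnLabelConverterSketch('abc')
print(conv.encode('abz'))  # 'z' is out of the character set -> [UNK] index
```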
Hope it helps.
Best
Thanks for the reply. I'm working with Japanese in particular, which has thousands of characters. So I have to copy all of them into the character list. Am I correct?
@boy977
I recommend:
- Make a character list file, such as char_list.txt, which contains the thousands of characters.
- Load char_list.txt in train.py, then set opt.character = char_loaded, as in
deep-text-recognition-benchmark/train.py
Line 266 in 6dc16df
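A minimal sketch of that loading step (char_list.txt is an assumed file name, holding one character per line):

```python
def load_character_list(path='char_list.txt'):
    """Read one character per line and join them into the single
    string that opt.character expects."""
    with open(path, encoding='utf-8') as f:
        return ''.join(line.rstrip('\n') for line in f if line.rstrip('\n'))

# usage in train.py, before the label converter is built:
#   opt.character = load_character_list('char_list.txt')
```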
Best
@ku21fan Hello. I tried to use the "--PAD" option along with "--rgb" but I got the error
RuntimeError: The expanded size of the tensor (1) must match the existing size (3) at non-singleton dimension 0. Target sizes: [1, 100, 100]. Tensor sizes: [3, 100, 100]
It seems like the --PAD option only works with grayscale images, or did I do something wrong?
@boy977 Yes, you will need to change some code to use the "--PAD" option along with "--rgb".
For example, see https://github.com/clovaai/deep-text-recognition-benchmark/blob/master/dataset.py#L277
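The underlying issue is that the padding buffer was built with a single channel while an --rgb image has three. A channel-aware sketch of the idea (NumPy stands in here for the torch tensors used in dataset.py):

```python
import numpy as np

def normalize_pad(img, max_size):
    """Right-pad an image to max_size; both are (channels, height, width).

    The original bug: the pad buffer was allocated with 1 channel, so a
    3-channel (--rgb) image could not be copied into it.
    """
    pad = np.zeros(max_size, dtype=img.dtype)  # same channel count as target
    _, _, w = img.shape
    pad[:, :, :w] = img                        # copy image, zeros pad the right
    return pad

rgb = np.random.rand(3, 32, 80).astype(np.float32)
print(normalize_pad(rgb, (3, 32, 100)).shape)  # (3, 32, 100)
```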
@boy977 Yes it was.
I just fixed it.
Please check the recent commit 9a6f667
Thank you for the report :)
@ku21fan one more dumb question: what does "norm_ED" stand for?
@boy977 norm_ED is normalised edit distance; it's another metric used to validate STR models.
@rahzaazhar About the norm_ED value: currently in the source code it is the sum of the edit distances over all test cases (it is not "normalized" yet).
I think we should divide it by the number of test cases to compare performance across different datasets.
What do you think about this?
@dviettu134 Thank you for the comment.
You are right,
I just updated it to the ICDAR2019 version of normalized edit distance, which divides by the number of test cases.
Please check here. https://github.com/clovaai/deep-text-recognition-benchmark/blob/master/test.py#L139-L165
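The averaged metric can be sketched as follows (a plain-Python stand-in for the library edit distance used in test.py; exact edge-case handling there may differ):

```python
def edit_distance(a, b):
    """Standard dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def norm_ed(preds, gts):
    """ICDAR2019-style normalized edit distance, averaged over test cases."""
    total = 0.0
    for pred, gt in zip(preds, gts):
        if len(pred) == 0 and len(gt) == 0:
            total += 1.0  # both empty: a perfect match
        else:
            total += 1 - edit_distance(pred, gt) / max(len(pred), len(gt))
    return total / len(gts)

# per-case scores 1.0 and 0.8 average to about 0.9
print(norm_ed(['abc', 'hello'], ['abc', 'hallo']))
```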
Best
Hi ku21fan
I'm using an intermediate (Latin) representation of Arabic characters, e.g. "Miim_B" for "م".
I've prepared the dataset and modified the character set in the train.py file as you recommended. When I ran the training, I realised that each character had been split into a set of sub-characters: miim_b -> 'm','i','i','m','_','b'
I've tried to use .split(" ") in utils.py (AttnLabelConverter.encode) and I got this output:
['miim_b', 'saad_m', 'raa_e', '']
Traceback (most recent call last):
File "train.py", line 317, in <module>
train(opt)
File "train.py", line 141, in train
text, length = converter.encode(labels, batch_max_length=opt.batch_max_length)
File "/Users/oussama.zayene/myfiles/Projects/OCR_projects/deep-text-recognition/utils.py", line 93, in encode
text = [self.dict[char] for char in text]
File "/Users/oussama.zayene/myfiles/Projects/OCR_projects/deep-text-recognition/utils.py", line 93, in <listcomp>
text = [self.dict[char] for char in text]
KeyError: 'miim_b'
Please Help!
@ooza Hello,
In my opinion, there are two easy ways:
- Just add and use 'م' in the character set, instead of 'miim_b'.
Or, if you can't do that for some reason,
- work around it with a substitute character for each of them,
e.g. add ☆★○ to the character set, and regard ☆ as 'miim_b', ★ as 'saad_m', and ○ as 'raa_e'.
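The substitute-character work-around can be sketched as a simple mapping applied before encoding and inverted after decoding (the names and symbols mirror the example above and are otherwise arbitrary):

```python
# hypothetical mapping: one placeholder symbol per pseudo-character
SUBSTITUTES = {'miim_b': '☆', 'saad_m': '★', 'raa_e': '○'}
REVERSE = {v: k for k, v in SUBSTITUTES.items()}

def to_placeholders(tokens):
    """tokens: a label already split on spaces, e.g. ['miim_b', 'saad_m']."""
    return ''.join(SUBSTITUTES[t] for t in tokens)

def from_placeholders(text):
    """Invert the mapping on a decoded prediction."""
    return ' '.join(REVERSE[ch] for ch in text)

label = to_placeholders(['miim_b', 'saad_m', 'raa_e'])
print(label)                     # ☆★○
print(from_placeholders(label))  # miim_b saad_m raa_e
```

With this mapping the model only ever sees single-symbol characters, so opt.character just needs ☆★○ added.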
Hope it helps.
Best
@ku21fan Hello,
My test set is made up of variable-length Chinese text. To build a more effective training set, I have decided to generate some data using Python's ImageDraw module. Which one should I choose: a variable-length generated dataset or a fixed-size generated dataset (e.g. 32x100)?
I also ran into this problem. How do you generate the training set?
I had a problem with my alphabet: the model predicted only digits. The reason was that the characters were in uppercase.
@ooza @ku21fan Could you please help me solve the problem of combining two Arabic characters to create a new character, like شك = ش + ك?
Here is some part of my character text file.
ء
۽
م
ݥ
ࢧ
ݦ
ه
ھ
ة
ۀ
ۂ
ݳ
ݴ
إ
ٳ
ل
ڶ
ڷ
ئ
ٸ
ێ
ݵ
ݶ
ي
ٕ
ٖ
ٜ
٠
١
٢
٣
٤
۴
۵
٥
٦
۶
٧
٨
٩
؍
؛
.
،
؟
٪
؉
؊
؆
؇
٭
٬
؞
«
»
‹
›
(
)
؏
۞
۩
۔
ـ
؎
@rm2886 I don't have enough time to test my solution, but I want to help. I would try changing the --character argument from a string to a list. For example, if your alphabet is "abc", use ["a", "b", "c"]. This allows adding combinations of symbols to the alphabet. You may also need to change the data format from {imagepath}\t{label}\n to something like {imagepath}\t{l a be l}, so that you get the pair (imagepath, ["l", "a", "be", "l"]): during data preparation, labels are processed by iterating over a string, but you want to iterate over a list, so that the model treats a group of symbols as one symbol.
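A minimal sketch of that idea (the names and space-separated data format here are assumptions, not the repository's actual code):

```python
# treat the alphabet as a list so a multi-letter group counts as one symbol
characters = ['l', 'a', 'be']  # 'be' is a single symbol for the model
char_to_idx = {c: i for i, c in enumerate(characters)}

def encode_label(raw_label):
    """raw_label is stored as space-separated symbols, e.g. 'l a be l'."""
    return [char_to_idx[tok] for tok in raw_label.split(' ')]

print(encode_label('l a be l'))  # [0, 1, 2, 0]
```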
Something similar to what @2113vm said happened to me. My dataset was only digits + uppercase letters, so as @ku21fan suggested I skipped these lines:
deep-text-recognition-benchmark/dataset.py
Lines 209 to 210 in d38c3cb
and set --character to "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ". However, if you don't use the --data_filtering_off option
and you have uppercase labels in your dataset, you have to change:
deep-text-recognition-benchmark/dataset.py
Line 171 in d38c3cb
to
if re.search(out_of_char, label):
Otherwise, all letters will be skipped. This happened to me because I didn't use --data_filtering_off. In my case it might have been easier to use it and forget about the filtering part, because I had already filtered my dataset, though I didn't notice at the time. Anyway, I have found @ku21fan's training code pretty comfortable to use; the way you print and log the loss, accuracy, and ground truth vs. prediction information during training is really useful and makes the process much easier. Thank you!
Hope it helps to someone!
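The uppercase-filtering pitfall described above boils down to one lowercase call in the filtering regex; a small reproduction:

```python
import re

character = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
out_of_char = f'[^{character}]'

label = 'ABC123'
# the original dataset.py check lowercases the label first, so every letter
# falls outside an uppercase-only character set and the sample is skipped:
print(bool(re.search(out_of_char, label.lower())))  # True  -> filtered out
# dropping .lower() keeps uppercase samples:
print(bool(re.search(out_of_char, label)))          # False -> kept
```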
@ku21fan
hi ku21fan,
I'm using this model to train on Chinese datasets. My character set has 6,000 characters. If I generate a million image/label pairs to train on, do you think the model can converge?
Hi
I am trying to fine-tune the normal case insensitive model (TPS-ResNet-BiLSTM-Attn) by running the following command. I have also added 4 additional characters to opt.character
CUDA_VISIBLE_DEVICES=0 python3 train.py --train_data result/train --valid_data result/valid --select_data / --batch_ratio 1 --Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction Attn --FT --saved_model TPS-ResNet-BiLSTM-Attn_15000.pth
It's still showing the following error. Am I missing something?
Thanks for help.
Iknoor
The reason is that your config does not match the one TPS-ResNet-BiLSTM-Attn_15000.pth was trained with. You should not change the alphabet (opt.character).
I see that your dataset code is already inconsistent with TRBA. If you change it to non-Arabic characters, is it possible to make it work without data filtering, or by using [UNK] tokens?
Hello, everyone,
I had been trying for a long time to solve the error 'AssertionError: datasets should not be an empty iterable'.
Here is my solution:
- After creating the dataset with create_lmdb_dataset.py, two files (data.mdb and lock.mdb) will be created in the "result" folder.
- I then created two new folders: the first called 'train', the second 'validation'.
- Inside each of these two folders, I copied and pasted the "result" folder produced by create_lmdb_dataset.py.
- In the train script, modify --select_data by inserting the word "train" instead of "/", as recommended here: #85 (but maintain batch_ratio=1).
- Now use the command: 'py train.py --train_data train/result --valid_data validation/result --Transformation None --FeatureExtraction VGG --SequenceModeling BiLSTM --Prediction CTC --data_filtering_off --workers 0'.
This worked for me. As a rule of thumb, I would recommend trying different ways of pointing to the folder where your dataset was created. If the "iterable dataset" error turns into something related to pickle, and the detailed error shows your number of samples, you are on the right track.
Best
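The folder layout from the steps above can be sketched programmatically (paths are examples; a temporary directory stands in for your working directory):

```python
import os
import tempfile

root = tempfile.mkdtemp()  # stand-in for your project directory
for split in ('train', 'validation'):
    # each split gets its own copy of the lmdb "result" folder
    os.makedirs(os.path.join(root, split, 'result'), exist_ok=True)

# after copying data.mdb and lock.mdb into each .../result folder, run:
#   py train.py --train_data train/result --valid_data validation/result ...
print(sorted(os.listdir(root)))  # ['train', 'validation']
```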