githubharald / SimpleHTR

Handwritten Text Recognition (HTR) system implemented with TensorFlow.

Home Page: https://towardsdatascience.com/2326a3487cd5

Doubling number of conv layers improves accuracy

Chazzz opened this issue · comments

I'm not sure the title is tremendously surprising to anyone, but I cleared 76% word accuracy with a deeper network. More interestingly, using a deeper network and terminating around epoch 25 yields a 74-75% word accuracy model, which is better and faster than training a smaller network to the bitter end.

[screenshot, 2019-01-11: TensorBoard training curves]

Relevant code:

		# two conv+BN+ReLU blocks per scale instead of one, followed by pooling
		for i in range(numLayers):
			# first conv: featureVals[i] -> featureVals[i + 1] feature maps
			kernel = tf.Variable(tf.truncated_normal([kernelVals[i], kernelVals[i], featureVals[i], featureVals[i + 1]], stddev=0.1))
			conv = tf.nn.conv2d(pool, kernel, padding='SAME', strides=(1, 1, 1, 1))
			conv_norm = tf.layers.batch_normalization(conv, training=self.is_train)
			relu = tf.nn.relu(conv_norm)
			# second (added) conv: keeps featureVals[i + 1] feature maps
			kernel2 = tf.Variable(tf.truncated_normal([kernelVals[i], kernelVals[i], featureVals[i + 1], featureVals[i + 1]], stddev=0.1))
			conv2 = tf.nn.conv2d(relu, kernel2, padding='SAME', strides=(1, 1, 1, 1))
			conv_norm2 = tf.layers.batch_normalization(conv2, training=self.is_train)
			relu2 = tf.nn.relu(conv_norm2)
			# downsample before the next scale, as in the original network
			pool = tf.nn.max_pool(relu2, (1, poolVals[i][0], poolVals[i][1], 1), (1, strideVals[i][0], strideVals[i][1], 1), 'VALID')
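(Design-wise, this mirrors the VGG pattern: two stacked convolutions at each scale before pooling, rather than one.)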

Oh, well done. Thank you. One question: how did you produce the plot above?

Tensorboard plus a bunch of hooks which aren't committed anywhere.
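Roughly this shape, from memory (the tag name and log directory here are made up, since the real hooks aren't committed anywhere):

	import tensorflow as tf

	writer = tf.summary.FileWriter('logs/deeper-cnn')  # hypothetical log directory

	def logWordAccuracy(epoch, wordAccuracy):
		# write one scalar point per validation pass; TensorBoard plots the curve
		summary = tf.Summary(value=[tf.Summary.Value(tag='wordAccuracy', simple_value=wordAccuracy)])
		writer.add_summary(summary, epoch)
		writer.flush()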

Thanks for sharing the results of your experiments.
I'd like to keep the model as simple and minimalistic as possible, but I'll link to this issue from the "Improve accuracy" section so that others can benefit from your findings.

Expanding the layers a bit more, I hit a top word accuracy of 78% using layer depth/width values similar to VGG16, but with batch normalization. Based on my other hyperparameter runs, increasing the model size beyond that won't meaningfully improve accuracy without a ResNet-like approach (obviously outside the scope of this project).
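For concreteness, the loop above is configured by lists like the following; the widened row is illustrative of the VGG16-like progression I mean, not my exact values:

	kernelVals = [5, 5, 3, 3, 3]
	# original widths: featureVals = [1, 32, 64, 128, 128, 256]
	featureVals = [1, 64, 128, 256, 512, 512]  # VGG16-like widening (illustrative)
	strideVals = poolVals = [(2, 2), (2, 2), (1, 2), (1, 2), (1, 2)]
	numLayers = len(strideVals)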

When increasing the model size, at some point the model is able to perfectly learn the training data without improving validation accuracy, i.e. it overfits. Therefore you could try to make the task a bit harder while training by using data augmentation. At the moment, the model is very sensitive to small translations (see this article) [1]. By adding random translations (sketched below), validation accuracy should improve.

[1] However, this behaviour has improved since you uploaded the new pretrained model.
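A minimal sketch of such a random-translation augmentation (OpenCV-based; the shift range and the white border value are illustrative, not part of the repository):

	import random
	import cv2
	import numpy as np

	def randomTranslate(img, maxShift=5):
		# shift the grayscale line image by a few pixels in x and y;
		# the exposed border is filled with white like the paper background
		tx = random.randint(-maxShift, maxShift)
		ty = random.randint(-maxShift, maxShift)
		M = np.float32([[1, 0, tx], [0, 1, ty]])
		h, w = img.shape[:2]
		return cv2.warpAffine(img, M, (w, h), borderValue=255)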

Nice article btw. My results show that even with more layers the model only overfits by about 5% (even with data augmentation off!), and accuracy takes about a 1% hit when data augmentation is turned off. If anything, the model overfits too little (by not fully fitting the training set, it effectively underfits at test time). He et al., 2015 demonstrated that increasing the number of layers is not sufficient to guarantee overfitting, and I would expect their results to apply to SimpleHTR as well.

@Chazzz can you give me a rough idea of how much time it took you to train the system, along with your system details? I am planning to apply a range of image augmentations such as translation, Gaussian noise, and random cropping to make the model more robust.

Hi @RajPratim21, I trained the above on a GTX 980 Ti; as shown in the graph in my initial post, training took between 40 and 80 minutes. LMK if there are other system details that are of interest.

> Tensorboard plus a bunch of hooks which aren't committed anywhere.

possible to share the code for the tensorboard integration? thanks!

@jevinruv Let me check, it should be possible.

Thank you, looking forward to it!