harvardnlp / im2markup

Neural model for converting Image-to-Markup (by Yuntian Deng yuntiandeng.com)

Home Page: https://im2markup.yuntiandeng.com

I got poor results on my own screenshots

NHZlX opened this issue · comments

I used screenshots from my computer to capture formulas from papers, but none of them could be recognized. Is there any special data processing required for such images?

Are you using the provided model to translate a screenshot? If so, it's not surprising: a neural network is very domain-specific. At training time it only saw images with a single font size, so at test time, if the font size or style differs, it will probably fail. For recognizing screenshots, resizing the image so the font size approximately matches the training set may give better results, but I strongly recommend training a new model for your task.
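The resizing idea above can be sketched as follows. This is a toy, stdlib-only illustration (a real pipeline would use an imaging library); `target_height_px` is an assumption you would measure from the training images, not a value from this repo:

```python
def resize_nearest(img, scale):
    """Nearest-neighbor rescale of a grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    return [[img[min(h - 1, int(y / scale))][min(w - 1, int(x / scale))]
             for x in range(nw)] for y in range(nh)]

def match_font_size(img, glyph_height_px, target_height_px=20):
    """Rescale a screenshot so its glyph height roughly matches the
    training set's. target_height_px is a hypothetical value; measure
    the actual glyph height in the training images."""
    return resize_nearest(img, target_height_px / glyph_height_px)
```

For example, a screenshot whose characters are about 40 px tall would be scaled down by 0.5 before being fed to the model.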

I'm trying to translate equations extracted from PDF images but am facing the same problems, probably due to font size and style. Do you think simply rendering the equations in different fonts and styles and retraining the model might help? Also, it would be a big help if you could provide the LaTeX code you used to generate the equation images with transparent backgrounds.

@GohioAC

See this repo for tools used to generate the images for these experiments: https://github.com/Miffyli/im2latex-dataset (see this issue for an explanation of how to change the rendering setup: Miffyli/im2latex-dataset#8 ). For transparency you will have to find a proper tool for converting PDFs into rasterized images. The original code uses ImageMagick's convert, which has a -transparent option, but I do not know how well it works in this situation.
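A minimal sketch of that conversion step, assuming pdflatex and ImageMagick are installed; the filename, density, and background color here are illustrative, not the settings used for the original dataset:

```shell
# Render a standalone LaTeX equation to PDF, then rasterize it with a
# transparent background. Tune -density to control the rendered font size.
pdflatex formula.tex                          # produces formula.pdf
convert -density 200 formula.pdf \
        -transparent white formula.png        # note: single dash, -transparent
```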

Generally, augmenting the dataset with different fonts, sizes, noise levels, backgrounds, perspectives, etc. should help create more robust models, so it is worth a shot!
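A toy sketch of such augmentation, stdlib-only for illustration (a real pipeline would use an imaging library and render different fonts directly): it randomly rescales a grayscale image, adds pixel noise, and shifts it on a larger canvas. All ranges here are illustrative assumptions.

```python
import random

def augment(img, rng=None):
    """One random augmentation of a grayscale image (list of rows,
    values 0-255): rescale, add noise, and shift on a white canvas."""
    rng = rng or random.Random()
    h, w = len(img), len(img[0])
    # 1. Random rescale (nearest neighbor), simulating font-size changes.
    s = rng.uniform(0.7, 1.3)
    nh, nw = max(1, round(h * s)), max(1, round(w * s))
    img = [[img[min(h - 1, int(y / s))][min(w - 1, int(x / s))]
            for x in range(nw)] for y in range(nh)]
    # 2. Additive pixel noise, clipped to the valid range.
    img = [[min(255, max(0, p + rng.randint(-15, 15))) for p in row]
           for row in img]
    # 3. Random placement on a larger white canvas, simulating layout shifts.
    ch, cw = nh + 8, nw + 8
    dy, dx = rng.randint(0, 8), rng.randint(0, 8)
    canvas = [[255] * cw for _ in range(ch)]
    for y in range(nh):
        for x in range(nw):
            canvas[dy + y][dx + x] = img[y][x]
    return canvas
```

Applying `augment` several times per training image yields varied copies, which should make the model less sensitive to the exact rendering of the test images.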