ottokart / punctuator2

A bidirectional recurrent neural network model with attention mechanism for restoring missing punctuation in unsegmented text

Home Page: http://bark.phon.ioc.ee/punctuator

Friendly hello - pingback

vackosar opened this issue

Hello!

I was inspired by your project and created a simplified alternative. I cited you in the readme: https://github.com/vackosar/keras-punctuator

Let me know if you find this interesting.

Vaclav

WOW! I never would have thought to try a Conv1D model. 92% precision is impressive, although you are using binary punctuation (yes or no) versus 8 categories. I'll try this model out in addition to my other experiments (such as using POS tags).

How many epochs did it take to get those results?

This is very promising. I plugged this rough architecture into EuroParl with 8 punctuation categories and am getting 83% precision, 48% recall after just 5 epochs. It is training 50x faster than @ottokart's model, which gives me the freedom to experiment and get results much faster. I'll post the results after some experimenting.
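
For reference, here is roughly what I mean by "this rough architecture" (a minimal Keras sketch, not the exact model; the vocabulary size, window length, and layer sizes are all assumptions):

```python
# Minimal sketch of a Conv1D punctuation classifier: predict the punctuation
# class for one slot in a fixed window of words. All sizes are assumptions.
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

VOCAB_SIZE = 20000   # assumed vocabulary size
WINDOW = 30          # assumed context window, in words
NUM_CLASSES = 9      # 8 punctuation categories + no-punct

model = Sequential([
    Embedding(VOCAB_SIZE, 100, input_length=WINDOW),  # word embeddings
    Conv1D(256, kernel_size=5, activation='relu'),    # local n-gram features
    GlobalMaxPooling1D(),                             # pool over the window
    Dense(128, activation='relu'),
    Dense(NUM_CLASSES, activation='softmax'),         # punctuation class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```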

Hi!

that's definitely very interesting, thanks for sharing! 50x speedup is really impressive.
Do you include the no-punct category in your precision/recall/f-score computations?
If so, then what are the scores without no-punct?
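
Concretely, this is the kind of computation I mean (a scikit-learn sketch; the label indexing is just an assumption):

```python
# Sketch: overall precision/recall/F-score with the no-punct class excluded.
# Class 0 is assumed to be no-punct; 1..8 are the punctuation categories.
from sklearn.metrics import precision_recall_fscore_support

punct_labels = list(range(1, 9))   # everything except no-punct (0)

# toy data: gold and predicted class per slot
y_true = [0, 1, 0, 2, 0, 1, 0, 0, 2]
y_pred = [0, 1, 0, 0, 2, 1, 0, 1, 2]

p, r, f, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=punct_labels, average='micro')
print("without no-punct: P=%.1f R=%.1f F=%.1f" % (100 * p, 100 * r, 100 * f))
```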

Best,
Ottokar

Hi Ottokar,

I am using the TensorFlow Estimator API since it is very efficient and designed to take models into large-scale distributed production.

It works well with large volumes of static data, and training went perfectly. But I am running into an issue: prediction has to be done one line at a time, since we use the EOS tokens from the previous results to partition the input.

I need to figure out how to create an input_fn for estimator.predict that will accept one line at a time asynchronously.
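
Something along these lines is what I have in mind (a sketch only; input_queue, encode, and handle are hypothetical placeholders, not code from the repo):

```python
# Sketch: feed estimator.predict one line at a time via a generator-backed
# tf.data pipeline. The queue, encoder, and consumer below are hypothetical.
import tensorflow as tf

def line_generator():
    while True:
        line = input_queue.get()   # hypothetical blocking queue of text lines
        if line is None:           # sentinel value ends the prediction stream
            return
        yield encode(line)         # hypothetical word-to-id encoder

def predict_input_fn():
    ds = tf.data.Dataset.from_generator(
        line_generator,
        output_types=tf.int64,
        output_shapes=tf.TensorShape([None]))
    return ds.batch(1)             # one line per prediction step

for prediction in estimator.predict(input_fn=predict_input_fn):
    handle(prediction)             # hypothetical consumer of the output
```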

I've done some manual testing and results are very promising...

i hope the commission and the council will come forward with proposals on this matter .PERIOD mr president ,COMMA like so many others ,COMMA i want to congratulate the irish presidency on the success of its term of office and to say that ,COMMA because smaller countries have fewer resources ,COMMA the success which they achieve ,COMMA therefore deserves greater commendation .PERIOD i want to compliment mr bruton ,COMMA the taoiseach ,COMMA mr spring ,COMMA the tánaiste and mr mitchell ,COMMA all of whom worked extremely hard and contributed immensely to that success .PERIOD i want also to acknowledge today that it was not just while mr bruton was president-in-office ,COMMA that he was dedicated to the european ideal .PERIOD
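
For readability, the tags fold back into normal punctuation with a small post-processing step (a sketch; the mapping mirrors the tag set in the sample above):

```python
# Sketch: convert tagged output like "hello ,COMMA world .PERIOD" back into
# readable punctuated text. The tag set mirrors the sample output above.
PUNCT = {",COMMA": ",", ".PERIOD": ".", "?QUESTIONMARK": "?",
         "!EXCLAMATIONMARK": "!", ":COLON": ":",
         ";SEMICOLON": ";", "-DASH": "-"}

def detag(tagged):
    words, capitalize = [], True
    for token in tagged.split():
        if token in PUNCT and words:
            words[-1] += PUNCT[token]        # attach mark to previous word
            capitalize = PUNCT[token] in ".?!"
        else:
            words.append(token.capitalize() if capitalize else token)
            capitalize = False
    return " ".join(words)

print(detag("i hope the commission will come forward ,COMMA too .PERIOD"))
# -> I hope the commission will come forward, too.
```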

This looks very nice indeed.

I didn't measure F-scores without no-punct. I am very sceptical about the app's precision numbers.

BTW, if you like the project you can link back to it in your readme.

I got it working (after hacking the TensorFlow prediction function)!!!

PUNCTUATION        PRECISION   RECALL   F-SCORE
,COMMA                  58.2     62.1      60.1
.PERIOD                 73.0     59.4      65.5
?QUESTIONMARK           58.9     11.2      18.9
!EXCLAMATIONMARK        75.0      8.8      15.7
:COLON                  53.4     25.0      34.1
;SEMICOLON              46.1     10.7      17.4
-DASH                   56.9      9.7      16.6
Overall                 63.7     57.6      60.5

It doesn't reach the full accuracy of your model, but it is still quite impressive for something that trains 40-50x faster. It peaked at about 17 epochs (a few hours on my CPU), but I didn't have early stopping enabled and lost the checkpoint (peak precision was a few percent higher). There is also definite room for improvement by tuning hyper-parameters.
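
For next time: with Keras-style training, the peak can be kept automatically with callbacks (a minimal sketch, assuming a compiled model and validation arrays; the Estimator API already saves checkpoints to model_dir on its own):

```python
# Sketch: keep the best checkpoint and stop when validation loss plateaus.
from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    EarlyStopping(monitor='val_loss', patience=3),      # stop after plateau
    ModelCheckpoint('best.h5', monitor='val_loss',
                    save_best_only=True),               # keep only the peak
]
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=50, callbacks=callbacks)
```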

@vackosar There's an alternatives section in the readme now that refers to your work as well.

The problem is that the precision/recall calculation in keras-punctuator is simply wrong. The real numbers are much lower, and that's why you get the impression that it trains faster. The overall CNN approach is about the same as CNN-2A from X. Che, C. Wang, H. Yang, and C. Meinel, "Punctuation prediction for unsegmented transcript based on word vector", a paper also referenced in the punctuator2 paper, and the numbers match the results in table 2 of the punctuator2 paper (about 54% overall F-score instead of 64% for punctuator2).

I would definitely like to fix the precision/recall calculation. I agree that there is obviously something wrong with it.
On the other hand, the model does seem to converge faster to its best possible result, which is worse than that of the other network.

@vackosar this article has multiple factual mistakes and misinterpretations. For example, take:

Upon publication, the feed-forward, autoregressive WaveNet was a substantial improvement over LSTM-RNN parametric models.

WaveNet is a different codec that operates on a sample-by-sample basis; that's why it achieves higher quality than previous vocoder-based architectures. They are not comparing apples to apples here.

On the Billion Word Benchmark, an intriguing Google Technical Report suggests an LSTM n-gram model with n=13 words of memory is as good as an LSTM with arbitrary context.

Actually, the perplexity difference is significant: 46 vs. 43 is a meaningful difference in many applications.

And so forth.