anishkasachdeva / N-Gram_Language_Modeling

Built N-gram language models for two different text corpora. Applied two smoothing techniques, Kneser-Ney and Witten-Bell. Calculated perplexity scores for each sentence of both corpora under each of the models, and also calculated the average perplexity score on the training corpus. Compared and analyzed the behaviour of the different LMs.
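
Below is a minimal sketch of the kind of pipeline this describes: a bigram model with Witten-Bell smoothing and per-sentence perplexity. It is not the repository's code; the toy corpus, tokenisation, bigram order, unigram backoff, and OOV floor are all illustrative assumptions.

```python
# Sketch only: bigram LM with Witten-Bell smoothing and sentence perplexity.
import math
from collections import Counter, defaultdict

def train_bigram_wb(sentences):
    unigram, bigram, hist = Counter(), Counter(), Counter()
    followers = defaultdict(set)          # distinct word types seen after each history
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        unigram.update(tokens)
        for h, w in zip(tokens, tokens[1:]):
            bigram[(h, w)] += 1
            hist[h] += 1
            followers[h].add(w)
    total = sum(unigram.values())

    def prob(w, h):
        # Witten-Bell: interpolate the ML bigram estimate with the unigram
        # distribution, weighted by how many distinct types followed history h.
        p_uni = unigram[w] / total if unigram[w] else 1.0 / total  # crude OOV floor (assumption)
        c_h, t_h = hist[h], len(followers[h])
        if c_h == 0:
            return p_uni
        lam = c_h / (c_h + t_h)
        return lam * (bigram[(h, w)] / c_h) + (1 - lam) * p_uni

    return prob

def perplexity(sent, prob):
    # PP(W) = exp(-(1/N) * sum over i of log P(w_i | w_{i-1}))
    tokens = ["<s>"] + sent + ["</s>"]
    log_p = sum(math.log(prob(w, h)) for h, w in zip(tokens, tokens[1:]))
    return math.exp(-log_p / (len(tokens) - 1))

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
prob = train_bigram_wb(corpus)
for sent in corpus:
    print(sent, round(perplexity(sent, prob), 3))
print("average:", sum(perplexity(s, prob) for s in corpus) / len(corpus))
```

Kneser-Ney would replace the unigram backoff with a continuation-count distribution and apply absolute discounting to the bigram counts; the perplexity computation stays the same.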
