Built N-gram language models for two different text corpora. Applied Kneser-Ney and Witten-Bell smoothing. Computed per-sentence perplexity scores on both corpora for each model, along with the average perplexity on the training corpus. Compared and analyzed the behaviour of the different LMs.
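The core computation described above can be sketched as follows. This is a minimal illustrative example, not the project's actual implementation: it assumes a bigram model with Witten-Bell interpolation (the original work's n-gram order, Kneser-Ney variant, and unknown-word handling are not specified here), and the add-one unigram backoff is a simplifying assumption.

```python
import math
from collections import Counter, defaultdict


class WittenBellBigram:
    """Bigram LM with Witten-Bell interpolation (sketch, assumed setup)."""

    def __init__(self, sentences):
        self.unigrams = Counter()          # c(w)
        self.bigrams = Counter()           # c(h, w)
        self.context = Counter()           # c(h) as a bigram history
        self.followers = defaultdict(set)  # distinct types seen after h
        total = 0
        for sent in sentences:
            tokens = ["<s>"] + sent + ["</s>"]
            for w in tokens:
                self.unigrams[w] += 1
                total += 1
            for h, w in zip(tokens, tokens[1:]):
                self.bigrams[(h, w)] += 1
                self.context[h] += 1
                self.followers[h].add(w)
        self.total = total
        self.vocab = len(self.unigrams)

    def prob(self, w, h):
        # Add-one-smoothed unigram backoff (assumption for illustration).
        p_uni = (self.unigrams[w] + 1) / (self.total + self.vocab + 1)
        c_h, t_h = self.context[h], len(self.followers[h])
        if c_h == 0:
            return p_uni  # unseen history: fall back to the unigram
        # Witten-Bell: lambda(h) = c(h) / (c(h) + T(h)), where T(h) is the
        # number of distinct word types observed after history h.
        lam = c_h / (c_h + t_h)
        p_ml = self.bigrams[(h, w)] / c_h
        return lam * p_ml + (1 - lam) * p_uni

    def perplexity(self, sent):
        # PP = exp(-(1/N) * sum of log P(w_i | w_{i-1})) over the sentence.
        tokens = ["<s>"] + sent + ["</s>"]
        log_p = sum(math.log(self.prob(w, h))
                    for h, w in zip(tokens, tokens[1:]))
        return math.exp(-log_p / (len(tokens) - 1))
```

Per-sentence perplexities on a corpus would then be `[lm.perplexity(s) for s in corpus]`, and the corpus-level score their average.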