keon/awesome-nlp

:book: A curated list of resources dedicated to Natural Language Processing (NLP)

Add more languages: corpora, tools and research

NirantK opened this issue

Indic

  • Hindi - To be done later
  • Gujarati - To be done later
  • Tamil
  • Telugu
  • Bengali

Asian

  • Chinese
  • Korean
  • Japanese

We should be able to add content regarding Indian/Indic languages as well, keeping in mind the growth of India Stack and the need for Indic tools.

I will be working on this issue for a brief duration.

I'd love to assist you, @the-ethan-hunt, if you are up for taking the lead on this.

@NirantK, I would be happy to be assisted by you! But the pond is too large and the fish too small. 😅

You are right!

@NirantK, a simple GitHub search is leading nowhere. Any leads on where to start? 😅

@NirantK, I went old school and discovered two good papers worth mentioning in this list:

  • A POS tagger and chunker system for the Hindi language using a Maximum Entropy Markov Model. Here is the paper
  • A lightweight stemmer for Hindi link (see the sketch below)

IMHO, there has been negligible NLP research for Tamil, Telugu, Marathi, and other Indian languages.
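
For the curious, the lightweight stemmer in the second paper is essentially longest-suffix stripping against a fixed suffix list. A minimal sketch of the idea in Python (the suffix list here is a small illustrative sample, not the paper's actual list):

```python
# Longest-suffix-stripping stemmer sketch for Hindi.
# HINDI_SUFFIXES is an illustrative sample, not the paper's full list.
HINDI_SUFFIXES = ["ियों", "ाएं", "ों", "ें", "ता", "ती", "ना", "ने", "ा", "ी", "े"]

def light_stem(word: str) -> str:
    """Strip the longest matching suffix, keeping at least a 2-char stem."""
    for suffix in sorted(HINDI_SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) - len(suffix) >= 2:
            return word[: -len(suffix)]
    return word

print(light_stem("लड़कों"))  # -> लड़क
```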

Welcome @arpitabatra to the thread. She did her thesis on Hindi text processing; some of the cool stuff she mentioned is below.

Thanks for the search @the-ethan-hunt, they look good. Let's go a little wide in the beginning and then trim down. Sound good?

@NirantK, sure!
Thanks for the stuff, Arpita! And welcome to awesome-nlp! 😄

POS tagging related papers (a toy baseline sketch follows the list):

  • Morphological Richness Offsets Resource Demand – Experiences in Constructing a POS Tagger for Hindi
    link
  • Building Feature Rich POS Tagger for Morphologically Rich Languages: Experiences in Hindi
    link
  • Hindi POS Tagger Using Naive Stemming: Harnessing Morphological Information Without Extensive Linguistic Knowledge
    link
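
To get a feel for the data such taggers train on, here is a toy baseline using the small POS-tagged Hindi sample that ships with NLTK's indian corpus. It is nowhere near the feature-rich models in the papers above, just a starting point:

```python
# Toy Hindi POS-tagging baseline on NLTK's bundled sample corpus.
# Requires: pip install nltk; then nltk.download('indian')
from nltk.corpus import indian
from nltk.tag import DefaultTagger, UnigramTagger

tagged = indian.tagged_sents('hindi.pos')   # a few hundred tagged sentences
split = int(len(tagged) * 0.9)
train, test = tagged[:split], tagged[split:]

# Unigram lookup with a noun fallback for unseen words.
tagger = UnigramTagger(train, backoff=DefaultTagger('NN'))
print(tagger.evaluate(test))  # baseline accuracy on the held-out 10%
```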

@NirantK and @the-ethan-hunt: shall we explore some papers which are not only statistics-based but also use linguistic cues? Since large datasets are unavailable for Hindi, purely statistical models will be difficult to train.

Sure @arpitabatra, that's a good insight. We should definitely look into those. Please do help us with that.

Sidenote: if there are any glaring holes in Hindi text processing, please mention them as well; they can become research avenues for the people after us. We can note and detail those challenges in a separate repository/markdown file as well.

@the-ethan-hunt, I think we both can focus on Gujarati and other Indic languages, as @arpitabatra has been kind enough to share her Hindi expertise with us. What do you think?

Edit: I've added Gujarati to the task list above, keeping in mind the comment by @the-ethan-hunt stating that, prima facie, no good work was found for Tamil, Telugu, and Bengali.

@NirantK, yes I agree with you. And while tinkering around, I found this. There are huge treebanks for several languages (both Indian and foreign).
Should I make a PR for this?

Hey @the-ethan-hunt, that's a good find.

Let's link to Hindi specific work for now.

Maybe we need to look into more tooling, datasets, and academic work beyond treebanks and POS taggers, and actually compile the best of what is out there?

If I were just starting to look into Hindi NLP, the above list of work is not even 20-30% of what I'd need to get started.

@NirantK, there is also this treebank prepared by several American universities. The thing here is that both of the mentioned treebanks are annotated; this would save linguists and NLP scientists a lot of the time they currently spend annotating their own corpora.
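
As a sketch of how such annotated treebanks get consumed downstream, assuming the `conllu` package and a locally downloaded Universal Dependencies Hindi file (the filename here is a placeholder):

```python
# Read a CoNLL-U treebank file with the `conllu` package (pip install conllu).
from conllu import parse

with open("hi_hdtb-ud-train.conllu", encoding="utf-8") as f:
    sentences = parse(f.read())

for token in sentences[0]:
    # the POS key is "upostag" in older versions of the package
    print(token["form"], token["upos"])
```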

That is already in the list from @arpitabatra :)

@NirantK, any points we can start working on? Like the Universal Dependencies thing? 😄

@the-ethan-hunt
Why do we need data to work in NLP?

  • Dictionaries and WordNets are useful for syntactic tasks
  • A large text corpus is useful for lots of tasks, such as text classification, text embeddings, and so on

Then, in terms of data, we need the following:

  • Dictionaries, e.g. Gujarati to English and vice versa
  • A large news corpus, similar to the CNN or DailyMail ones

I hope this helps us streamline our efforts. I will look into large news corpora; if none are available, I will at least list a few major websites which we can use to generate that dataset (a rough scraping sketch is below).
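
For the website route, a hypothetical scraping sketch; the URL and CSS selector are placeholders, and any real site needs its own selectors (and a check of its terms of use):

```python
# Hypothetical news-headline scraper for bootstrapping a corpus.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example-news-site.example/latest")  # placeholder URL
soup = BeautifulSoup(resp.text, "html.parser")

# "h2.headline" is a placeholder selector; inspect the real page to pick one.
headlines = [h.get_text(strip=True) for h in soup.select("h2.headline")]
with open("headlines.txt", "a", encoding="utf-8") as f:
    f.write("\n".join(headlines) + "\n")
```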

Hey @arpitabatra @the-ethan-hunt, please go ahead and raise one PR for Hindi datasets (excluding the work on POS, stemming, etc.) as soon as you have some time?

The work isn't quite sufficient to get started in terms of tools, but I think we should at least share the datasets, as we've done for Spanish.

@NirantK, regarding the shift of the language section to NLP-Progress as discussed in this thread, should I raise PRs for new resources here or at NLP-Progress?
Does that sound alright, @sebastianruder? 😅

@the-ethan-hunt

If there are performance numbers available, or high user trust in the lib, raise it directly at NLP-Progress.
If not, raise it here for now.

There is a lot of work which does not have reported results, e.g. datasets and Python libs for Arabic/Hindi.
These are quite often good enough for programmers. We can discuss and sort those edge cases out.

Just to be clear: awesome-nlp should stay awesome, so we shouldn't remove anything from here for now, and awesome-nlp should still be the place where libraries, tools, etc. are collected.
As @NirantK mentions, anything with reported results and standard evaluation setups can be added to nlpprogress.

@NirantK, I think this library can be added as a tool for Indic languages:
http://anoopkunchukuttan.github.io/indic_nlp_library/
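
For reference, basic usage looks roughly like this (a sketch based on the project docs; worth double-checking against the current version):

```python
# Normalize and tokenize Hindi text with the Indic NLP Library
# (pip install indic-nlp-library); API per the project docs.
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
from indicnlp.tokenize import indic_tokenize

text = "यह एक उदाहरण वाक्य है।"
normalizer = IndicNormalizerFactory().get_normalizer("hi")
normalized = normalizer.normalize(text)
print(indic_tokenize.trivial_tokenize(normalized, lang="hi"))
```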

Also @NirantK, I think the ACL 2018 highlights by Sebastian Ruder should be added to the research trends and summaries.

@Shashi456, please raise a PR for Ruder's highlights with a 1-line explanation and we'll review it?

@NirantK, do you know of any Indic libraries other than that? I've been scouring the internet for some but have found none satisfactory.

Hello @NirantK, I made this project for clustering/topic extraction: https://github.com/ArtificiAI/Multilingual-Latent-Dirichlet-Allocation-LDA

It also contains a tutorial explaining the architecture: https://github.com/ArtificiAI/Multilingual-Latent-Dirichlet-Allocation-LDA/blob/master/Stemming-words-from-multiple-languages.ipynb

It also has unit tests.

All of these languages are supported (see the note after the list):

  • Danish
  • Dutch
  • English
  • Finnish
  • French
  • German
  • Hungarian
  • Italian
  • Norwegian
  • Porter
  • Portuguese
  • Romanian
  • Russian
  • Spanish
  • Swedish
  • Turkish

I was hesitant about whether to add a new section, such as "Many languages". My question is: what would you do? Where would you add this?

Thank you!
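
A side note on the list above: it mirrors the languages supported by NLTK's SnowballStemmer, which is why "Porter" (the classic English stemmer, not a language) shows up in it. A minimal sketch of that multilingual stemming layer:

```python
# Multilingual stemming via NLTK's SnowballStemmer (pip install nltk).
from nltk.stem.snowball import SnowballStemmer

print(SnowballStemmer.languages)  # supported language names, incl. 'porter'
for lang, word in [("french", "mangeaient"), ("german", "Häuser"), ("english", "running")]:
    print(lang, SnowballStemmer(lang).stem(word))
```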

@guillaume-chevalier that should go under Libraries -> Python. Please raise a PR. Great to see a multi-lingual clustering toolkit!

Hi @NirantK, I can contribute a Traditional Chinese translation and am currently working on it.
This is a really awesome repo; I hope more people get to see it.

Wow, this is fantastic @NeroCube! Can you please raise a PR with your translation?

@NirantK, do you think adding links to the repos NLP for Hindi, NLP for Punjabi, NLP for Sanskrit, NLP for Gujarati, NLP for Kannada, NLP for Malayalam, NLP for Nepali, NLP for Odia, NLP for Marathi, NLP for Bengali, NLP for Tamil, and NLP for Urdu under the Indic Languages section would be helpful? All these repos contain language models, classifiers, and tokenizers, along with the datasets used to train the models for their respective languages, and are being used in iNLTK.

We already have iNLTK, which in turn links to all of the above.

Maybe not add all of them? This might get spammy.
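
For reference, iNLTK exposes those per-language models behind one small API; a rough sketch based on its README (the first setup call downloads the model, and PyTorch is required):

```python
# Minimal iNLTK usage (pip install inltk); API per the project README.
from inltk.inltk import setup, tokenize

setup('hi')  # one-time download of the Hindi language model
print(tokenize('यह एक उदाहरण है', 'hi'))
```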

Yes okay! That seems right! Thanks!

Thank you, everyone who has contributed to the multiple-languages work here on Awesome-NLP. While we continue to welcome contributions along similar lines, we have some coverage now.

I'm closing this issue for now. We will open new issues to encourage specific languages.