There are 30 repositories under the language-models topic.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
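Since several entries below build on this library, a minimal sketch of its high-level `pipeline` API may help orient readers; the fill-mask task and the `bert-base-uncased` checkpoint are illustrative choices, not tied to any particular repository listed here.

```python
# Minimal sketch: fill in a masked token with a pretrained BERT checkpoint.
# The checkpoint is downloaded from the Hugging Face Hub on first use.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Each prediction is a dict containing the filled-in sequence and its score.
for prediction in unmasker("Language models learn from [MASK] amounts of text."):
    print(prediction["sequence"], prediction["score"])
```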
🏡 Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, BERT, RoBERTa, T5, and T0).
Open-source offline translation library written in Python
[ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models
PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
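A hedged sketch of how such a pretrained checkpoint is typically loaded through the 🤗 Transformers API, assuming the published Hub model id `vinai/phobert-base`; note that PhoBERT expects word-segmented Vietnamese input.

```python
# Minimal sketch: extract contextual features with PhoBERT via Transformers.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModel.from_pretrained("vinai/phobert-base")

sentence = "Tôi là sinh_viên"  # underscores join word-segmented syllables
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
```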
An easy-to-use Natural Language Processing library and framework for predicting, training, fine-tuning, and serving state-of-the-art NLP models.
A package built on top of Hugging Face's transformers library that makes it easy to utilize state-of-the-art NLP models
💁 Awesome Treasure of Transformers Models for Natural Language Processing: papers, videos, blogs, and official repos, along with Colab notebooks. 🛫☑️
This repository contains landmark research papers in Natural Language Processing that came out in this century.
Pre-trained models and language resources for Natural Language Processing in Polish
The "tl;dr" on a few notable transformer papers.
Language models are open knowledge graphs (unofficial implementation)
This is a list of open-source projects from the Microsoft Research NLP Group
Generate realistic Instagram captions using transformers 🤗
A collection of resources on using BERT (https://arxiv.org/abs/1810.04805) and related Language Models in production environments.
Must-read papers on Natural Language Processing (NLP)
[ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
Transformer based Turkish language models
Keras implementations of three language models: a character-level RNN, a word-level RNN, and a Sentence VAE (Bowman, Vilnis, et al., 2016).
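As a rough illustration of the first of these, here is a minimal character-level RNN language model in Keras; the vocabulary size, sequence length, and layer widths are assumptions for the sketch, not the repository's actual settings.

```python
# Minimal sketch of a character-level RNN LM: embed characters, run an LSTM
# over the context window, and predict a distribution over the next character.
from tensorflow import keras

vocab_size = 64  # number of distinct characters (assumption)
seq_len = 40     # context window in characters (assumption)

model = keras.Sequential([
    keras.Input(shape=(seq_len,)),           # integer-encoded character ids
    keras.layers.Embedding(vocab_size, 32),  # character embeddings
    keras.layers.LSTM(128),                  # summarize the context
    keras.layers.Dense(vocab_size, activation="softmax"),  # next-char probs
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()
```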
Document-level Attitude and Relation Extraction toolkit (AREkit) for sampling mass-media news into datasets for ML model training and evaluation
Neural network language model that generates text based on The Lord of the Rings. Built with PyTorch.
Python implementation of an N-gram language model with Laplace smoothing and sentence generation.
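A minimal bigram version of this technique, with Laplace (add-one) smoothing and sampling-based generation; the toy corpus and the `<s>`/`</s>` boundary markers are illustrative assumptions, not the repository's code.

```python
# Minimal sketch: bigram language model with Laplace smoothing and sampling.
import random
from collections import Counter, defaultdict

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]  # toy data (assumption)
vocab = {w for sent in corpus for w in sent} | {"</s>"}

# Count bigrams, padding each sentence with start/end markers.
counts = defaultdict(Counter)
for sent in corpus:
    for prev, word in zip(["<s>"] + sent, sent + ["</s>"]):
        counts[prev][word] += 1

def prob(word, prev):
    # Laplace smoothing: add 1 to each count and |V| to the denominator.
    return (counts[prev][word] + 1) / (sum(counts[prev].values()) + len(vocab))

def generate(max_len=10):
    prev, out = "<s>", []
    for _ in range(max_len):
        words = sorted(vocab)
        word = random.choices(words, weights=[prob(w, prev) for w in words])[0]
        if word == "</s>":
            break
        out.append(word)
        prev = word
    return " ".join(out)

print(generate())
```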
Course notes for Stanford CS224n: Natural Language Processing with Deep Learning, Winter 2019 (using PyTorch)
Python source code for EMNLP 2020 paper "Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT".
Code for the experiments in our EMNLP 2021 paper "Open Aspect Target Sentiment Classification with Natural Language Prompts"
This repo is the official resource for the paper "Knowledge Enhanced Masked Language Model for Stance Detection" (NAACL 2021)
Deep transfer-learning models from the NTUA-SLP team, submitted to the IEST shared task at WASSA 2018 (EMNLP 2018).
Master's thesis with code investigating methods for incorporating long-context reasoning into low-resource languages without pre-training from scratch. We investigated whether multilingual models could inherit these properties by converting them into an efficient Transformer (such as the Longformer architecture).
The data and code for NumerSense (EMNLP 2020)