A simple, easy-to-understand NLP tutorial
- statistical methods: inverted-index | tfidf | tfidf-sklearn | hmm (a tf-idf sketch follows this list)
- word vectors: cbow | skip-gram
- sentence vectors: textcnn | seq2seq | seq2seq-cnn
- attention: seq2seq-BahdanauAttention | seq2seq-LuongAttention | transformer (a scaled dot-product attention sketch follows the paper list)
- large language models: elmo | gpt | bert (a masked-LM sketch follows the note below)
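To make the "statistical methods" entry concrete, here is a minimal tf-idf sketch in plain Python. The toy corpus and the exact weighting scheme (raw term frequency times log inverse document frequency) are illustrative assumptions of mine; the repo's tfidf and tfidf-sklearn code may differ in the details.

```python
# Minimal tf-idf sketch (hypothetical toy corpus; weighting details may
# differ from the repo's tfidf / tfidf-sklearn implementations).
import math
from collections import Counter

docs = [
    "it is a good day, I like to stay here",
    "I am happy to be here",
    "it is sunny today",
]
tokenized = [d.lower().replace(",", "").split() for d in docs]

def tf_idf(docs_tokens):
    n_docs = len(docs_tokens)
    # document frequency: in how many documents each word appears
    df = Counter(w for doc in docs_tokens for w in set(doc))
    scores = []
    for doc in docs_tokens:
        tf = Counter(doc)
        scores.append({
            w: (tf[w] / len(doc)) * math.log(n_docs / df[w])
            for w in tf
        })
    return scores

# print the three highest-scoring words of each document
for i, s in enumerate(tf_idf(tokenized)):
    print(i, sorted(s.items(), key=lambda kv: -kv[1])[:3])
```

Words that occur in every document (e.g. "is") get an idf of log(1) = 0 and drop out, which is exactly the behaviour that makes tf-idf useful for retrieval.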
Papers, grouped by model:
- inverted-index
  - paper 1: Keyword-in-Context Index for Technical Literature
  - paper 2: The Inverted Multi-Index
- cbow
  - paper 1: Efficient Estimation of Word Representations in Vector Space
- skip-gram
  - paper 1: Efficient Estimation of Word Representations in Vector Space
- textcnn
  - paper 1: Convolutional Neural Networks for Sentence Classification
  - paper 2: A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification
- seq2seq
  - paper 1: Sequence to Sequence Learning with Neural Networks
- seq2seq-cnn
  - paper 1: Convolutional Neural Networks for Sentence Classification
- seq2seq-BahdanauAttention
  - paper 1: Neural Machine Translation by Jointly Learning to Align and Translate
- seq2seq-LuongAttention
  - paper 1: Effective Approaches to Attention-based Neural Machine Translation
- transformer
  - paper 1: Attention Is All You Need
- elmo
  - paper 1: Deep contextualized word representations
- gpt
  - paper 1: Improving Language Understanding by Generative Pre-Training
- bert
  - paper 1: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
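Since attention is the hinge between the seq2seq and transformer sections, here is a minimal NumPy sketch of the scaled dot-product attention defined in "Attention Is All You Need": Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. The shapes and toy inputs are my own; the repo's transformer code adds multi-head projections, masking, and so on.

```python
# Scaled dot-product attention from "Attention Is All You Need".
# Toy shapes are illustrative assumptions, not the repo's configuration.
import numpy as np

def scaled_dot_product_attention(q, k, v):
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)  # [seq_q, seq_k]
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ v                              # weighted sum of values

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))  # 4 query positions, d_k = 8
k = rng.normal(size=(6, 8))  # 6 key positions
v = rng.normal(size=(6, 8))
print(scaled_dot_product_attention(q, k, v).shape)  # (4, 8)
```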
Note: this BERT model differs slightly from the one in the original paper. I used an improved training method, which is described at the beginning of the code; you can check it there.
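For readers comparing against the paper, here is a minimal sketch of the original BERT masked-LM corruption scheme (15% of tokens selected; of those, 80% become [MASK], 10% become a random token, 10% stay unchanged). This is the paper's scheme, not the repo's improved variant, which is documented at the top of the repo's BERT code; the toy vocabulary and function name are hypothetical.

```python
# Original-paper masked-LM corruption: pick ~15% of tokens to predict;
# 80% -> [MASK], 10% -> random token, 10% -> unchanged.
# (The repo's improved training method differs; see comments in its code.)
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", p=0.15):
    inputs, targets = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() < p:
            targets[i] = tok                      # model must recover this
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_token            # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.choice(vocab)  # 10%: random token
            # else 10%: keep the original token unchanged
    return inputs, targets

vocab = ["i", "like", "nlp", "bert", "learns", "context"]
print(mask_tokens("i like nlp".split(), vocab))
```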
- Port the code to a corresponding PyTorch version
- Add accompanying video explanations (this may take a long time)
- Recruit interested students to work on this together
2. Dive into Deep Learning (动手学深度学习), web version
3. Prof. Mu Li (李沐) on bilibili