- This is the 2021 version. For previous years' course materials, see this branch
- Lecture and seminar materials for each week are in the ./week* folders; see each week's README.md for materials and instructions
- YSDA homework deadlines will be listed in Anytask (read more).
- For technical issues, bugs in course materials, or contribution ideas, add an issue
- Installing libraries and troubleshooting: this thread.
-
week01 Word Embeddings
- Lecture: Word embeddings. Distributional semantics. Count-based (pre-neural) methods. Word2Vec: learn vectors. GloVe: count, then learn. Evaluation: intrinsic vs extrinsic. Analysis and Interpretability. Interactive lecture materials and more. A toy word2vec sketch follows this block.
- Seminar: Playing with word and sentence embeddings
- Homework: Embedding-based machine translation system
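A minimal word2vec-style sketch (skip-gram with negative sampling) on a toy corpus. Everything here (corpus, dimensions, learning rate) is made up for illustration; real implementations such as gensim add subsampling, a unigram noise distribution, and learning-rate decay:

```python
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, D, window, neg_k, lr = len(vocab), 16, 2, 3, 0.05  # toy hyperparameters

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # center-word ("input") vectors
W_out = rng.normal(scale=0.1, size=(V, D))  # context-word ("output") vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(200):
    for pos, center in enumerate(corpus):
        c = w2i[center]
        lo, hi = max(0, pos - window), min(len(corpus), pos + window + 1)
        for ctx_pos in range(lo, hi):
            if ctx_pos == pos:
                continue
            o = w2i[corpus[ctx_pos]]
            # positive pair: pull v_c and u_o together
            g = sigmoid(W_out[o] @ W_in[c]) - 1.0
            grad_c = g * W_out[o]
            W_out[o] -= lr * g * W_in[c]
            # negative samples: push v_c away from random words
            for n in rng.integers(0, V, size=neg_k):
                g = sigmoid(W_out[n] @ W_in[c])
                grad_c += g * W_out[n]
                W_out[n] -= lr * g * W_in[c]
            W_in[c] -= lr * grad_c

# nearest neighbour by cosine similarity
v = W_in / np.linalg.norm(W_in, axis=1, keepdims=True)
print(vocab[int(np.argsort(-(v @ v[w2i["cat"]]))[1])])  # nearest word to "cat"
```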
-
week02 Text Classification
- Lecture: Text classification: introduction and datasets. General framework: feature extractor + classifier. Classical approaches: Naive Bayes, MaxEnt (Logistic Regression), SVM. Neural Networks: General View, Convolutional Models, Recurrent Models. Practical Tips: Data Augmentation. Analysis and Interpretability. Interactive lecture materials and more. A baseline classifier sketch follows this block.
- Seminar: Text classification with convolutional NNs.
- Homework: Statistical & neural text classification.
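An illustrative "feature extractor + classifier" baseline with scikit-learn; the texts and labels are invented for the example:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# bag-of-words features + Naive Bayes: the classical pipeline from the lecture
texts = ["great movie, loved it", "terrible plot, awful acting",
         "what a wonderful film", "boring and way too long"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy sentiment data)

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["loved the film"]))  # -> [1]
```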
-
week03 Language Modeling
- Lecture: Language Modeling: what does it mean? Left-to-right framework. N-gram language models. Neural Language Models: General View, Recurrent Models, Convolutional Models. Evaluation. Practical Tips: Weight Tying. Analysis and Interpretability. Interactive lecture materials and more.
- Seminar: Build an N-gram language model from scratch (a bigram sketch follows this block)
- Homework: Neural LMs & smoothing in count-based models.
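A bigram language model with add-k smoothing in plain Python, a rough sketch in the spirit of the seminar (the corpus and smoothing constant are arbitrary):

```python
from collections import Counter
import math

corpus = "the cat sat on the mat </s> the dog sat on the rug </s>".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V, k = len(unigrams), 0.5  # vocabulary size, add-k smoothing constant

def prob(word, prev):
    # add-k smoothed bigram probability P(word | prev)
    return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * V)

def perplexity(tokens):
    logp = sum(math.log(prob(w, p)) for p, w in zip(tokens, tokens[1:]))
    return math.exp(-logp / (len(tokens) - 1))

print(perplexity("the dog sat on the mat </s>".split()))
```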
-
week04 Seq2seq and Attention
- Lecture: Seq2seq Basics: Encoder-Decoder framework, Training, Simple Models, Inference (e.g., beam search). Attention: general, score functions, models. Transformer: self-attention, masked self-attention, multi-head attention; model architecture. Subword Segmentation (BPE). Analysis and Interpretability: functions of attention heads; probing for linguistic structure. Interactive lecture materials and more. An attention sketch follows this block.
- Seminar: Basic sequence to sequence model
- Homework: Machine translation with attention
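The Transformer's score function, scaled dot-product attention, in plain numpy (a single head, no masking, random toy tensors):

```python
import numpy as np

def attention(Q, K, V):
    # Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # scaled dot-product scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
print(attention(Q, K, V).shape)  # (3, 8): one output vector per query
```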
-
week05 Transfer Learning
- Lecture: What is Transfer Learning? Great idea 1: From Words to Words-in-Context (CoVe, ELMo). Great idea 2: From Replacing Embeddings to Replacing Models (GPT, BERT). (A Bit of) Adaptors. Analysis and Interpretability. Interactive lecture materials and more. A feature-extraction sketch follows below.
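A sketch of the "replace embeddings with models" idea: pull contextual vectors from a pretrained BERT as frozen features. Assumes the transformers and torch packages are installed; the input sentence is arbitrary:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = tokenizer(["NLP course for you"], return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (1, seq_len, 768)
print(hidden.shape)  # words-in-context vectors, cf. ELMo/BERT in the lecture
```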
-
week06 Domain Adaptation
- Lecture: General theory. Instance weighting. Proxy-labels methods. Feature matching methods. Distillation-like methods. A self-training sketch follows this block.
- Seminar+Homework: BERT-based NER domain adaptation
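A toy proxy-labels (self-training) loop with scikit-learn: fit on labeled source data, then repeatedly add confidently pseudo-labeled target examples. The synthetic data and the 0.9 confidence threshold are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 5))
ys = (Xs[:, 0] > 0).astype(int)           # labeled source domain
Xt = rng.normal(loc=0.5, size=(200, 5))   # unlabeled, shifted target domain

clf = LogisticRegression().fit(Xs, ys)
for _ in range(3):
    proba = clf.predict_proba(Xt)
    confident = proba.max(axis=1) > 0.9   # keep only confident predictions
    pseudo = proba.argmax(axis=1)[confident]
    clf = LogisticRegression().fit(np.vstack([Xs, Xt[confident]]),
                                   np.concatenate([ys, pseudo]))
print(clf.score(Xt, (Xt[:, 0] > 0).astype(int)))  # toy target-domain accuracy
```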
-
week07 Model compression and acceleration
-
week08 Probabilistic inference, generative models and hidden variables
-
week09 Machine translation
-
week10 Relation extraction
-
week11 Summarization
-
week12 Style Transfer
-
week13 Dialogue systems
-
week14 AI & ML generated art
Course materials prepared and taught by
- Elena Voita - course admin, lectures, seminars, homeworks
- Boris Kovarsky - lectures, seminars, homeworks
- David Talbot - lectures, seminars, homeworks
- Just Heuristic - lectures, seminars, homeworks
- Alexey Tikhonov @altsoph
- Michael Sejr Schlichtkrull
- Arthur Bražinskas
- Ivan Yamshchikov
- Nikolay Zinov
- Sergey Gubanov
- Vyacheslav Alipov
- Vladimir Kirichenko
- Andrey Zhigunov
- Pavel Bogomolov