Simple is good, but controllable is better. So instead of using a static site framework, I just use GitHub + markdown files to manage my personal learning and leveling up.
Read paper Self-Attention with Relative Position Representations. This paper is the foundation for understanding how relative position embeddings work. Not hard to understand, and well worth reading.
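A minimal single-head sketch of the idea in PyTorch, to make the mechanism concrete. The names `rel_k`/`rel_v` and the clipping distance are my own illustrative choices; the paper applies this per head with shared relative embeddings:

```python
import torch
import torch.nn.functional as F

def relative_attention(q, k, v, rel_k, rel_v, max_dist=8):
    """Single-head self-attention with Shaw-style relative position
    representations. q, k, v: (seq, d); rel_k, rel_v: (2*max_dist+1, d)
    learned embeddings, one per clipped relative distance."""
    seq, d = q.shape
    # Relative distance j - i for every pair, clipped to [-max_dist, max_dist]
    # and shifted to be a valid embedding index.
    pos = torch.arange(seq)
    rel = (pos[None, :] - pos[:, None]).clamp(-max_dist, max_dist) + max_dist
    a_k = rel_k[rel]  # (seq, seq, d): a_ij^K in the paper
    a_v = rel_v[rel]  # (seq, seq, d): a_ij^V in the paper
    # Logits get an extra content-to-position term: q_i . a_ij^K.
    logits = (q @ k.T + torch.einsum('id,ijd->ij', q, a_k)) / d ** 0.5
    attn = F.softmax(logits, dim=-1)
    # Values get the matching relative term: sum_j attn_ij * (v_j + a_ij^V).
    return attn @ v + torch.einsum('ij,ijd->id', attn, a_v)
```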
Read paper Large Language Models Are Human-Level Prompt Engineers. The idea of automatic prompt engineering is interesting, but the use case is somewhat limited, as it (see the sketch after this list):
- Needs labeled training data
- Cannot optimize prompts at sentence- or paragraph-level granularity
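A rough sketch of the APE-style loop, which shows where both limitations come from. Here `llm` is a hypothetical stand-in for any LLM call, and the meta-prompt wording only approximates the paper's:

```python
def ape_search(demos, eval_set, llm, n_candidates=16):
    """APE-style search: ask an LLM to propose whole instructions from
    demonstrations, score each candidate by accuracy on labeled data,
    keep the best. Both limitations above show up here: scoring needs
    labels, and candidates are entire instructions, not edited spans."""
    demo_text = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    meta_prompt = (
        "I gave a friend an instruction. Based on the instruction they "
        f"produced these input-output pairs:\n{demo_text}\nThe instruction was:"
    )
    # The paper samples candidates with temperature; a deterministic llm
    # would need explicit variation here.
    candidates = [llm(meta_prompt) for _ in range(n_candidates)]

    def accuracy(instruction):
        hits = sum(
            llm(f"{instruction}\nInput: {x}\nOutput:").strip() == str(y)
            for x, y in eval_set
        )
        return hits / len(eval_set)

    return max(candidates, key=accuracy)
```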
Read paper Parameter-Efficient Transfer Learning for NLP. The approach is so simple it's almost boring, but it seems effective and is a good way to understand what a basic adapter is.
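A minimal sketch of the bottleneck adapter in PyTorch; the bottleneck size and the GELU nonlinearity here are my illustrative choices, but the near-identity initialization follows the paper:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: project down, apply a nonlinearity, project
    back up, add a residual connection. Dropped in after each transformer
    sublayer; during fine-tuning only the adapter weights are updated."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Zero up-projection keeps the module near-identity at the start
        # of training, as the paper recommends.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))
```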
Read paper Unsupervised Extractive Summarization using Pointwise Mutual Information. The style of this paper is similar to Simple Unsupervised Keyphrase Extraction using Sentence Embeddings: low resource needs, easy to understand, and intuitive.
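A rough sketch of the core relevance score. `lm_logprob` is a hypothetical stand-in for summing token log-probabilities under any autoregressive LM, and the paper's redundancy handling between selected sentences is omitted:

```python
def pmi_relevance(sentence, context, lm_logprob):
    """PMI(sentence; context) = log p(sentence | context) - log p(sentence).
    High PMI means the sentence is far more likely given the rest of the
    document than on its own, i.e. it is central to the document."""
    return lm_logprob(sentence, context=context) - lm_logprob(sentence)

def extract_summary(sentences, lm_logprob, k=3):
    # Score each sentence against the rest of the document, keep the
    # top-k, and return them in original order.
    scores = [
        pmi_relevance(s, " ".join(sentences[:i] + sentences[i + 1:]), lm_logprob)
        for i, s in enumerate(sentences)
    ]
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return [sentences[i] for i in top]
```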
Read paper Simple Unsupervised Keyphrase Extraction using Sentence Embeddings. This is an intuitive and elegant approach that doesn't depend on a lot of resources (labeled data, GPUs, etc.).
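A rough sketch of this style of ranking. `embed` is a hypothetical stand-in for the sentence-embedding model (the paper uses sent2vec/doc2vec), candidate phrase extraction (noun-phrase chunking) is assumed done upstream, and the diversity trade-off mirrors the paper's MMR-based EmbedRank++ variant:

```python
import numpy as np

def extract_keyphrases(document, candidates, embed, k=5, diversity=0.5):
    """EmbedRank-style ranking: embed the document and each candidate
    phrase, then greedily pick phrases that are similar to the document
    but dissimilar to already-picked phrases (MMR)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    doc_vec = embed(document)
    vecs = {c: embed(c) for c in candidates}
    remaining, selected = list(candidates), []
    while remaining and len(selected) < k:
        def mmr(c):
            relevance = cos(vecs[c], doc_vec)
            redundancy = max((cos(vecs[c], vecs[s]) for s in selected),
                             default=0.0)
            return diversity * relevance - (1 - diversity) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected
```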