NLP-Projects

Natural Language Processing projects, which include concepts and scripts about:

Concepts

1. Attention

  • Attention == weighted averages (a minimal sketch follows this list)
  • The attention reviews (review 1 and review 2) summarize attention mechanisms into several types:
    • Additive vs Multiplicative attention
    • Self attention
    • Soft vs Hard attention
    • Global vs Local attention
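
As a concrete illustration of the "weighted averages" view, here is a minimal NumPy sketch of scaled dot-product (multiplicative) attention; the toy shapes and inputs are illustrative assumptions rather than code from this repository, and additive (Bahdanau-style) scoring is only noted in a comment.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(query, keys, values):
    """Multiplicative (scaled dot-product) attention.

    query:  (d,)     current query state
    keys:   (T, d)   one key per source position
    values: (T, d_v) one value per source position
    Returns the attention-weighted average of the values.
    """
    scores = keys @ query / np.sqrt(query.shape[-1])  # (T,) similarity scores
    weights = softmax(scores)                         # soft attention distribution
    return weights @ values, weights                  # weighted average == attention output

# Additive (Bahdanau-style) attention would instead score each position with
#   v.T @ tanh(W_q @ query + W_k @ key)  -- a small feed-forward net per position.

# Toy usage: d = 4, T = 3 source positions (random illustrative inputs).
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=4), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
context, attn = dot_product_attention(q, K, V)
print(attn.sum())  # weights sum to 1, so `context` is a weighted average of V's rows
```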

2. CNNs, RNNs and Transformer

  • Parallelization [1]

    • RNNs
      • Why not good?
      • The previous step's output is the input of the current step, so time steps cannot be computed in parallel
    • Solutions
      • Simple Recurrent Units (SRU)
        • Parallelize over each hidden-state neuron independently, since the recurrence is element-wise
      • Sliced RNNs
        • Split sequences into windows, run an RNN within each window, then run another RNN over the window outputs
        • Same idea as CNNs
    • CNNs
      • Why good?
      • Different windows of one filter are computed in parallel
      • Different filters are computed in parallel
  • Long-range dependency [1]

    • CNNs
      • Why not good?
      • A single convolution can only capture dependencies within its window
    • Solutions
      • Dilated CNNs
      • Deep CNNs
        • N * [Convolution + skip-connection]
        • For example, with window size = 3 and stride = 1, the second convolution covers 5 words (i.e., 1-2-3, 2-3-4, 3-4-5); see the receptive-field sketch after this list
    • Transformer > RNNs > CNNs
  • Position [1]

    • CNNs

      • Why not good?
      • Convolution preserves relative-order information, but max-pooling discards it
    • Solutions

      • Discard max-pooling, use deep CNNs with skip-connections instead
      • Add position embeddings, as in ConvS2S (see the position-embedding sketch after this list)
    • Transformer

      • Why not good?
      • In self-attention, one word attends to all other words and generates a summary vector without relative-position information
  • Semantic features extraction [2]

    • Transformer > CNNs == RNNs
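
To check the deep-CNN example above (window size 3, stride 1, two stacked convolutions cover 5 words), here is a small receptive-field calculation; the helper function is a hypothetical sketch, not code from this repository, and the dilated case is included to show why dilation helps with long-range dependency.

```python
def receptive_field(kernel_sizes, dilations=None):
    """Receptive field (in words) of stacked 1-D convolutions with stride 1.

    Each layer adds (kernel_size - 1) * dilation positions on top of the
    previous layer's receptive field.
    """
    dilations = dilations or [1] * len(kernel_sizes)
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Two stacked convolutions, window size 3, stride 1 -> covers 5 words
# (1-2-3, 2-3-4, 3-4-5), matching the example above.
print(receptive_field([3, 3]))                 # 5
# Dilated CNNs grow the receptive field much faster at the same depth.
print(receptive_field([3, 3, 3], [1, 2, 4]))   # 15
```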
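
The ConvS2S-style fix for the position problem is simply to add a position embedding to each word embedding before the convolution (or attention) layers. A minimal NumPy sketch with assumed shapes and a randomly initialized embedding table, for illustration only:

```python
import numpy as np

def add_position_embeddings(word_emb, pos_emb_table):
    """word_emb:      (T, d) token embeddings of one sentence.
    pos_emb_table: (max_len, d) one (learned or fixed) vector per absolute position.

    After the sum, every token representation carries its position, so order
    information survives even if later layers are order-insensitive.
    """
    T = word_emb.shape[0]
    return word_emb + pos_emb_table[:T]

rng = np.random.default_rng(0)
tokens = rng.normal(size=(7, 16))      # 7 words, embedding dim 16
positions = rng.normal(size=(50, 16))  # table for sentences up to 50 words
x = add_position_embeddings(tokens, positions)
print(x.shape)  # (7, 16)
```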

References

3. Layer normalization and batch normalization

Layer normalization is a normalization method in deep learning that is similar to batch normalization. In layer normalization, the statistics are computed across the features of each example, independently of the other examples in the batch. This independence between inputs means that each input gets its own normalization operation (see the sketch below).
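
A minimal NumPy sketch of the axis difference, assuming inputs of shape (batch, features) and omitting the learnable scale/shift parameters: batch normalization averages over the batch axis per feature, while layer normalization averages over the feature axis per example.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature with statistics over the batch axis (axis=0):
    every example shares the same per-feature mean/variance."""
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def layer_norm(x, eps=1e-5):
    """Normalize each example with statistics over its own features (axis=-1):
    no dependence on the other examples in the batch."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(size=(4, 8))  # batch of 4 examples, 8 features
print(layer_norm(x).mean(axis=-1))  # ~0 for every example (per-example statistics)
print(batch_norm(x).mean(axis=0))   # ~0 for every feature (per-feature statistics)
```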

Awesome public APIs

Awesome packages

Chinese

English

Future directions

About

text preprocessing, word2vec, sentence2vec, text classification (including sentiment analysis), Chinese word segmentation, Hidden Markov Models, CRFs, named entity recognition, knowledge graphs, dialog systems, machine reading comprehension, and pre-trained language models (e.g., BERT, ELMo, GPT)

