- AAAI 2017 accepted papers
- http://nlp.stanford.edu/read/
- https://arxiv.org/list/cs.CL/recent
- http://aclweb.org/anthology/
- Siamese Recurrent Architectures
- [AAAI16]Siamese Recurrent Architectures for Learning Sentence Similarity.pdf
1) Using a simple adaptation of the LSTM to learn a highly structured space of sentence representations.
2) Using a simple Manhattan metric to measure sentence similarity.
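The Manhattan metric in 2) can be sketched as follows; this is a minimal illustration, not the paper's implementation (the `manhattan_similarity` helper and plain-list sentence encodings are assumptions of this sketch):

```python
import math

def manhattan_similarity(h_a, h_b):
    """Similarity between two sentence encodings h_a, h_b.

    Computes exp(-||h_a - h_b||_1): identical encodings score 1.0,
    and the score decays toward 0 as the L1 distance grows.
    """
    l1 = sum(abs(a - b) for a, b in zip(h_a, h_b))
    return math.exp(-l1)

# Usage: h_a and h_b would be the final LSTM hidden states of the
# two sentences in the Siamese setup.
print(manhattan_similarity([1.0, 2.0], [1.0, 2.0]))  # identical -> 1.0
```

The exp(-L1) form keeps the output in (0, 1], which matches similarity labels rescaled to that range.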
- Attention Mechanism
- In the RNN architecture, hidden states near the end of the sentence are expected to capture more information.
- These near-the-end hidden states therefore receive more attention, which can bias the attention weights.
- Use a soft attention mechanism to obtain the representation of one sentence conditioned on the representation of the other.
- Build interactions at different granularities (word, phrase, and sentence level).
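A minimal sketch of such soft attention, assuming dot-product scores and plain-list vectors (the `soft_attention` name and shapes are illustrative, not from any of the papers above):

```python
import math

def soft_attention(query, states):
    """Attend over one sentence's word states given the other sentence.

    query:  a vector derived from the other sentence.
    states: per-word hidden vectors of this sentence.
    Returns a weighted sum of `states`, weighted by softmax of
    dot-product scores against `query`.
    """
    scores = [sum(q * s for q, s in zip(query, h)) for h in states]
    m = max(scores)                          # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(states[0])
    return [sum(w * h[i] for w, h in zip(weights, states)) for i in range(dim)]
```

Because the weights are a softmax over all positions, every word contributes, which is what distinguishes soft attention from hard selection of a single position.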
- Reading Comprehension
- [ICLR17UnderReview] LEARNING RECURRENT SPAN REPRESENTATIONS FOR EQA.pdf
Efficiently builds fixed-length representations of all spans in the evidence document with a recurrent network.
- [ICLR17UnderReview] DYNAMIC COATTENTION NETWORKS.pdf
The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both; a dynamic pointing decoder then iterates over potential answer spans.
- [arXiv16] End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension.pdf
Extracts and ranks a set of answer candidates from a given document to answer questions.
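The fixed-length span representation idea can be sketched with a toy recurrence (running sums stand in for the forward/backward LSTMs; the function name and list-of-lists shapes are assumptions of this sketch):

```python
def span_representations(word_vecs):
    """Fixed-length vectors for every span (i, j) of a document.

    Toy recurrences: cumulative sums play the role of forward and
    backward LSTM passes over the word vectors.
    """
    n, d = len(word_vecs), len(word_vecs[0])
    fwd, state = [], [0.0] * d
    for v in word_vecs:                      # forward pass
        state = [s + x for s, x in zip(state, v)]
        fwd.append(state)
    bwd, state = [None] * n, [0.0] * d
    for i in range(n - 1, -1, -1):           # backward pass
        state = [s + x for s, x in zip(state, word_vecs[i])]
        bwd[i] = state
    # Span (i, j) -> forward state at its end concatenated with the
    # backward state at its start: always length 2*d, for any span.
    return {(i, j): fwd[j] + bwd[i] for i in range(n) for j in range(i, n)}
```

The point is that all O(n^2) spans get representations of the same length from only two O(n) recurrent passes, so a scorer over answer spans can be applied uniformly.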