Knowledge Embedding Interface Specification for RAG in NNTrainer
hokeun opened this issue
Preliminary research: Pre-training Knowledge Fusion
- Assume that knowledge is described in Subject-Predicate-Object (SPO) form.
- Analyze the NNTrainer modules that handle user inputs (texts).
- Decide whether a transformation from the knowledge to the model inputs is necessary, and if so, specify the transformation (or alternative) logic from the SPO knowledge to those inputs.
- Devise a few sample scenarios for Question Answering (QA) to analyze the effects of knowledge embeddings (or prompting).
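To make the tasks above concrete, here is a minimal sketch of what the SPO-to-input transformation and a QA prompting scenario could look like. This is an illustrative assumption only: the `Triple`, `verbalize`, and `build_qa_prompt` names are hypothetical and do not correspond to any existing NNTrainer API, and the final interface would need to match how NNTrainer modules actually consume text inputs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """A single knowledge fact in Subject-Predicate-Object (SPO) form."""
    subject: str
    predicate: str
    obj: str  # "object" shadows no builtin, but "obj" avoids confusion with the OO sense

def verbalize(triple: Triple) -> str:
    """Transform one SPO triple into a natural-language sentence (naive concatenation)."""
    return f"{triple.subject} {triple.predicate} {triple.obj}."

def build_qa_prompt(question: str, triples: list[Triple]) -> str:
    """Prepend verbalized knowledge to a question, forming a QA prompt for the model."""
    knowledge = " ".join(verbalize(t) for t in triples)
    return f"Knowledge: {knowledge}\nQuestion: {question}\nAnswer:"

# Sample QA scenario: answering with vs. without the injected knowledge
triples = [Triple("Seoul", "is the capital of", "South Korea")]
prompt = build_qa_prompt("What is the capital of South Korea?", triples)
print(prompt)
```

A study like this could compare model answers on the same question with and without the `Knowledge:` prefix to measure the effect of knowledge prompting.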
References
- AAAI'22 - DKPLM: Decomposable Knowledge-Enhanced Pre-trained Language Model for Natural Language Understanding
- EMNLP'22 - Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding
- EMNLP'19 - Incorporating External Knowledge into Machine Reading for Generative Question Answering
Disclaimer
NOTE: We're open to suggestions. Please let us know in the comments if you have any feedback or guidance!
Thanks for bringing this topic to NNTrainer. We are happy to help.