There are 24 repositories under the dense-retrieval topic.
🚀 RocketQA, dense retrieval for information retrieval and question answering, including both Chinese and English state-of-the-art models.
A curated list of awesome papers related to pre-trained models for information retrieval (a.k.a., pretraining for IR).
Train models contrastively in PyTorch
A curated list of awesome papers for Semantic Retrieval (TOIS Accepted: Semantic Models for the First-stage Retrieval: A Comprehensive Review).
[SIGIR 2022] Multi-CPR: A Multi Domain Chinese Dataset for Passage Retrieval
WSDM'22 Best Paper: Learning Discrete Representations via Constrained Clustering for Effective and Efficient Dense Retrieval
Code and models for the paper "Questions Are All You Need to Train a Dense Passage Retriever (TACL 2023)"
Nature Biotechnology: Ultra-fast, sensitive detection of protein remote homologs using deep dense retrieval
SIGIR 2021: Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling
An easy-to-use python toolkit for flexibly adapting various neural ranking models to any target domain.
CIKM'21: JPQ substantially improves the efficiency of Dense Retrieval with 30x compression ratio, 10x CPU speedup and 2x GPU speedup.
Code and data for reproducing baselines for TopiOCQA, an open-domain conversational question-answering dataset
揣摩研习社 follows frontier work in natural language processing and information retrieval, explains trending research papers, shares practical research tools, and surfaces the academic and applied value hidden beneath the AI iceberg!
Lightweight wrapper for an independent implementation of SPLADE++ models for search & retrieval pipelines. Models and library created by Prithivi Da; for PRs and collaboration, check out the README.
Code for COLING22 paper, DPTDR: Deep Prompt Tuning for Dense Passage Retrieval
Source code of paper 'LED: Lexicon-Enlightened Dense Retriever for Large-Scale Retrieval' (WWW 2023)
All-in-One: Text Embedding, Retrieval, Reranking and RAG
🔗 A graph-augmented dense statute retriever. (EACL 2023)
Dual Cross Encoder for Dense Retrieval
Explore the path from keyword search to dense retrieval and reranking, injecting the intelligence of LLMs into your search system to make it faster and more effective.
Code for the paper: Modular Retrieval for Generalization and Interpretation.
Efficient query encoding for dense retrieval
Code and created datasets for our ACL 2022 paper: "Contextual Fine-to-Coarse Distillation for Coarse-grained Response Selection in Open-Domain Conversations"
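The common thread across these projects is the dense-retrieval pattern itself: encode queries and passages into fixed-size vectors with a neural encoder, then rank passages by vector similarity. A minimal sketch, assuming the embeddings are already computed (the toy vectors below are stand-ins for the output of any real encoder, such as those in RocketQA or DPR):

```python
import numpy as np

def rank_passages(query_vec, passage_vecs):
    """Rank passages by dot-product similarity to the query vector."""
    scores = passage_vecs @ query_vec   # one similarity score per passage
    order = np.argsort(-scores)         # indices sorted highest-score first
    return order, scores[order]

# Toy precomputed embeddings (hypothetical; a real system would get these
# from a trained query/passage encoder).
query = np.array([0.9, 0.1, 0.0])
passages = np.array([
    [0.1, 0.9, 0.0],   # off-topic
    [0.8, 0.2, 0.1],   # relevant
    [0.0, 0.0, 1.0],   # unrelated
])

order, scores = rank_passages(query, passages)
print(order.tolist())  # the relevant passage (index 1) ranks first
```

In production, the brute-force dot product is typically replaced by an approximate nearest-neighbor index (as in the JPQ and constrained-clustering work above), which is where most of the efficiency gains in this topic come from.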