langchain-ai / auto-evaluator

Home Page: https://autoevaluator.langchain.com/

Isolated evaluation of retrieval

TSFelg opened this issue

The current implementation makes it possible to evaluate the end-to-end performance of the LLM application, but it would be useful to be able to evaluate the retrieval part of the system in isolation.

The process would be similar to the current one, but when generating the Q&A pairs on a per-chunk basis, we would also need to store a reference to the chunk each pair was generated from. The idea would be to automatically generate BEIR-compatible datasets like the following (a sketch of the required bookkeeping follows the example):

corpus = {
    "doc1" : {
        "title": "Albert Einstein",
        "text": (
            "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, "
            "one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for "
            "its influence on the philosophy of science. He is best known to the general public for his mass–energy "
            "equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 "
            "Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law "
            "of the photoelectric effect', a pivotal step in the development of quantum theory."
        ),
    },
    "doc2" : {
        "title": "",  # keep title an empty string if not present
        "text": (
            "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of "
            "malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made "
            "with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
        ),
    },
}

queries = {
    "q1" : "Who developed the mass-energy equivalence formula?",
    "q2" : "Which beer is brewed with a large proportion of wheat?"
}

qrels = {
    "q1" : {"doc1": 1},
    "q2" : {"doc2": 1},
}

A dataset like this could then be used to calculate metrics like recall, in the same way as the BEIR benchmark; a minimal sketch is shown below.
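
As a rough sketch (assuming a retrieve function that stands in for the retriever under test and returns a ranked list of corpus doc ids for a query string), micro-averaged recall@k could be computed directly from the queries and qrels:

# Sketch only: `retrieve` is a placeholder for the retriever being evaluated; it
# should return a ranked list of doc ids from the corpus for a given query string.
def recall_at_k(queries, qrels, retrieve, k=5):
    hits, total = 0, 0
    for query_id, question in queries.items():
        relevant = set(qrels[query_id])
        retrieved = set(retrieve(question)[:k])
        hits += len(relevant & retrieved)
        total += len(relevant)
    return hits / total if total else 0.0

BEIR's own evaluation utilities could likely be reused here as well, since the dataset format above matches what the benchmark expects.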

Is this possible with the current implementation, or are there plans to support it in the future?