Should let the user know about potential issues with map_rerank chain type in 03L
dbbnicole opened this issue · comments
dbbnicole commented
In the LLM 03L - Building LLM Chains Lab
notebook, Question 3: when the user tries the map_rerank chain type, the model we use (`google/flan-t5-large`) may not be powerful enough to consistently generate scores that the default prompt's output parser can extract, so the results cannot be ranked.
The student may see error messages like:
`Could not parse output: [score between 0 and 100]`
There are similar reports of this issue in the LangChain repo: langchain-ai/langchain#3970
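For context, the failure mode can be reproduced without running a model. Below is a minimal sketch (not LangChain's actual code) of the score-parsing step: the map_rerank chain expects each per-document answer to end with a line like `Score: 87` and extracts it with a regex. `parse_answer` here is a hypothetical stand-in for the chain's default `RegexParser`.

```python
import re

# The default map_rerank prompt asks the model to append "Score: <0-100>".
# A regex like this is used to split the answer from its numeric score.
SCORE_PATTERN = re.compile(r"(.*?)\s*Score:\s*(\d+)", re.DOTALL)

def parse_answer(output: str):
    """Return (answer, score), or raise ValueError like the chain's parser."""
    match = SCORE_PATTERN.search(output)
    if match is None:
        raise ValueError(f"Could not parse output: {output}")
    return match.group(1).strip(), int(match.group(2))

# A well-formed completion parses cleanly.
print(parse_answer("Paris is the capital of France.\nScore: 95"))

# A weaker model such as flan-t5-large may instead echo the prompt's
# placeholder text, which contains no "Score: <number>" line, producing
# the error students see in the lab.
try:
    parse_answer("[score between 0 and 100]")
except ValueError as e:
    print(e)
```

This illustrates why the issue is model-dependent: nothing is wrong with the chain itself, but the ranking step silently assumes the LLM follows the output format.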
brookewenig commented
Thanks for reporting - fixed in the upcoming release!