Future-House / paper-qa

High accuracy RAG for answering questions from scientific documents with citations

Home Page: https://futurehouse.gitbook.io/futurehouse-cookbook

Repository: https://github.com/Future-House/paper-qa

No verbosity when using manual document querying

gdurif opened this issue

Despite setting verbosity to level 3, I do not get any verbose output when using manual document querying, whereas it works as expected with agent querying.

Note 1: I am using paper-qa with a local ollama server (it works very well) and I provide a manifest file for document indexing.
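
For context, the manifest is just a small CSV describing each paper. A minimal sketch of building one is below, assuming the file_location / doi / title columns from the paper-qa documentation and purely hypothetical values:

# Hypothetical sketch: write a minimal manifest.csv for paper-qa
# (column names assumed from the paper-qa documentation; values are placeholders).
import csv

with open("data/manifest.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file_location", "doi", "title"])
    writer.writerow(["my_super_article.pdf", "10.1000/example.doi", "My Super Article"])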

MWE below:

import asyncio
import os

from paperqa import Docs, Settings
from paperqa.agents import build_index
from paperqa.agents.main import agent_query
from paperqa.settings import AgentSettings, IndexSettings, ParsingSettings

# API setup
os.environ['OPENAI_API_KEY'] = "ollama"

# input
paper_directory = "data"
manifest_file = os.path.join(paper_directory, "manifest.csv")
index_directory = os.path.join(paper_directory, "index")
os.makedirs(index_directory, exist_ok=True)

# model
model = "ollama/llama3.2"

# paper-qa setup
local_llm_config = dict(
    model_list=[
        dict(
            model_name=model,
            litellm_params=dict(
                model=model,
                api_base="http://localhost:11434",
                timeout=600,
            ),
        )
    ]
)

agent = AgentSettings(
    agent_llm=model,
    agent_llm_config=local_llm_config,
    index=IndexSettings(
        index_directory=index_directory,
        manifest_file=manifest_file,
    ),
    timeout=600,
    search_count=1,
    paper_directory=paper_directory
)

agent.index.concurrency = 1

settings = Settings(
    temperature=0.2,
    llm=model,
    llm_config=local_llm_config,
    summary_llm=model,
    summary_llm_config=local_llm_config,
    embedding="ollama/mxbai-embed-large",
    agent=agent,
    parsing=ParsingSettings(
        use_doc_details=False,
    ),
    verbosity=3    # VERBOSITY LEVEL
)

settings.answer.max_concurrent_requests = 1
settings.answer.answer_max_sources = 3   # default is 5
settings.answer.evidence_k = 5           # default is 10

# index building
build_index(settings=settings)

# agent querying
answer = await agent_query(
    query="My awesome question",
    settings=settings,
)
# I get detailed verbosity.

# doc querying
docs = Docs()

await docs.aadd(
    os.path.join(paper_directory, "my_super_article.pdf"),
    settings=settings
)

answer = await docs.aquery(
    query="My awesome question",
    settings=settings,
)
# I get no verbosity.
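
For completeness: the top-level await calls in this MWE assume an already-running event loop (e.g. a Jupyter notebook or IPython). In a plain script, the same calls can be wrapped in an async function and driven with asyncio.run, roughly like this:

# Sketch: running the manual doc querying part of the MWE as a plain script.
async def main():
    docs = Docs()
    await docs.aadd(
        os.path.join(paper_directory, "my_super_article.pdf"),
        settings=settings,
    )
    answer = await docs.aquery(
        query="My awesome question",
        settings=settings,
    )
    print(answer)

asyncio.run(main())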

Note 2: The only verbose output that I do get with doc querying is the message "Could not find cost for model ollama/llama3.2." (which was already discussed in #852 and is not the issue here).

Note 3: I tested with both the synchronous (Docs.add(), Docs.query()) and asynchronous (Docs.aadd(), Docs.aquery()) functions; the result is the same.

Thanks in advance

The reason you don't see verbosity output with manual document querying (Docs.aquery or Docs.query) is that these methods do not configure logging based on your Settings.verbosity. In contrast, agent querying (agent_query) explicitly sets up logging to match the verbosity level, which is why you see detailed output there.

To get verbosity with manual document querying, call configure_cli_logging(settings) before running your queries. For example:

from paperqa.agents import configure_cli_logging

configure_cli_logging(settings)
# Now run your Docs.aquery or Docs.query as usual

This will set the logging level globally and enable detailed output according to your verbosity setting. The difference arises because logging is configured in the agent and CLI flows, but not in the Docs code path by default.

If you want this to happen automatically, you could add the logging configuration call at the start of your script or consider modifying the library to include it in the Docs methods.
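
As a rough illustration (not part of the paper-qa API), a hypothetical helper that configures logging once and then runs the manual query could look like this:

from paperqa.agents import configure_cli_logging

# Hypothetical wrapper: apply Settings.verbosity to logging, then run the
# manual Docs query. The helper name is illustrative only.
async def verbose_aquery(docs, query, settings):
    configure_cli_logging(settings)
    return await docs.aquery(query=query, settings=settings)

# usage: answer = await verbose_aquery(docs, "My awesome question", settings)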

Let me know if you need more details or help with this!

Ok, thanks @dosu

You're welcome!
