confident-ai / deepeval

The LLM Evaluation Framework

Home Page: https://docs.confident-ai.com/

`deepeval test run` takes default pytest `addopts` with no option to override

shippy opened this issue

Describe the bug
deepeval inherits pytest `addopts` with no way to override them. (This means that evals excluded by default cannot be run by `deepeval test run` under any circumstances.)
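For context: pytest prepends ini-file `addopts` to the actual command line, so flags supplied later can still win for single-value options like `-m`. A minimal sketch of that precedence with plain pytest (assuming a `tests/` directory containing the marked test):

```python
import pytest

# The ini file contributes "-m 'not llm'" first; the "-m llm" below is
# parsed later and wins, so only the llm-marked tests run.
pytest.main(["-m", "llm", "tests/"])

# Alternatively, --override-ini (-o) can blank out addopts for one run.
pytest.main(["-o", "addopts=", "-m", "llm", "tests/"])
```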

To Reproduce

  1. Mark an arbitrary test with `@pytest.mark.llm`.
  2. Add the following section to pyproject.toml:

```toml
[tool.pytest.ini_options]
markers = [
    "llm: mark tests with LLM involvement"
]
addopts = "-m 'not llm'"
```

  3. Run `pytest` to check that the default behavior applies and the marked test is not run.
  4. Run `deepeval test run` and note that the exclusion still applies.
  5. Run `pytest -m llm` to check that the behavior is inverted.
  6. Run `deepeval test run -m llm`, or `deepeval test run path/to/excluded/test`, to attempt (and fail) to invert the behavior with deepeval (see the sketch after this list).
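
A minimal sketch of the kind of pass-through that would make step 6 work; `run_tests` and its arguments are illustrative names, not deepeval's actual internals:

```python
import sys

import pytest

def run_tests(test_path: str, extra_pytest_args: list[str]) -> int:
    # Hypothetical wrapper: appending the caller's flags after the test
    # path means pytest parses them *after* the ini-file addopts, so a
    # user-supplied -m or -k takes precedence over the default exclusion.
    return pytest.main([test_path, *extra_pytest_args])

if __name__ == "__main__":
    # e.g. python run_tests.py tests/ -m llm
    sys.exit(run_tests(sys.argv[1], sys.argv[2:]))
```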

Expected behavior
I'd expect deepeval to allow me to override pytest's default behavior with `-m` and other similar pytest-style switches (e.g. `-k`).
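
Until that lands, pytest's `PYTEST_ADDOPTS` environment variable may work as a stopgap: pytest inserts it between the ini-file `addopts` and the real command line, so it can flip the marker expression without deepeval forwarding anything. A sketch, assuming deepeval runs pytest in the inherited environment (`test_llm.py` is a hypothetical file name):

```python
import os
import subprocess

# PYTEST_ADDOPTS is parsed after pyproject.toml's addopts ("-m 'not llm'"),
# so "-m llm" overrides the exclusion for this invocation only.
env = {**os.environ, "PYTEST_ADDOPTS": "-m llm"}
subprocess.run(["deepeval", "test", "run", "test_llm.py"], env=env, check=True)
```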

Desktop (please complete the following information):

  • OS: macOS 14.4.1

Additional context

  • Python: 3.11.8
  • pytest: 8.1.1
  • deepeval: 0.21.26

@shippy Thank you so much!