omni-us / jsonargparse

Implement minimal boilerplate CLIs derived from type hints and parse from command line, config files and environment variables

Home Page: https://jsonargparse.readthedocs.io


[Question] Is it possible to change the init_args of list elements when giving a config via ActionConfigFile?

vinnamkim opened this issue · comments

This is a minimal reproduction. I have this Python code, which uses jsonargparse.ArgumentParser:

# test.py

from lightning.pytorch.cli import LRSchedulerCallable, ReduceLROnPlateau
from jsonargparse import ArgumentParser, ActionConfigFile

class Model:
    def __init__(
        self,
        schedulers: list[LRSchedulerCallable] = lambda optim: ReduceLROnPlateau(
            optim,
            monitor="val/map50",
        ),
    ):
        self.schedulers = schedulers


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--model", type=Model)
    parser.add_argument("--config", action=ActionConfigFile)
    cfg = parser.parse_args()
    print(cfg)

and this YAML configuration file for it:

# test.yaml

model:
  class_path: __main__.Model
  init_args:
    schedulers:
      - class_path: torch.optim.lr_scheduler.OneCycleLR
        init_args:
          max_lr: 0.1
      - class_path: lightning.pytorch.cli.ReduceLROnPlateau
        init_args:
          monitor: val/custom
          mode: max
          factor: 0.1
          patience: 4

What I want is to override the monitor value of ReduceLROnPlateau without changing the other configuration values in the YAML file. For example:

python test.py --config test.yaml --model.schedulers.monitor val/mAP50 --print_config

The output I expect is:

model:
  class_path: __main__.Model
  init_args:
    schedulers:
    - class_path: torch.optim.lr_scheduler.OneCycleLR
      init_args:
        max_lr: 0.1
        ...
    - class_path: lightning.pytorch.cli.ReduceLROnPlateau
      init_args:
        monitor: val/mAP50
        mode: max
        factor: 0.1
        patience: 4

However, what I actually got is:

model:
  class_path: __main__.Model
  init_args:
    schedulers:
    - class_path: torch.optim.lr_scheduler.OneCycleLR
      init_args:
        max_lr: 0.1
        ...
    - class_path: lightning.pytorch.cli.ReduceLROnPlateau
      init_args:
        monitor: val/mAP50
        mode: min
        factor: 0.1
        patience: 10
        threshold: 0.0001
        threshold_mode: rel
        cooldown: 0
        min_lr: 0.0
        eps: 1.0e-08
        verbose: false

The other values of ReduceLROnPlateau are reset to their defaults, rather than keeping the ones from the configuration file.

Is there any method to allow this kind of overriding?

I would consider this a bug. For a class instance type hint, overriding a single init arg of the class is supposed to change only that arg and keep all other init_args as before. But this seems to not be working for callables that return a class instance. The behavior must be consistent for these two cases.
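To illustrate, here is a minimal sketch of the expected behavior for a plain class instance type hint. The Scheduler class is hypothetical, not code from this issue, and the override key syntax reflects my understanding of jsonargparse's subclass arguments:

# sketch.py

from jsonargparse import ArgumentParser

class Scheduler:
    def __init__(self, monitor: str = "loss", patience: int = 10):
        self.monitor = monitor
        self.patience = patience

if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--sched", type=Scheduler)
    cfg = parser.parse_args([
        # give a full config first ...
        '--sched={"class_path": "__main__.Scheduler",'
        ' "init_args": {"monitor": "val/map50", "patience": 4}}',
        # ... then override a single init arg
        "--sched.init_args.monitor=val/custom",
    ])
    print(cfg.sched)  # patience stays 4; only monitor changes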

Note that in #460 there is a request to always reset to defaults. Even if that gets implemented, the default behavior should still be the overriding that you want.

Thank you for reporting!

Initially I thought that overriding of arguments was not working correctly for all callable types that return a class. However, after another look I noticed that without the list, i.e. scheduler: LRSchedulerCallable, the overriding works correctly. Thus, the issue only appears when the type is nested inside a list. Unfortunately, this might mean it is more difficult to fix.
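For comparison, this is a sketch of the non-list variant in which the overriding works, adapted from the reproduction code (a single-callable default is valid here because the type is no longer a list):

# variant.py

from lightning.pytorch.cli import LRSchedulerCallable, ReduceLROnPlateau
from jsonargparse import ArgumentParser, ActionConfigFile

class Model:
    def __init__(
        self,
        scheduler: LRSchedulerCallable = lambda optim: ReduceLROnPlateau(
            optim,
            monitor="val/map50",
        ),
    ):
        self.scheduler = scheduler

if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--model", type=Model)
    parser.add_argument("--config", action=ActionConfigFile)
    cfg = parser.parse_args()
    print(cfg)

With a config file analogous to test.yaml but containing a single scheduler entry, an override like --model.scheduler.monitor val/mAP50 changes only monitor and keeps the other init_args from the file.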

Another comment, unrelated to the bug: in the reproduction code the default for schedulers is invalid, because it is a single callable instead of a list of callables.
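For completeness, a sketch of a valid default that wraps the callable in a list (the mutable list default is kept only to stay close to the reproduction):

# fixed_default.py

from lightning.pytorch.cli import LRSchedulerCallable, ReduceLROnPlateau

class Model:
    def __init__(
        self,
        schedulers: list[LRSchedulerCallable] = [
            lambda optim: ReduceLROnPlateau(optim, monitor="val/map50"),
        ],
    ):
        self.schedulers = schedulers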