[Question] Evaluate experiment with different model configuration
marcbenedi opened this issue
Greetings,
When I try to evaluate from a checkpoint which has a different model configuration, I get an error. For example:
`eval.yaml`:

```yaml
...
defaults:
  - model: x.yaml
...
ckpt_path: /path/to/experimentA.ckpt
```
`experiment/experimentA.yaml`:

```yaml
defaults:
  - model: x.yaml

model:
  lat_dim: 300 # let's assume model/x.yaml sets it to 100
```
When I execute `src/eval.py`, it tries to use the default model config `x.yaml`.
Is there any way to get the model configuration from the experiment, i.e. the properties that were overridden?
For now, I modify `eval.yaml` and copy in all the overrides, but it feels wrong, since I need to update the file for every minor change the experiment was trying.
Thanks!
I also encountered this issue. Is there a way to pull a specific experiment's config to do the evaluation?
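One direction that might work (an untested assumption about this template: it requires `eval.yaml` to expose an optional `experiment` group in its defaults list):

```yaml
# eval.yaml -- hypothetical addition to the defaults list
defaults:
  - model: x.yaml
  - experiment: null # selectable from the command line, ignored if unset
```

With that in place, something like `python src/eval.py experiment=experimentA ckpt_path=/path/to/experimentA.ckpt` should re-apply the same overrides that were used for training.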