What happens when I use anomalib train --config?
luoyq6 opened this issue
luoyq6 commented
Describe the bug
Dataset
Folder
Model
PatchCore
Steps to reproduce the behavior
anomalib train --config /opt/workspace/luoyq/anomalib/src/configs/model/patchcore.yaml
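For context, the file passed here is a model-only config. A combined training config typically defines both a model and a data section. A minimal sketch of what such a file might look like for a Folder dataset with PatchCore is below — it assumes anomalib's LightningCLI-style `class_path`/`init_args` layout, and the dataset name, paths, and folder names are illustrative, so verify the exact fields against your installed version:

```yaml
# Hypothetical combined config: both model and data sections.
# Layout follows the class_path/init_args convention used by
# anomalib's LightningCLI; field names may differ by version.
model:
  class_path: anomalib.models.Patchcore
  init_args:
    backbone: wide_resnet50_2
    coreset_sampling_ratio: 0.1

data:
  class_path: anomalib.data.Folder
  init_args:
    name: my_dataset             # illustrative dataset name
    root: ./datasets/my_dataset  # illustrative path
    normal_dir: good             # folder of defect-free images
    abnormal_dir: defect         # folder of anomalous images
```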
OS information:
- OS: [e.g. Ubuntu 20.04]
- Python version: [e.g. 3.10.0]
- Anomalib version: [e.g. 0.3.6]
- PyTorch version: [e.g. 1.9.0]
- CUDA/cuDNN version: [e.g. 11.1]
- GPU models and configuration: [e.g. 2x GeForce RTX 3090]
- Any other relevant information: [e.g. I'm using a custom dataset]
Expected behavior
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
patchcore.yaml
Logs
no
Code of Conduct
- I agree to follow this project's Code of Conduct
Samet Akcay commented
Can you show the output of `anomalib --help`? For example, here is mine:
╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Usage: anomalib [-h] [-c CONFIG] [--print_config [=flags]] {install,fit,validate,test,train,predict,export} ... │
│ │
│ │
│ Options: │
│ -h, --help Show this help message and exit. │
│ -c, --config CONFIG Path to a configuration file in json or yaml format. │
│ --print_config [=flags] │
│ Print the configuration after applying all other arguments and exit. The optional flags customizes the output and are one │
│ or more keywords separated by comma. The supported flags are: comments, skip_default, skip_null. │
│ │
│ Subcommands: │
│ For more details of each subcommand, add it as an argument followed by --help. │
│ │
│ │
│ Available subcommands: │
│ install Install the full-package for anomalib. │
│ fit Runs the full optimization routine. │
│ validate Perform one evaluation epoch over the validation set. │
│ test Perform one evaluation epoch over the test set. It's separated from fit to make sure you never run on your test set until you want to. │
│ train Fit the model and then call test on the trained model. │
│ predict Run inference on a model. │
│ export Export the model to ONNX or OpenVINO format. │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
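As a quick check, the `--print_config` flag listed in the help above can dump the fully resolved configuration and exit without training, which shows exactly what the command received from the YAML file (the path below is the one from the original report):

```shell
# Print the merged configuration and exit without training
# (--print_config is documented in the help output above)
anomalib train --config /opt/workspace/luoyq/anomalib/src/configs/model/patchcore.yaml --print_config
```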