AndreasLH / cookie_test


MLOps integration exercises

[Badges: Run tests CI, codecov | Code coverage graphs: Grid, Sunburst]

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.readthedocs.io

Project based on the cookiecutter data science project template. #cookiecutterdatascience

How to run the scripts
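Before running any of the targets below, the environment can be set up as the project tree suggests; a minimal sketch (the exact steps are an assumption):

pip install -r requirements.txt
pip install -e .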

To build the dataset:

make data

To run training:

make train

To run prediction (note that the Makefile target is named `evaluate`):

make evaluate

To run experiments, sweeping over the hyperparameters:

make experiments
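These targets presumably wrap the Python entry points listed in the project tree; a rough sketch of the likely expansions (the script paths come from the tree, the arguments and flags are assumptions):

python src/data/make_dataset.py data/raw data/processed
python src/models/train_model.py --epochs 2
python src/models/predict_model.py models/checkpoint.pth data/interim/example_images.npz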

Docker

To run training in a container, mounting the local models directory so the checkpoint persists. In bash, note that `${pwd}` expands to nothing; the command substitution `$(pwd)` is what resolves to the working directory:

docker run --name experiment4 -v $(pwd)/models:/models/ trainer:latest train --epochs 2

On Windows, the `${PWD}` form below only works in PowerShell:

docker run --name experiment4 -v ${PWD}/models:/models/ trainer:latest train --epochs 2

To run prediction, mounting the trained checkpoint and the example images (`--rm` removes the container once it exits):

docker run --name predict --rm -v ${PWD}/models/checkpoint.pth:/models/checkpoint.pth -v ${PWD}/data/interim/example_images.npz:/example_images.npz predict:latest evaluate models/checkpoint.pth data/interim/example_images.npz
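The run commands above assume the `trainer:latest` and `predict:latest` images already exist locally; a minimal sketch of building them, assuming (hypothetically) Dockerfiles named `trainer.dockerfile` and `predict.dockerfile` in the repo root:

docker build -f trainer.dockerfile -t trainer:latest .
docker build -f predict.dockerfile -t predict:latest .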

About

License: MIT

Languages

Python 77.5%, Makefile 18.7%, Dockerfile 3.8%