vortico / flama

Fire up your models with the flame 🔥

Home Page: https://flama.dev

CLI command to interact with an ML model without a server

perdy opened this issue

A good addition to the Flama CLI would be a command that lets the user interact directly with an ML model without the need to spin up a server. This command should expose the same API available through HTTP, but in a command-line fashion. Currently, that means:

  • A command for inspecting the model.
  • A command for making predictions using the model.

The help text of the main command should look something like:

Usage: flama model [OPTIONS] MODEL_PATH COMMAND [ARGS]...

  Interact with an ML model without a server.

  This command is used to interact directly with an ML model without the
  need for a server. It can be used to perform any operation supported by
  the model, such as inspect or predict. MODEL_PATH is the path of the
  model to be used, e.g. 'path/to/model.flm'. It can be passed directly as
  a command-line argument or via the FLAMA_MODEL_PATH environment variable.

Options:
  --help  Show this message and exit.

Commands:
  inspect  Inspect an ML model.
  predict  Make a prediction using an ML model.
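
A minimal sketch of how this command group could be wired, assuming click as the CLI framework; flama.load as the '.flm' loader and the ctx.obj handoff are assumptions, not the final API:

import click

import flama  # ASSUMPTION: flama.load reads a packaged '.flm' model


@click.group(name="model")
@click.argument("model_path", envvar="FLAMA_MODEL_PATH", type=click.Path(exists=True))
@click.pass_context
def model(ctx, model_path):
    """Interact with an ML model without a server."""
    # Load the model once here and share it with the subcommands through
    # the click context, so inspect and predict get a ready-to-use object.
    ctx.obj = flama.load(model_path)  # ASSUMPTION: loader name and signature

Loading in the group callback keeps the MODEL_PATH-then-COMMAND shape shown in the usage line, since click consumes group-level arguments before dispatching to the subcommand.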

For the inspect subcommand:

Usage: flama model MODEL_PATH inspect [OPTIONS]

  Inspect an ML model.

  This command is used to inspect an ML model without the need for a server.
  It can be used to extract the ML model metadata, including the ID, the
  time when the model was created, information about the framework, and the
  model info, as well as the list of artifacts packaged with the model.

Options:
  -p, --pretty  Pretty print the model inspection.
  --help        Show this message and exit.
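
Continuing the sketch above, inspect could reduce to dumping that metadata as JSON; model.meta is a hypothetical name for wherever the packaged metadata ends up:

import json

import click


@model.command()
@click.option("-p", "--pretty", is_flag=True, help="Pretty print the model inspection.")
@click.pass_obj
def inspect(model, pretty):
    """Inspect an ML model."""
    # ASSUMPTION: the loaded model exposes the ID, creation time, framework
    # info, model info and artifacts as a dict-like 'meta' attribute.
    metadata = model.meta
    click.echo(json.dumps(metadata, indent=2 if pretty else None, default=str))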

For the predict subcommand:

Usage: flama model MODEL_PATH predict [OPTIONS]

  Make a prediction using an ML model.

  This command is used to make a prediction using an ML model without the
  need for a server. It can be used for batch predictions, so both the
  input and output arguments must be JSON files containing a list of input
  values, each input value being a list of values associated with the
  inputs of the model. The output will be the list of predictions
  associated with the input, with each prediction being a list of values
  representing the output of the model.

  Example:

  - input.json: [[0, 0], [0, 1], [1, 0], [1, 1]]

  - output.json: [[0], [1], [1], [0]]

Options:
  -f, --file FILENAME    File to be used as input for the model prediction in
                         JSON format. (default: stdin).
  -o, --output FILENAME  File to be used as output for the model prediction in
                         JSON format. (default: stdout).
  -p, --pretty           Pretty print the prediction output.
  --help                 Show this message and exit.
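
The predict subcommand could follow the same pattern; click.File with a '-' default gives the stdin/stdout fallback described in the options above, while the batch model.predict signature is an assumption:

import json

import click


@model.command()
@click.option("-f", "--file", "input_file", type=click.File("r"), default="-",
              help="File to be used as input for the model prediction in JSON format.")
@click.option("-o", "--output", "output_file", type=click.File("w"), default="-",
              help="File to be used as output for the model prediction in JSON format.")
@click.option("-p", "--pretty", is_flag=True, help="Pretty print the prediction output.")
@click.pass_obj
def predict(model, input_file, output_file, pretty):
    """Make a prediction using an ML model."""
    inputs = json.load(input_file)  # e.g. [[0, 0], [0, 1], [1, 0], [1, 1]]
    # ASSUMPTION: predict takes a batch of input rows and returns one
    # prediction per row.
    outputs = model.predict(inputs)
    if hasattr(outputs, "tolist"):  # numpy arrays are not JSON-serializable
        outputs = outputs.tolist()
    json.dump(outputs, output_file, indent=2 if pretty else None)
    output_file.write("\n")

With the input.json shown above, this would write something like the output.json example ([[0], [1], [1], [0]]) to the given file, or to stdout when no output file is passed.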