Python package to compute metrics on an NLU intent parsing pipeline

Snips NLU Metrics

This tool is a Python library for computing cross-validation and train/test metrics on an NLU intent parsing pipeline such as Snips NLU.

Its purpose is to help you evaluate and iterate on the intent parsing pipeline under test.

Install

$ pip install snips_nlu_metrics

NLU Metrics API

The Snips NLU metrics API consists of two functions: compute_train_test_metrics and compute_cross_val_metrics.

The metrics output (JSON) provides detailed information about the parsing performance of the tested engine.

Data

Some sample datasets, which can be used to compute metrics, are available in the samples directory of the repository. Alternatively, you can create your own dataset either by using snips-nlu's dataset generation tool or by using the Snips console.
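For illustration, here is a rough sketch of the top-level shape of such a dataset, with a single intent and one annotated utterance. Field names follow the snips-nlu dataset format, but treat this as an approximation and refer to the dataset generation tool for the authoritative schema:

```python
import json

# Approximate shape of a Snips dataset: utterances are lists of text chunks,
# where a chunk may be tagged with an entity and a slot name.
toy_dataset = {
    "language": "en",
    "intents": {
        "turnLightOn": {
            "utterances": [
                {"data": [
                    {"text": "turn on the light in the "},
                    {"text": "kitchen", "entity": "room", "slot_name": "room"},
                ]},
            ],
        },
    },
    "entities": {
        "room": {
            "data": [{"value": "kitchen", "synonyms": []}],
            "use_synonyms": True,
            "automatically_extensible": True,
        },
    },
}

with open("toy_dataset.json", "w") as f:
    json.dump(toy_dataset, f, indent=2)
```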

Examples

The Snips NLU metrics library can be used with any NLU pipeline that satisfies the following Engine API:

from builtins import object

class Engine(object):
    def fit(self, dataset):
        # Perform training ...
        return self

    def parse(self, text):
        # extract intent and slots ...
        return {
            "input": text,
            "intent": {
                "intentName": intent_name,
                "probability": probability
            },
            "slots": slots
        }

Snips NLU Engine

This library can be used to benchmark NLU solutions such as Snips NLU. To install the snips-nlu Python library and fetch the English language resources, run the following commands:

$ pip install snips-nlu
$ snips-nlu download en

Then, you can compute metrics for the snips-nlu pipeline using the metrics API as follows:

from snips_nlu import SnipsNLUEngine
from snips_nlu_metrics import compute_train_test_metrics, compute_cross_val_metrics

tt_metrics = compute_train_test_metrics(train_dataset="samples/train_dataset.json",
                                        test_dataset="samples/test_dataset.json",
                                        engine_class=SnipsNLUEngine)

cv_metrics = compute_cross_val_metrics(dataset="samples/cross_val_dataset.json",
                                       engine_class=SnipsNLUEngine,
                                       nb_folds=5)
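Both calls return the metrics as plain Python data that serializes to JSON, so you can persist the output and diff it between benchmark runs. A minimal sketch, where save_metrics is a hypothetical helper and not part of the library:

```python
import json

def save_metrics(metrics, path):
    # Persist a metrics dict so successive benchmark runs can be compared.
    with open(path, "w") as f:
        json.dump(metrics, f, indent=2, sort_keys=True)

# Hypothetical usage: save_metrics(tt_metrics, "train_test_metrics.json")
save_metrics({"example": {"precision": 1.0}}, "metrics_demo.json")
```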

Custom NLU Engine

You can also compute metrics on a custom NLU engine. Here is a simple example:

import random

from snips_nlu_metrics import compute_train_test_metrics

class MyNLUEngine(object):
    def fit(self, dataset):
        self.intent_list = list(dataset["intents"])
        return self

    def parse(self, text):
        return {
            "input": text,
            "intent": {
                "intentName": random.choice(self.intent_list),
                "probability": 0.5
            },
            "slots": []
        }

compute_train_test_metrics(train_dataset="samples/train_dataset.json",
                           test_dataset="samples/test_dataset.json",
                           engine_class=MyNLUEngine)
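Before wiring a custom engine into the metrics functions, it can be worth sanity-checking that it honors the fit/parse contract on a toy in-memory dataset. A self-contained sketch, where the engine mirrors MyNLUEngine so the snippet runs on its own:

```python
import random

class RandomIntentEngine(object):
    """Toy engine mirroring MyNLUEngine: random known intent, no slots."""

    def fit(self, dataset):
        self.intent_list = list(dataset["intents"])
        return self

    def parse(self, text):
        return {
            "input": text,
            "intent": {
                "intentName": random.choice(self.intent_list),
                "probability": 0.5,
            },
            "slots": [],
        }

engine = RandomIntentEngine().fit({"intents": {"greet": {}, "bye": {}}})
result = engine.parse("hello there")
```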

Contributing

Please see the Contribution Guidelines.

Copyright

This library is provided by Snips as Open Source software, licensed under the Apache License 2.0. See the LICENSE file for more information.
