MagiClass

A binary classifier that runs from preprocessing to modeling and classification, and automates benchmarking of multiple classifiers to choose the best.

Table of contents

  • General information
  • Routes
  • Modules
  • Technical requirements
  • Status
  • Contributing
  • Author
  • License

General information

MagiClass is a Machine Learning Flask app that performs preprocessing, modeling and text classification tasks so as to deliver a tagged dataset via its API.

The current version is focused on English corpora and binary classification (e.g. spam vs. not spam).

Routes

MagiClass offers 3 endpoints, each of which requires a directory name and a file name (see the sketch after this list):

  • .../preprocess/{directory}/{filename}
    the name of the folder followed by the name of the csv file to be preprocessed.
  • .../model/{directory}/{filename}
    the name of the folder followed by the name of the clean json file to be split into train/test datasets.
  • .../classify/{directory}/{filename}
    the name of the folder followed by the name of the clean json file to be tagged.
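
As a minimal sketch, assuming the app runs on a local Flask development server and that a 'spam' directory with a 'train' csv file exists under 'data/' (the base URL, directory and file names are placeholders, not values documented by this project), an endpoint can be called with the requests library:

    import requests

    # Placeholders: the base URL, the 'spam' directory and the 'train' file
    # name are assumptions for illustration only.
    BASE_URL = "http://127.0.0.1:5000"

    resp = requests.get(f"{BASE_URL}/preprocess/spam/train")
    print(resp.status_code)
    print(resp.json())  # the endpoint returns json, as documented below

The model and classify endpoints follow the same {directory}/{filename} pattern.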

Modules

1. Preprocessing

This first task allows two types of preprocessing, selected via the 'pipe' parameter:

  • 'min': .../preprocess/{directory}/{filename}?pipe=min
    cleaning (i.e. removing urls, emails, phone numbers, special characters) and language detection only
  • 'max': .../preprocess/{directory}/{filename}?pipe=max
    metadata extraction (urls, emails, phone numbers) and emphasis markers (series of capital letters, exclamation and question marks) performed before cleaning and language detection, so as to allow further analysis

Default: 'min'.

Output: json; a json file (clean_{filename}.json) is also saved in 'data/{directory}/dataset/'.
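
As a hedged illustration, a 'max' preprocessing request could be sent with the requests library as below (the 'spam' directory and 'train' file are placeholders):

    import requests

    # 'spam' and 'train' are placeholder names; pipe=max adds metadata
    # extraction and emphasis markers before cleaning, as described above.
    resp = requests.get(
        "http://127.0.0.1:5000/preprocess/spam/train",
        params={"pipe": "max"},
    )
    clean_records = resp.json()  # also saved by the app as clean_train.json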

2. Modeling

This second task requires the following mandatory parameters:

  • 'target': the name of the field that contains the dependent variable - that is Y
    E.g. .../model/spam/train?target=category
  • 'target_value': the value to be encoded 1
    E.g. .../model/spam/train?target=category&target_value=spam
  • 'factor': the name of the field that contains the independent variable - that is X
    E.g. .../model/spam/train?target=category&target_value=spam&factor=text

Customizable parameters are listed below (a request sketch follows the list):

  • 'narrow_lang': restrict the dataset to a specific language, based on the 'lang' field generated during the preprocessing phase
    E.g. .../model/spam/train?narrow_lang=en
    Default: 'none'
  • 'lemmatizer': if True, lemmatize the text designated as factor using spaCy (En)
    Default: False
  • 'vectorizer': choose between TfidfVectorizer ['tfidf'] and CountVectorizer ['bow'] to be included in the modeling pipeline
    E.g. .../model/spam/train?vectorizer=bow
    Default: 'tfidf'
  • 'resampler': if True, resample the dataset when the target distribution (sum of target values / sum of non-target values) is less than 0.85 or more than 1.15
    Default: True
  • 'classifiers': define one or more classifiers to be run - the available Naive Bayes algorithms are Multinomial ('MNB'), Bernoulli ('BNB') and Complement ('CNB')
    E.g. .../model/spam/train?classifiers=BNB+MNB
    Default: 'BNB+CNB+MNB'
  • 'mode': if 'single', one classifier has to be specified in the 'classifiers' parameter so the model can be saved with pickle in 'data/{directory}/model/'; otherwise, the listed classifiers are compared across various metrics
    Default: 'benchmark'
  • 'metric': define the core metric to select the best model among 'accuracy', 'roc_auc_score', 'f1', 'precision' and 'recall'
    Default: 'f1' (especially relevant in a binary classification problem)
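
As a sketch, a modeling request combining the mandatory parameters with a few customizable ones could look like this (the 'spam' directory, 'train' file and field names are taken from the examples above; the base URL is an assumption):

    import requests

    # Parameter names come from this README; values are illustrative.
    resp = requests.get(
        "http://127.0.0.1:5000/model/spam/train",
        params={
            "target": "category",     # field holding the dependent variable Y
            "target_value": "spam",   # value of 'category' to be encoded as 1
            "factor": "text",         # field holding the independent variable X
            "vectorizer": "tfidf",
            "metric": "f1",
        },
    )
    benchmark = resp.json()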

Output: json; a json file (benchmark.json) is also saved in 'data/{directory}/model/'.

The output provides useful information for evaluating the models as well as the quality of the dataset. In addition to the metrics mentioned above, it reports the train accuracy alongside the test accuracy as an overfitting indicator, the size of the train dataset, and the target ratio as an imbalance indicator.

Hyperparameters for Naive Bayes algorithms are automatically tuned using GridSearchCV.
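
The actual search space is not documented here; as an illustration only, tuning the smoothing parameter alpha of a Multinomial Naive Bayes classifier inside a scikit-learn pipeline could look like this sketch:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import GridSearchCV
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import Pipeline

    # Illustrative only: the app's real pipeline and parameter grid may differ.
    pipe = Pipeline([
        ("vectorizer", TfidfVectorizer()),
        ("classifier", MultinomialNB()),
    ])

    grid = GridSearchCV(
        pipe,
        param_grid={"classifier__alpha": [0.1, 0.5, 1.0]},
        scoring="f1",  # matches the default core metric
        cv=5,
    )
    # grid.fit(X_train, y_train) would then select the best alpha by cross-validated f1.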

3. Classification

This final task delivers the tagged dataset via .../classify/{directory}/{filename}. There is no need to define parameters: the ones used in the modeling phase are automatically retrieved from the 'benchmark' json file.

Output: json
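
As a hedged sketch, the tagged dataset could be retrieved and stored locally like this (the 'spam' directory and 'clean_new' file are placeholders; the file should be a clean json produced by the preprocessing step):

    import json
    import requests

    resp = requests.get("http://127.0.0.1:5000/classify/spam/clean_new")
    tagged = resp.json()

    # Persist the tagged dataset returned by the API.
    with open("tagged_dataset.json", "w", encoding="utf-8") as fh:
        json.dump(tagged, fh, ensure_ascii=False, indent=2)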

Technical requirements

The application is built with Python 3.8 and Flask 1.1.2.

Libraries include Pandas, LangDetect, spaCy and Scikit-Learn.

pip3 install -r requirements.txt

Status

This project is in progress.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

Author

License

This project is licensed under the MIT License - see the LICENSE file for details.
