DeepEnsemble

Build deep ensemble learning models

This repo brings the ideas behind scikit-learn's VotingClassifier and BaggingClassifier to deep networks such as CNNs, ANNs and RNNs, providing the capability of training deep networks by ensembling. The repo consists of two separate files:
singular_ensemble.py
customized_ensemble.py
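Before the main steps, here is a quick, repo-independent sketch of what the two voting strategies do: hard voting takes the majority class across estimators, while soft voting averages the estimators' predicted probabilities and then takes the argmax. The arrays below are made-up predictions for illustration, not output of this library.

```python
import numpy as np

# Made-up class predictions from 3 estimators on 4 samples
hard_preds = np.array([[0, 1, 1, 0],
                       [0, 1, 0, 0],
                       [1, 1, 1, 0]])

def hard_vote(preds):
    # Majority class per sample (column-wise majority)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)

# Made-up probabilities with shape (estimators, samples, classes)
soft_probs = np.array([[[0.9, 0.1], [0.2, 0.8]],
                       [[0.6, 0.4], [0.4, 0.6]],
                       [[0.3, 0.7], [0.1, 0.9]]])

def soft_vote(probs):
    # Average the probabilities over estimators, then pick the best class
    return probs.mean(axis=0).argmax(axis=1)

print(hard_vote(hard_preds))  # [0 1 1 0]
print(soft_vote(soft_probs))  # [0 1]
```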

Main steps

step 1

Clone the repo using the command below
git clone https://github.com/Moeed1mdnzh/DeepEnsemble.git
or simply download the repo as a ZIP archive

step 2

Install the required packages

pip install -r requirements.txt 

singular_ensemble.py

The point of this file is to build ensembles whose estimators all share a single architecture.
Let's find out how you can use it in your own models.

step 1

Put the file in the same directory as your own files and import the ensembling object in your main file.

from singular_ensemble import SingularEnsemble

step 2

Build the ensembling object based on your own needs

base = SingularEnsemble(layers, classes, n_estimators, voting = "hard", sampling = False, bootstrap = True, verbose = 1, save_to = False)

1-layers : a list containing all the layers of your model
2-classes : the number of classes in your data
3-n_estimators : the number of estimators to create for your ensemble
4-voting : the voting strategy used for predictions (the same argument as in sklearn.ensemble.VotingClassifier)
5-sampling : whether each estimator should train on a random subsample of the data (bagging/pasting)
6-bootstrap : whether sampling is done with replacement (only used when sampling is set to True)
7-verbose : clears the screen after the given number of fits
8-save_to : the path where the model will be saved. If given a string, the model is saved there; otherwise leave it as False
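To make the sampling and bootstrap options concrete: with sampling enabled, each estimator trains on a random subset of the data; bootstrap = True draws that subset with replacement (bagging) and bootstrap = False draws it without replacement (pasting). A minimal numpy sketch of the difference, where the dataset is just a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(10)  # stand-in for a dataset of 10 samples

# Bagging: indices drawn WITH replacement, so duplicates are possible
bag_idx = rng.choice(len(X), size=len(X), replace=True)

# Pasting: indices drawn WITHOUT replacement, so every index is unique
paste_idx = rng.choice(len(X), size=7, replace=False)

print(X[bag_idx])    # may contain repeated samples
print(X[paste_idx])  # 7 distinct samples
```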

step 3

Compile the model by passing its parameters as a dictionary. For instance:

base.compile({"loss": "binary_crossentropy", "optimizer": "adam", "metrics": ["accuracy"]})

Fit the model in the same way.

base.fit({"x": X_train, "y": y_train, "batch_size": 32, "epochs": 10, "validation_data": (X_test, y_test)})

After fitting the model you should see two new files named model.h5 and var.txt if save_to in the ensembling object is set to a path.
model.h5 is the saved model and var.txt contains some important variables, so don't modify either file.

step 4

Connect all models into one and save the result (if save_to is set to a path) by typing

base.aggregate()

step 5

Load And Predict

from singular_ensemble import SingularEnsemble as base
model, extra = base.load(PATH)
prediction = base.predict(model, sample, extra)

Note : PATH must be the directory in which the model was saved, for instance : "my_models/cnn/"
Warning : Do not change the name of the saved model, otherwise it can't be loaded properly

step 6 (optional)

Evaluate the model

performance = base.evaluate(model, X_test, y_test, extra, batch_size)

customized_ensemble.py

The point of this file is to build ensembles of several individual architectures.
Let's also see how this one works. How exciting!

step 1

Put the file in the same directory as your own files and import the ensembling object in your main file.

from customized_ensemble import CustomizedEnsemble

step 2

Build the ensembling object based on your own needs

base = CustomizedEnsemble(models, voting = "hard", sampling = False, bootstrap = True, verbose = 1, save_to = False)

1-models : a list containing all the individual models
2-voting : the voting strategy used for predictions (the same argument as in sklearn.ensemble.VotingClassifier)
3-sampling : whether each model should train on a random subsample of the data (bagging/pasting)
4-bootstrap : whether sampling is done with replacement (only used when sampling is set to True)
5-verbose : clears the screen after the given number of fits
6-save_to : the path where the model will be saved. If given a string, the model is saved there; otherwise leave it as False
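As an illustration, the models list could hold two small Keras models of different depths. This sketch assumes TensorFlow/Keras; the layer sizes, input shape and the commented save_to path are made up:

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Dense

# Two different architectures for the same binary-classification task
ann = Sequential([Input(shape=(20,)),
                  Dense(32, activation="relu"),
                  Dense(1, activation="sigmoid")])

deep_ann = Sequential([Input(shape=(20,)),
                       Dense(64, activation="relu"),
                       Dense(32, activation="relu"),
                       Dense(1, activation="sigmoid")])

models = [ann, deep_ann]
# base = CustomizedEnsemble(models, voting="hard", sampling=False,
#                           bootstrap=True, verbose=1, save_to="my_models/mixed/")
```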

step 3

Compile the models by passing a list that contains each model's parameters as a dictionary. For instance:

base.compile([{"loss": "binary_crossentropy", "optimizer": "adam", "metrics": ["accuracy"]},
              {"loss": "binary_crossentropy", "optimizer": "rmsprop", "metrics": ["accuracy"]},
              {"loss": "binary_crossentropy", "optimizer": "sgd", "metrics": ["accuracy"]}])

Fit the model in the same way.

base.fit([{"x": X_train, "y": y_train, "batch_size": 32, "epochs":10, "validation_data": (X_test, y_test)},
          {"x": X_train, "y": y_train, "batch_size": 32, "epochs":10, "validation_data": (X_test, y_test)},
          {"x": X_train, "y": y_train, "batch_size": 32, "epochs":5, "validation_data": (X_test, y_test)}])

After fitting the models you should see two new files named model.h5 and var.txt if save_to in the ensembling object is set to a path.
model.h5 is the saved model and var.txt contains some important variables, so don't modify either file.

step 4

Connect all models into one and save the result (if save_to is set to a path) by typing

base.aggregate()

step 5

Load and predict as before, except we have to do one extra thing: decompose the combined model back into its separate models.

from customized_ensemble import CustomizedEnsemble as base
model, extra = base.load(PATH)
models = base.decompose(model, extra)
prediction = base.predict(models, sample, extra)

Note : PATH must be the directory in which the model was saved, for instance : "my_models/cnn/"
Warning : Do not change the name of the saved model, otherwise it can't be loaded properly

step 6 (optional)

Evaluate the model.

performance = base.evaluate(models, X_test, y_test, extra, batch_size)

Hopefully you succeeded at creating your own ensembling model :)

License: MIT