Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Home Page: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/


Using Pre Processors in Prediction (ART classifier)

RoeyBokobza opened this issue · comments

As a user, you would expect that adding a preprocessor defense to the estimator's 'preprocessing_defences' list would cause it to be applied automatically to the input inside the predict function.

Instead, the user must explicitly apply each of those defenses to the input and pass the result to the predict function. This renders the 'preprocessing_defences' attribute effectively useless.
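The mismatch can be sketched with toy stand-ins (SmoothingDefence and ToyClassifier are hypothetical classes invented for illustration, not ART's API): the defence is stored on the estimator, but predict ignores it, so the caller has to apply it by hand.

```python
# Toy sketch of the reported behaviour: the defence is stored on the
# estimator, but predict() never consults it, so the user must apply it
# manually. These are hypothetical stand-ins, not ART classes.

class SmoothingDefence:
    """Hypothetical preprocessor: averages each value with its right neighbour (wrapping)."""
    def __call__(self, x, y=None):
        out = [(a + b) / 2 for a, b in zip(x, x[1:] + x[:1])]
        return out, y

class ToyClassifier:
    def __init__(self, preprocessing_defences=None):
        # The defences are stored on the estimator ...
        self.preprocessing_defences = preprocessing_defences or []

    def predict(self, x):
        # ... but never used here, mirroring the reported behaviour.
        return [1 if v > 0.5 else 0 for v in x]

defence = SmoothingDefence()
clf = ToyClassifier(preprocessing_defences=[defence])

x = [0.9, 0.1, 0.9, 0.1]
x_def, _ = defence(x)       # the user must apply the defence explicitly ...
preds = clf.predict(x_def)  # ... and pass the result to predict
```

Calling clf.predict(x) directly would silently skip the stored defence and give different predictions.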

With the current implementation of the 'self._apply_preprocessing' function, a defense is only applied if it implements the 'forward' function and its instance has been added to the 'preprocessing_operations' list. Here is the 'self._apply_preprocessing' function in the Estimator.py file:
[screenshot: the 'self._apply_preprocessing' function in Estimator.py]
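The loop just described can be approximated with a simplified sketch (ScaleOp and PreprocEstimator are illustrative stand-ins, not ART's actual code): only objects registered in 'preprocessing_operations' are run, through their 'forward' method.

```python
# Simplified sketch of the loop described above: only operations registered in
# self.preprocessing_operations are applied, via their forward() method.
# ScaleOp and PreprocEstimator are illustrative stand-ins, not ART's code.

class ScaleOp:
    """Hypothetical operation that implements the required forward()."""
    def __init__(self, factor):
        self.factor = factor

    def forward(self, x, y=None):
        return [v * self.factor for v in x], y

class PreprocEstimator:
    def __init__(self, preprocessing_operations):
        self.preprocessing_operations = preprocessing_operations

    def _apply_preprocessing(self, x, y=None):
        # Each registered operation is applied in sequence.
        for op in self.preprocessing_operations:
            x, y = op.forward(x, y)
        return x, y

est = PreprocEstimator(preprocessing_operations=[ScaleOp(2.0), ScaleOp(0.5)])
x_out, _ = est._apply_preprocessing([1.0, 2.0])
```

A defence that only defines '__call__' but is never added to this list is never reached by the loop, which is the gap the issue points out.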

In the case of postprocessors, everything works as a user would expect: all postprocessors in the 'postprocessing_defences' list are applied automatically as part of the estimator's predict function. Here is an example from the Estimator.py file:
[screenshot: the postprocessing loop in Estimator.py]
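That automatic behaviour can be sketched like this (RoundDefence and PostprocEstimator are stand-ins invented for illustration, not ART code): predict produces raw scores and then applies every defence in 'postprocessing_defences' with no action from the caller.

```python
# Sketch of the postprocessing behaviour described above: predict() produces
# raw scores, then applies every defence in postprocessing_defences
# automatically. RoundDefence and PostprocEstimator are hypothetical.

class RoundDefence:
    """Hypothetical postprocessor: rounds scores to one decimal place."""
    def __call__(self, preds):
        return [round(p, 1) for p in preds]

class PostprocEstimator:
    def __init__(self, postprocessing_defences=None):
        self.postprocessing_defences = postprocessing_defences or []

    def predict(self, x):
        preds = [v / 3.0 for v in x]   # stand-in for the real model output
        for defence in self.postprocessing_defences:
            preds = defence(preds)     # applied automatically, no user action
        return preds

est = PostprocEstimator(postprocessing_defences=[RoundDefence()])
preds = est.predict([1.0, 2.0])
```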

For now, a straightforward workaround is to add a similar piece of code inside the 'self._apply_preprocessing' method, as demonstrated in this small example:
[screenshot: workaround code added to 'self._apply_preprocessing']
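The workaround could look roughly like this (AddNoise and PatchedEstimator are simplified stand-ins; the real method also handles normalisation and framework-specific tensors): loop over 'preprocessing_defences' inside '_apply_preprocessing', mirroring the postprocessing code.

```python
# Sketch of the workaround described above: extend _apply_preprocessing so the
# entries in preprocessing_defences are looped over too, mirroring the
# postprocessing loop. AddNoise and PatchedEstimator are simplified stand-ins.

class AddNoise:
    """Hypothetical preprocessor defence with the usual (x, y) call signature."""
    def __call__(self, x, y=None):
        return [v + 0.1 for v in x], y

class PatchedEstimator:
    def __init__(self, preprocessing_defences=None):
        self.preprocessing_defences = preprocessing_defences or []

    def _apply_preprocessing(self, x, y=None):
        # Workaround: apply every stored defence here, so predict() no longer
        # relies on the caller activating them manually.
        for defence in self.preprocessing_defences:
            x, y = defence(x, y)
        return x, y

    def predict(self, x):
        x, _ = self._apply_preprocessing(x)
        return x  # stand-in for the real model call

est = PatchedEstimator(preprocessing_defences=[AddNoise()])
out = est.predict([0.0, 1.0])
```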

Hi @RoeyBokobza Thank you for your interest in ART! Have you observed that pre-processing steps are not being applied in your experiments?
The code for pre- and post-processing is slightly different because the pre-processing for normalisation has to be placed in sequence with the pre-processing defences; the combined list has therefore been renamed to self.preprocessing_operations, as seen in your first screenshot above.

Hey, thank you for responding!
I wanted the option to activate a different chain of pre-processors during each run of my experiments. That is why I expected there to be a function to which I could simply pass pre-processor instances and which would handle the rest automatically. When I realized this was not the case, I opened this issue so you could tell me whether I missed something, or whether this need is simply not addressed.
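One way to get that per-run flexibility is a small helper that takes any list of preprocessor-style callables (each returning an (x, y) pair) and applies them in order; apply_chain here is a hypothetical sketch, not an ART function.

```python
# Hypothetical helper for per-run preprocessing chains: pass any list of
# preprocessor-style callables and apply them in order before prediction.
# apply_chain is a sketch for illustration, not part of the ART API.

def apply_chain(x, defences, y=None):
    """Apply a list of preprocessor instances to x in the given order."""
    for defence in defences:
        x, y = defence(x, y)
    return x, y

# Two stand-in preprocessors for two different experiment runs:
double = lambda x, y=None: ([v * 2 for v in x], y)
shift = lambda x, y=None: ([v + 1 for v in x], y)

run_a, _ = apply_chain([1, 2], [double, shift])  # double first, then shift
run_b, _ = apply_chain([1, 2], [shift, double])  # shift first, then double
```

Because the chain is just an argument, each experiment run can pass a different list without touching the estimator.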