Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Home Page: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/


Backdoor attack on HuggingFace Automatic Speech Recognition models via ART's HuggingFaceClassifierPyTorch

OrsonTyphanel93 opened this issue · comments

Hello @f4str, @GiulioZizzo, @beat-buesser! Is it possible to dynamically parameterize the input interface of HuggingFaceClassifierPyTorch? It doesn't seem as flexible as the other ART classifiers because it expects a fixed channel layout. I'd like to use this classifier to launch backdoor attacks on HuBERT, Wav2Vec2, and similar models.

I've already managed to poison them (see the attached Wav2Vec2 comparison plots); now I'd like to train the models on this poisoned data, but I'm having problems reshaping the data to fit the classifier. See the attached error.

print(x_train.shape, "shape") 

(2500, 124, 129, 1) shape

print(y_train.shape, "shape")

(2500,) shape



# Load the Wav2Vec2 audio-classification model and wrap it in ART's classifier
import torch
import transformers
from art.estimators.classification.hugging_face import HuggingFaceClassifierPyTorch

input_shape = (124, 129, 1)
num_labels = len(commands)

model = transformers.AutoModelForAudioClassification.from_pretrained(
    'facebook/wav2vec2-base-960h',
    ignore_mismatched_sizes=True,
    num_labels=num_labels,
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

hf_model = HuggingFaceClassifierPyTorch(
    model=model,
    loss=loss_fn,
    optimizer=optimizer,
    input_shape=input_shape,
    nb_classes=num_labels,
    clip_values=(0, 1),
)
<ipython-input-23-80daec27e4d9> in <cell line: 15>()
     13 loss_fn = torch.nn.CrossEntropyLoss()
     14 
---> 15 hf_model = HuggingFaceClassifierPyTorch(
     16     model=model,
     17     loss=loss_fn,

18 frames

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    304                             weight, bias, self.stride,
    305                             _single(0), self.dilation, self.groups)
--> 306         return F.conv1d(input, weight, bias, self.stride,
    307                         self.padding, self.dilation, self.groups)
    308 

RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [1, 1, 124, 129, 1]
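For context, a minimal sketch (using a bare Conv1d layer as a stand-in, not the actual Wav2Vec2 feature extractor) that reproduces the same class of error: conv1d only accepts 2D unbatched or 3D batched tensors, so the extra spectrogram dimensions are rejected.

# Minimal reproduction sketch: a bare Conv1d standing in for Wav2Vec2's
# feature-extractor convolutions (an illustration, not ART internals).
import torch

conv = torch.nn.Conv1d(in_channels=1, out_channels=4, kernel_size=3)

ok = conv(torch.randn(1, 1, 1000))   # 3D (batch, channels, length): works
print(ok.shape)                      # torch.Size([1, 4, 998])

# A 5D spectrogram-shaped input raises the same RuntimeError as above:
# conv(torch.randn(1, 1, 124, 129, 1))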

[Attached figures (1) and (2): fig_plot_audio_comparison]

In my case, I'm using Hugging Face's Wav2Vec2ForCTC model, which expects input in a specific format, but I'm providing input of shape (124, 129, 1) to match my data, which I think is causing the mismatch.

To solve this problem, I guess I need to adjust the input shape to match what the model expects. According to the Hugging Face documentation, Wav2Vec2ForCTC expects input of shape (batch_size, sequence_length). I've already tried this but still get the same error.
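For illustration, a minimal sketch (assuming the raw waveforms are available; the arrays here are random placeholders, not the poisoned data) of the (batch_size, sequence_length) layout Wav2Vec2ForCTC expects, produced with the model's own processor:

# Sketch: build a (batch_size, sequence_length) batch with Wav2Vec2's processor.
# The waveforms below are random placeholders, not the poisoned data.
import numpy as np
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base-960h')
waveforms = [np.random.randn(16000), np.random.randn(12000)]  # 1D raw audio clips
inputs = processor(waveforms, sampling_rate=16000, padding=True, return_tensors='pt')
print(inputs.input_values.shape)  # torch.Size([2, 16000]) - batch x sequence_length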

Hi guys, thanks! I just had to customize this ART classifier and transpose my data to 3 channels.
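One guess at what that could look like (this is an assumption, not the author's actual code): move the channel axis forward and repeat it, so the (N, 124, 129, 1) spectrograms become an image-like batch with 3 channels.

# Hypothetical reading of "transpose my data to 3 channels" - not the
# author's code. Turns (N, 124, 129, 1) NHWC spectrograms into 3-channel NCHW.
import numpy as np

x_train = np.zeros((2500, 124, 129, 1), dtype=np.float32)  # placeholder data
x_nchw = np.transpose(x_train, (0, 3, 1, 2))               # (2500, 1, 124, 129)
x_3ch = np.repeat(x_nchw, 3, axis=1)                       # (2500, 3, 124, 129)
print(x_3ch.shape)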

I'll be making the notebook public soon, I just need to reorganize it... :)

Hi @OrsonTyphanel93,

Thanks for bringing this up!

So HuggingFaceClassifierPyTorch will try to perform a forward pass to determine the model structure - something that is often needed for poisoning attacks and defences. To do so, a dummy input sample is created based on the supplied input_shape. Hence, if the model expects 1D inputs, the input_shape should just be the sequence length, i.e. without any batch dimension.
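Conceptually, that check looks something like this sketch (not ART's exact internals; `model` is the AutoModelForAudioClassification instance loaded earlier in the thread):

# Sketch of the dummy forward pass the wrapper performs (not ART's exact code).
# `model` is the AutoModelForAudioClassification loaded earlier in the thread.
import torch

input_shape = (1000,)                   # sequence length only, no batch dimension
dummy = torch.rand((1,) + input_shape)  # the wrapper adds the batch dimension
_ = model(dummy)                        # succeeds for a 1D audio model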

The code snippet below should work.

What's the motivation in your case which requires the input_shape to be different compared to the standard Wav2Vec2ForCTC format? Perhaps it is a use-case we have not considered and we can look into supporting it.

import numpy as np
import torch
import transformers
from art.estimators.classification.hugging_face import HuggingFaceClassifierPyTorch

model = transformers.AutoModelForAudioClassification.from_pretrained(
    'facebook/wav2vec2-base-960h',
    ignore_mismatched_sizes=True,
    num_labels=2,
)

input_values = np.random.normal(size=1000)
_ = HuggingFaceClassifierPyTorch(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-4),
    input_shape=input_values.shape,
    nb_classes=2,
)
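Following on from that snippet, a hedged usage sketch (the data, shapes, and training arguments here are assumptions, not from the issue) of training the wrapper on a (batch, sequence_length) array:

# Hypothetical continuation: fit the wrapped model on 1D audio batches.
import numpy as np

classifier = HuggingFaceClassifierPyTorch(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-4),
    input_shape=(1000,),   # sequence length only
    nb_classes=2,
)
x = np.random.normal(size=(8, 1000)).astype(np.float32)  # placeholder waveforms
y = np.random.randint(0, 2, size=8)                      # placeholder index labels
classifier.fit(x, y, batch_size=4, nb_epochs=1)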

Notebook link: HuggingFace Backdoor attack

Hi guys, @beat-buesser! Here is the final notebook; you can now test it with codecov, please. I think it has a very fast optimization.

I've tested it with all the audio models available on HuggingFace, and they've all been 'backdoored'! As far as I know, you can keep the classifier as it is. I've customized the classifier in this code so that users who work with audio data won't have any trouble using your classifier.

thanks again guys!

Thank you very much, dear @GiulioZizzo, for your help!

Some particular requirements, such as the following, may be reasons for HuggingFaceClassifierPyTorch to support an alternate input format instead of the standard Wav2Vec2ForCTC format:

- Individual audio sequence processing: rather than batching similar audio sequences together, the model might be built to handle each one separately. Applications such as processing audio fragments of varying lengths or real-time speech recognition may benefit from this (see the padding sketch after this list).
- Specialized architecture: specific procedures or layers in the model may require a particular kind of 1D input.
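For the varying-length case, a minimal sketch (an assumption, not from the thread) of padding or trimming clips to one fixed sequence length so a single input_shape can be used:

# Sketch: normalize variable-length clips to a fixed length before wrapping in ART.
import numpy as np

def pad_or_trim(waveform: np.ndarray, target_len: int = 16000) -> np.ndarray:
    """Right-pad with zeros, or trim, so every clip has the same length."""
    if len(waveform) >= target_len:
        return waveform[:target_len]
    return np.pad(waveform, (0, target_len - len(waveform)))

clips = [np.random.randn(12000), np.random.randn(20000)]  # placeholder audio
batch = np.stack([pad_or_trim(c) for c in clips])         # shape (2, 16000)
print(batch.shape)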