advboxes / AdvBox

AdvBox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. AdvBox also provides a command-line tool to generate adversarial examples with zero coding.


EOT Parameters

kqvd opened this issue

commented

Hi again,

From what I understand of the transformation.py file, calibration.jpg undergoes 94 transformations displayed via imshow. What does the calibration.xml file, which contains the box coordinates for the JPG image, do?

I'm also using a surveillance camera for my project. Are there any variables I should modify to accommodate EOT?

The calibration.xml contains the coordinates of the white paper in the JPG image. It is used to adjust the affine transformation settings to make sure they stay in a proper range.
Empirically, the closer the affine transformation settings are to the transformations you expect in real life, the better the sticker generalizes.
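
For illustration, here is a minimal sketch of reading such a box from the XML and turning it into a jitter bound for the affine sampling. The tag names (`bndbox`/`xmin`…) follow the Pascal VOC convention and are an assumption about the file's layout, as is the 10% margin:

```python
# Sketch: read the white-paper box from calibration.xml and derive a pixel
# range for the random affine jitter. VOC-style tag names are assumed.
import xml.etree.ElementTree as ET

def load_paper_box(xml_path):
    root = ET.parse(xml_path).getroot()
    box = root.find(".//bndbox")  # first annotated box: the white paper
    return tuple(float(box.find(t).text) for t in ("xmin", "ymin", "xmax", "ymax"))

xmin, ymin, xmax, ymax = load_paper_box("calibration.xml")
# Bound the random translation so the sticker stays on the paper in every
# sampled transformation (the +/-10% margin is an arbitrary example value).
max_shift_x = 0.1 * (xmax - xmin)
max_shift_y = 0.1 * (ymax - ymin)
```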

commented

Thanks for the information. Lastly, I have one more query about ODD.

The build graph has additional variables defined after layer 19. I noticed the class and score variables are defined from YOLOv1's output shape of [1, 1470]. In my case, I've built a TensorFlow graph for YOLOv2 Tiny COCO, which has a different flat output of [1, 71825]; its actual tensor output is (1, 13, 13, 425). Is there a way to define my class and object-score variables as a list in the graph?

I also want to understand what p1 and p2 mean so I know what I need to multiply (where do c[:,:,:,14] and s[:,:,:,0], s[:,:,:,1] come from?). I'm looking forward to getting this working and experimenting with adversarial patches.

[Screenshot 2021-05-12 181229]

Hello.

The source code we used might help you: https://github.com/gliese581gg/YOLO_tensorflow/blob/master/YOLO_tiny_tf.py

c[:,:,:,14], s[:,:,:,0], etc. are the tensors for the class "person". For your case, you have to check the original paper, which explains how the bounding-box coordinates, class confidences, and other information are embedded in the model's output, trained by regression, and post-processed (NMS). Usually, object-detector implementations do not provide a handy API for hacking, so we have to interpret them and rework them into a designed loss function.
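
To make the slicing concrete: YOLOv1's flat [1, 1470] output is 7×7×20 class probabilities, then 7×7×2 box confidences, then 7×7×2×4 box coordinates, and class 14 in the Pascal VOC ordering is "person". Below is a sketch of that decoding in NumPy, plus the analogous reshape for a YOLOv2-Tiny COCO head; the p1/p2 products are my reading of the thread, not the repo's exact code:

```python
# Sketch: how the flat YOLO outputs map onto the c/s tensors in the thread.
import numpy as np

out = np.zeros((1, 1470), dtype=np.float32)     # YOLOv1: 7x7 grid, 20 classes, 2 boxes
c = out[:, :980].reshape(1, 7, 7, 20)           # per-cell class probabilities
s = out[:, 980:1078].reshape(1, 7, 7, 2)        # per-cell confidence of the 2 boxes
# boxes = out[:, 1078:].reshape(1, 7, 7, 2, 4)  # (x, y, w, h) for each box
p1 = c[:, :, :, 14] * s[:, :, :, 0]             # "person" score via box 1 (assumed)
p2 = c[:, :, :, 14] * s[:, :, :, 1]             # "person" score via box 2 (assumed)

# YOLOv2-Tiny COCO: 13x13 cells, 5 anchors, 4 box + 1 objectness + 80 classes
out2 = np.zeros((1, 71825), dtype=np.float32)
grid = out2.reshape(1, 13, 13, 5, 85)           # conventional channel ordering assumed
objectness = grid[..., 4]                       # per-anchor objectness logit
class_logits = grid[..., 5:]                    # 80 class logits; COCO "person" is 0
person_score = class_logits[..., 0]
```

With the output regrouped this way, the class and object-score tensors can be sliced directly in the graph rather than handled as a flat list.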

commented

I've managed to interpret the post-processing part of the object detector model, but I will need to look into the loss function, so thank you again for explaining it clearly.

Going back to transformation.py (sorry for the barrage of questions): there's a camera coordinate system.
Is Z (2000) the distance in metres? Also, what do X (92.3) and Y (135.8) mean? Is there a reference paper on this topic? Finally, in SCALE = HALF_SZ / 1512, does the 1512 refer to the half width or the height of the image?

X (92.3), Y (135.8), and 1512 are pixel coordinates of the white paper's corners in the picture taken; 1512 should be the full width and height of the picture.
References for the affine transform matrix:
http://en.wikipedia.org/wiki/Rotation_matrix
http://en.wikipedia.org/wiki/Euler_angles
http://mathworld.wolfram.com/EulerAngles.html
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110.5134
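
A minimal sketch of the kind of transform those references describe: compose a rotation from Euler angles, place the sticker plane Z units in front of the camera, and pinhole-project back to pixels. The constants mirror the thread (Z = 2000, a 1512-pixel image); the exact composition in transformation.py may differ:

```python
# Sketch: Euler-angle rotation + pinhole projection, per the references above.
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    # Z-Y-X Euler composition (see the Wikipedia/MathWorld pages above).
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

Z = 2000.0              # camera distance from the sticker plane
HALF_SZ = 1512 / 2      # half of the picture's full width/height in pixels
SCALE = HALF_SZ / 1512  # the thread's normalization constant

def project(points_3d, R):
    # Rotate points on the sticker plane, then pinhole-project to 2D pixels.
    p = points_3d @ R.T
    depth = Z + p[:, 2]  # the plane sits Z units in front of the camera
    return SCALE * Z * p[:, :2] / depth[:, None]

corners = np.array([[-100., -100., 0.], [100., -100., 0.],
                    [100., 100., 0.], [-100., 100., 0.]])
print(project(corners, rotation_matrix(0.1, 0.05, 0.0)))
```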