lea4n / AdversarialModel


Invisible data poisoning

WARNING

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This code is provided for research purposes only.

This code uses third-party code: see the downloaded files!

From a theoretical point of view, this code should be fully reproducible.

In practice, however, it is probably not cross-platform, it likely has hidden dependencies, and results may vary with the software versions, the NVIDIA driver, and the NVIDIA hardware.

If you do not manage to reproduce the experiments, feel free to ask me for more details.

I will try to answer all requests as my available time allows.

DESCRIPTION

This code highlights the possibility of invisible data poisoning: strongly modifying the model that results from otherwise fair training by adding imperceptible perturbations to the training samples.
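To give a rough idea of the mechanics (this is only a simplified sketch, not the method implemented in this repository), such attacks typically optimize an attacker objective over the training inputs while projecting the perturbation back into a small L-infinity ball so the poisoned images remain visually indistinguishable from the originals. The sketch below uses PyTorch against a surrogate model; all names (`craft_poison`, `epsilon`, `y_poison`) are illustrative assumptions.

```python
# Hypothetical sketch of crafting imperceptible poisoning perturbations.
# Not the author's method: it only illustrates optimizing an attacker
# objective on training samples under an imperceptibility constraint.
import torch
import torch.nn.functional as F

def craft_poison(surrogate, x_train, y_poison, epsilon=2/255, steps=50, lr=0.01):
    """Return x_train + delta with ||delta||_inf <= epsilon, where delta is
    optimized so that the (frozen) surrogate model fits attacker-chosen
    labels y_poison on the perturbed samples."""
    delta = torch.zeros_like(x_train, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = surrogate(x_train + delta)
        loss = F.cross_entropy(logits, y_poison)  # attacker objective
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Project into the imperceptibility budget and valid pixel range.
            delta.clamp_(-epsilon, epsilon)
            delta.copy_((x_train + delta).clamp(0, 1) - x_train)
    return (x_train + delta).detach()
```

A victim who later trains on the returned samples sees images that look clean, which is what makes this kind of poisoning "invisible"; the actual attack in this repository may use a different objective and optimization scheme.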

Languages

Python 61.2%, Shell 38.8%