ARLib

An open-source framework for conducting data poisoning attacks on recommendation systems, designed to assist researchers and practitioners. This repo is released with our survey paper on poisoning attacks against recommender systems.

Members:
Zongwei Wang, Chongqing University, China, zongwei@cqu.edu.cn
Hao Ma, Chongqing University, China, ma_hao@cqu.edu.cn
Chenyu Li, Chongqing University, China, chenyu_li@stu.cqu.edu.cn

Supported by:
Prof. Min Gao, Chongqing University, China, gaomin@cqu.edu.cn
ARC Training Centre for Information Resilience (CIRES), University of Queensland, Australia

Framework

[Framework overview figure]

Usage

  1. Two configuration files, attack_parser.py and recommend_parser.py, live in the conf directory; select and configure the recommendation model and the attack model by editing them (a hedged sketch of such an option file follows this list).
  2. Run main.py.
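
The sketch below illustrates, purely as an assumption about style, how an argparse-based option file in the spirit of conf/attack_parser.py might look. None of the option names are taken from ARLib; consult the actual files for the real settings.

```python
# Hypothetical option file in the spirit of conf/attack_parser.py.
# Every option name below is an assumption; check the real file.
import argparse

def attack_parse_args():
    parser = argparse.ArgumentParser(description="Attack model settings")
    parser.add_argument("--attackModelName", default="RandomAttack",
                        help="which implemented attack to run")       # assumption
    parser.add_argument("--maliciousUserRatio", type=float, default=0.01,
                        help="fraction of fake users to inject")      # assumption
    parser.add_argument("--targetItemNum", type=int, default=5,
                        help="number of items to promote")            # assumption
    return parser.parse_args()
```

With the two files configured, run python main.py to start an experiment.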

Implemented Models

| Recommender Model | Paper |
| --- | --- |
| GMF | Koren et al. Matrix Factorization Techniques for Recommender Systems, IEEE Computer'09. |
| WRMF | Hu et al. Collaborative Filtering for Implicit Feedback Datasets, ICDM'08. |
| NCF | He et al. Neural Collaborative Filtering, WWW'17. |
| NGCF | Wang et al. Neural Graph Collaborative Filtering, SIGIR'19. |
| LightGCN | He et al. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation, SIGIR'20. |
| SSL4Rec | Yao et al. Self-supervised Learning for Large-scale Item Recommendations, CIKM'21. |
| NCL | Lin et al. Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning, WWW'22. |
| SGL | Wu et al. Self-supervised Graph Learning for Recommendation, SIGIR'21. |
| SimGCL | Yu et al. Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation, SIGIR'22. |
| XSimGCL | Yu et al. XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation, TKDE'23. |

| Attack Model | Paper | Case |
| --- | --- | --- |
| NoneAttack | N/A | Black |
| RandomAttack | Lam et al. Shilling Recommender Systems for Fun and Profit, WWW'04. | Black |
| BandwagonAttack | Gunes et al. Shilling Attacks against Recommender Systems: A Comprehensive Survey, Artificial Intelligence Review'14. | Black |
| AUSH | Lin et al. Attacking Recommender Systems with Augmented User Profiles, CIKM'20. | Gray |
| LegUP | Lin et al. Shilling Black-box Recommender Systems by Learning to Generate Fake User Profiles, IEEE TNNLS'22. | Gray |
| GOAT | Wu et al. Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack, Information Sciences'21. | Gray |
| FedRecAttack | Rong et al. FedRecAttack: Model Poisoning Attack to Federated Recommendation, ICDE'22. | Gray |
| A_ra | Rong et al. Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios, IJCAI'22. | Gray |
| PGA | Li et al. Data Poisoning Attacks on Factorization-based Collaborative Filtering, NIPS'16. | White |
| DL_Attack | Huang et al. Data Poisoning Attacks to Deep Learning Based Recommender Systems, arXiv'21. | White |
| PipAttack | Zhang et al. PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion, WSDM'22. | Gray |
| RAPU | Zhang et al. Data Poisoning Attack against Recommender System Using Incomplete and Perturbed Data, KDD'21. | White |
| PoisonRec | Song et al. PoisonRec: An Adaptive Data Poisoning Framework for Attacking Black-box Recommender Systems, ICDE'21. | Black |
| CLeaR | Wang et al. Poisoning Attacks against Contrastive Recommender Systems, arXiv'23. | White |
| GTA | Wang et al. Revisiting Data Poisoning Attacks on Deep Learning Based Recommender Systems, ISCC'23. | Black |

Implement Your Model

Decide whether you want to implement an attack model or a recommendation model, then add your file under the corresponding directory.

If you add an attack method:

  1. Decide whether it needs gradient information from the recommender model, and if so set self.recommenderGradientRequired=True.
  2. Decide whether it needs access to the trained recommender model, and if so set self.recommenderModelRequired=True.
  3. Reimplement the function posionDataAttack() (a sketch follows this list).
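
The skeleton below is a minimal sketch of where these pieces fit. Only recommenderGradientRequired, recommenderModelRequired, and posionDataAttack() (the framework's own spelling) come from ARLib; the constructor signature, data layout, and the random-injection logic are illustrative assumptions.

```python
import numpy as np

# Hypothetical attack skeleton. Only the two *Required flags and
# posionDataAttack() follow ARLib's conventions; everything else
# (constructor signature, data layout) is an assumption.
class MyAttack:
    def __init__(self, arg, data):
        self.arg = arg    # options parsed from conf/attack_parser.py
        self.data = data  # user-item interaction matrix to poison
        self.recommenderGradientRequired = False  # black-box: no gradients used
        self.recommenderModelRequired = False     # black-box: no trained model used

    def posionDataAttack(self):
        """Generate fake user profiles and append them to the data."""
        n_fake, n_items = 50, self.data.shape[1]
        fake = np.zeros((n_fake, n_items))
        for u in range(n_fake):
            # Random-injection baseline: each fake user rates 10 random items.
            rated = np.random.choice(n_items, size=10, replace=False)
            fake[u, rated] = 1.0
        return np.vstack([self.data, fake])
```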

If you add a recommender method, reimplement the following functions (a skeleton follows the list):

  • init()
  • posionDataAttack()
  • save()
  • predict()
  • evaluate()
  • test()
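
As another hedged sketch, assuming a PyTorch embedding-dot-product recommender: the method names come from the list above, while their signatures and bodies are illustrative assumptions.

```python
import torch

# Hypothetical recommender skeleton; method names follow the list above,
# signatures and bodies are assumptions.
class MyRecommender:
    def init(self):
        # Build trainable parameters, e.g. user and item embeddings.
        self.user_emb = torch.nn.Embedding(1000, 64)
        self.item_emb = torch.nn.Embedding(2000, 64)

    def posionDataAttack(self):
        # Train on the (possibly poisoned) interactions; training loop omitted.
        ...

    def save(self):
        ...  # checkpoint the best parameters seen so far

    def predict(self, u):
        # Score every item for user u with a dot product.
        return self.user_emb.weight[u] @ self.item_emb.weight.T

    def evaluate(self):
        ...  # validation metrics, e.g. Recall@K / NDCG@K

    def test(self):
        ...  # final metrics on the held-out test set
```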

Download Dataset

Baidu Disk
Link: https://pan.baidu.com/s/1Gw0SI_GZsykPQEngiMvZgA?pwd=akgm
Key: akgm

Google Drive
Link: https://drive.google.com/drive/folders/1QLDickAMEuhi8mUOyAa66dicCTd40CG5?usp=sharing

Requirements

base==1.0.4
numba==0.53.1
numpy==1.18.0
scipy==1.4.1
torch==1.7.1
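
Assuming these pins are shipped as a requirements.txt at the repository root, they can be installed in one step with pip install -r requirements.txt.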

Reference

If you find this repo helpful to your research, please cite our paper.

@article{wang2024poisoning,
  title={Poisoning Attacks against Recommender Systems: A Survey},
  author={Wang, Zongwei and Gao, Min and Yu, Junliang and Ma, Hao and Yin, Hongzhi and Sadiq, Shazia},
  journal={arXiv preprint arXiv:2401.01527},
  year={2024}
}
