NarcissusInMirror / DropoutNet

Code for the NeurIPS'17 paper "DropoutNet: Addressing Cold Start in Recommender Systems"

NeurIPS'17 DropoutNet: Addressing Cold Start in Recommender Systems

Authors: Maksims Volkovs, Guangwei Yu, Tomi Poutanen
[paper]

Introduction

This repository contains the full implementation of the DropoutNet model, including both training and evaluation routines. We also provide the ACM RecSys 2017 Challenge dataset, which we further split into three subsets for warm-start, user cold-start, and item cold-start evaluation. The aim is to train a single model that can be applied to all three tasks; we report validation accuracy on each task during training.
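The core idea of the model, dropping out the preference input during training so the network learns to fall back on content features, can be sketched roughly as follows. This is an illustrative numpy sketch, not the repository's actual TensorFlow code; the function name and the 0.5 rate are made up:

```python
import numpy as np

def dropout_preference(pref, dropout_rate, rng):
    """Zero out the entire preference vector for a random subset of rows,
    simulating cold-start users/items so the network learns to rely on
    content features when preferences are missing (illustrative only)."""
    keep = rng.random(pref.shape[0]) >= dropout_rate
    return pref * keep[:, None]

rng = np.random.default_rng(0)
batch = rng.normal(size=(8, 200)).astype(np.float32)  # stand-in WMF vectors
dropped = dropout_preference(batch, dropout_rate=0.5, rng=rng)
```

Each row is either kept intact or zeroed whole, mimicking a user or item with no preference vector at inference time.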

Furthermore, per request, we also provide scripts and all necessary data to run the Citeulike cold-start experiment. See the Citeulike section below for further details, as well as links to the packaged data.

Environment

The Python code was developed and tested in the following environment:

  • python 2.7
  • tensorflow-gpu 1.3.0
  • Intel Xeon E5-2630
  • 128GB RAM (around 30GB is required)
  • Titan X (Pascal) 12GB, driver ver. 384.81
  • CUDA 9 and CUDNN 7

Dataset

To run the model, download the dataset from here. With this dataset we have also included a pre-trained weighted matrix factorization (WMF) model [Hu et al., 2008], which is used as the preference input to DropoutNet. WMF produces competitive performance on warm start but doesn't generalize to cold start, so this code demonstrates how to apply DropoutNet to add cold-start capability to WMF. The format of the data is as follows:

interactions are stored in csv as:
  <USER_ID>,<ITEM_ID>,<INTERACTION_TYPE>,<TIMESTAMP>
where INTERACTION_TYPE is one of:
  0: impression
  1: click
  2: bookmark
  3: reply
  5: recruiter interest
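A minimal way to read this format, using only the standard library (the sample rows and timestamps below are made up for illustration):

```python
import csv
import io

# Interaction type codes as documented above.
INTERACTION_TYPES = {
    0: "impression",
    1: "click",
    2: "bookmark",
    3: "reply",
    5: "recruiter interest",
}

def read_interactions(fobj):
    """Yield (user_id, item_id, interaction_type, timestamp) tuples
    from a <USER_ID>,<ITEM_ID>,<INTERACTION_TYPE>,<TIMESTAMP> csv."""
    for user_id, item_id, itype, ts in csv.reader(fobj):
        yield int(user_id), int(item_id), int(itype), int(ts)

# Stand-in for an open interactions file.
sample = io.StringIO("7,42,1,1489331200\n7,99,5,1489331300\n")
rows = list(read_interactions(sample))
```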

recsys2017.pub				
└─ eval					// use path to this folder in --data-dir
   ├─ trained				// WMF model
   │  └─ warm				
   │     ├─ U.csv.bin			// numpy-binarized WMF user preference latent vectors (U), 1497021 x 200
   │     └─ V.csv.bin			// numpy-binarized WMF item preference latent vectors (V), 1306055 x 200
   ├─ warm				
   │  ├─ test_cold_item.csv		// validation interactions for item cold start, 199028 lines
   │  ├─ test_cold_item_item_ids.csv	// target item ids for item cold start, one id per line, 49975 lines
   │  ├─ test_cold_user.csv    		// validation interactions for user cold start, 169480 lines
   │  ├─ test_cold_user_item_ids.csv	// target item ids for user cold start, one id per line, 42153 lines (per the eval loader code, the interaction format here appears to be <ITEM_ID>,<USER_ID>,<INTERACTION_TYPE>,<TIMESTAMP>)
   │  ├─ test_warm.csv			// validation interactions for warm start, 456121 lines
   │  ├─ test_warm_item_ids.csv		// target item ids for warm start, 62435 lines
   │  └─ train.csv			// training interactions, 19433737 lines
   ├─ item_features_0based.txt		// item features in libsvm format, 1497021 x 831
   └─ user_features_0based.txt		// user features in libsvm format, 1306055 x 2738

The libsvm training and validation data files have the following format:
[label] [index1]:[value1] [index2]:[value2] …
[label] [index1]:[value1] [index2]:[value2] …
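As a sketch of how these files might be read with numpy and the standard library. Note the float32 dtype for the .bin files is an assumption here; check the repository's data loader for the actual dtype and byte order:

```python
import os
import tempfile

import numpy as np

def load_latent(path, dim=200, dtype=np.float32):
    """Load a numpy-binarized latent matrix of shape (?, dim).
    float32 is an assumption; verify against the repo's loader."""
    return np.fromfile(path, dtype=dtype).reshape(-1, dim)

def parse_libsvm_line(line, n_features):
    """Parse one '[label] [index]:[value] ...' line into a dense row
    (0-based indices, matching the *_0based.txt naming)."""
    row = np.zeros(n_features, dtype=np.float32)
    label, *tokens = line.split()
    for tok in tokens:
        idx, val = tok.split(":")
        row[int(idx)] = float(val)
    return label, row

# Demo with a tiny synthetic .bin file (2 vectors of dimension 200).
vecs = np.arange(400, dtype=np.float32)
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    vecs.tofile(f)
mat = load_latent(f.name)
os.remove(f.name)

label, row = parse_libsvm_line("0 3:1.5 7:2.0", n_features=10)
```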


Running training code

  1. Download the dataset, extract and keep the directory structure.

  2. Run main.py

    • for usage, run main.py --help
    • the default setting trains a two-layer neural network with hyperparameters selected for the RecSys data
    • by default, the GPU is used for training and the CPU for inference
  3. (Optionally) launch TensorBoard to monitor progress with tensorboard --logdir=<log_path>

During training, recall@50,100,...,500 is shown every 50K updates for the warm-start, user cold-start, and item cold-start validation sets.
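As an illustration of the metric, recall@k for one user can be computed as follows. This is a minimal sketch, not the repository's evaluation code:

```python
import numpy as np

def recall_at_k(scores, relevant, k):
    """Fraction of the relevant items that appear in the top-k by score.
    scores: 1-D array over all candidate items; relevant: set of indices."""
    if not relevant:
        return 0.0
    top_k = set(np.argsort(-scores)[:k].tolist())
    return len(relevant & top_k) / len(relevant)

scores = np.array([0.1, 0.9, 0.4, 0.8, 0.05])
r = recall_at_k(scores, relevant={1, 4}, k=3)  # item 1 ranks in the top 3, item 4 does not -> 0.5
```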

Notes:

  • Make sure --data-dir points to the eval/ folder, not the root
  • On our environment (described above) 50K updates takes approximately 14 minutes with the default GPU/CPU setting.
  • By default, training happens on the GPU while inference and batch generation are on the CPU.

Validation Curves

Citeulike

In addition to RecSys, we also provide a pipeline for the publicly available Citeulike data. Note that, as mentioned in the paper, we evaluate cold start the same way as the CTR paper, while the warm-start evaluation is modified. For convenience, we have provided our evaluation splits for both cold and warm start, item features, as well as the WMF user and item preference latent vectors, available here.

The Citeulike warm and cold models are trained separately, as their validation sets differ. Use the scripts main_cold_citeu.py and main_warm_citeu.py to run the experiments on the Citeulike dataset.

Point --data-dir to your eval folder after extracting citeu.tar.gz. Sample training runs with their respective validation performance, reported every 1000 updates, are shown below.
