LLMSRec_Syn (Updating)

Code for Our NAACL Findings 2024 paper "The Whole is Better than the Sum: Using Aggregated Demonstrations in In-Context Learning for Sequential Recommendation"

πŸš€ Quick Start

  1. Write your OpenAI API key into LLMSRec_Syn/openai_api.yaml.
  2. Unzip dataset files.
    cd LLMSRec_Syn/dataset/ml-1m/; unzip ml-1m.inter.zip
    cd LLMSRec_Syn/dataset/Games/; unzip Games.inter.zip
    For data preparation details, please refer to LLMRank's [data-preparation].
  3. Install dependencies.
    pip install -r requirements.txt
  4. Evaluate ChatGPT's zero-shot ranking abilities on the ML-1M dataset.
    cd LLMSRec_Syn/
    python evaluate.py -m <model> -d ML-1M
    where <model> is one of Rank_Aggregated (ours), Rank_Nearest, or Rank_Fixed.
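The Rank_Aggregated variant reflects the paper's core idea: instead of one demonstration per similar user, several users' interaction sequences are merged into a single aggregated demonstration for in-context ranking. The sketch below is a hypothetical illustration of that prompt-construction pattern; the function names, prompt wording, and data format are assumptions for exposition, not the repository's actual code.

```python
# Illustrative sketch of aggregated-demonstration prompt construction.
# All names and the prompt format here are hypothetical.

def build_aggregated_demo(histories, target_items):
    """Merge several similar users' histories into ONE demonstration."""
    lines = ["Example (aggregated from similar users):"]
    for i, (history, target) in enumerate(zip(histories, target_items), 1):
        lines.append(f"User {i} watched: {', '.join(history)}")
        lines.append(f"User {i} next watched: {target}")
    return "\n".join(lines)

def build_ranking_prompt(demo, user_history, candidate_items):
    """Prepend the aggregated demonstration to the ranking query."""
    return (
        f"{demo}\n\n"
        f"Now rank the candidate movies for this user.\n"
        f"User watched: {', '.join(user_history)}\n"
        f"Candidates: {', '.join(candidate_items)}\n"
        f"Ranked list:"
    )

demo = build_aggregated_demo(
    histories=[["Toy Story", "Aladdin"], ["Heat", "Se7en"]],
    target_items=["The Lion King", "Fargo"],
)
prompt = build_ranking_prompt(demo, ["Titanic"], ["Braveheart", "Casablanca"])
print(prompt)
```

The resulting prompt would be sent to the LLM (e.g., via the key configured in openai_api.yaml), and the model's ranked list parsed back for evaluation.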

🌟 Cite Us

Please cite our paper if you find this code helpful.

The experiments are conducted using the open-source recommendation libraries RecBole and LLMRank.

About


https://arxiv.org/abs/2403.10135

License: MIT

