ESCI Challenge for Improving Product Search - KDD CUP 2022: Baselines

This is an open source implementation of the baselines presented in the Amazon Product Search KDD CUP 2022. Challenge home page: https://www.aicrowd.com/challenges/esci-challenge-for-improving-product-search

Requirements

We ran the baseline experiments in an environment with Python 3.6 and the package dependencies listed below:

aicrowd-cli==0.1.15
numpy==1.19.2
pandas==1.1.5
torch==1.7.1
transformers==4.16.2
scikit-learn==0.24.1
sentence-transformers==2.1.0

To install the dependencies, run the following command:

pip install -r requirements.txt

Download data

Before launching the script below, you need to log in to AIcrowd with the Python client: aicrowd login.

The script below downloads all the files for the three tasks using the aicrowd client.

cd data/
./download-data.sh
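As a quick sanity check after downloading, a short pandas snippet can be used to inspect one of the fetched files. The path below is only a placeholder for illustration, not the actual file name produced by download-data.sh:

```python
import pandas as pd

# Placeholder path: replace with one of the CSV files fetched by download-data.sh.
train_df = pd.read_csv("data/train.csv")

print(train_df.shape)
print(train_df.head())
```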

Reproduce published results

For each task K, we provide two scripts: one for training the model (and, for tasks 2 and 3, preprocessing the data), launch-experiments-taskK.sh; and a second for generating predictions on the public test set with the model trained in the previous step, launch-predictions-taskK.sh.

Task 1 - Query Product Ranking

For task 1, we fine-tuned three models, one for each query_locale.

For the us locale we fine-tuned MS MARCO Cross-Encoders; for the es and jp locales, multilingual MPNet. We used the query and the product title as input for these models.

cd ranking/
./launch-experiments-task1.sh
./launch-predictions-task1.sh
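To illustrate the task 1 setup, the sketch below scores query/title pairs with a sentence-transformers Cross-Encoder and ranks products by the predicted relevance. The public MS MARCO checkpoint named here is an assumption for illustration; launch-experiments-task1.sh fine-tunes its own models per locale.

```python
from sentence_transformers import CrossEncoder

# Assumption: a public MS MARCO cross-encoder checkpoint; the baseline
# scripts fine-tune their own models for each locale.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "noise cancelling headphones"
product_titles = [
    "Wireless Over-Ear Headphones with Active Noise Cancellation",
    "USB-C Charging Cable, 2 m",
    "In-Ear Earbuds with Microphone",
]

# Score each (query, title) pair and rank products by decreasing relevance.
scores = model.predict([(query, title) for title in product_titles])
ranked = sorted(zip(product_titles, scores), key=lambda x: x[1], reverse=True)
for title, score in ranked:
    print(f"{score:.3f}  {title}")
```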

Task 2 - Multiclass Product Classification

For task 2, we trained a multilayer perceptron (MLP) classifier whose input is the concatenation of the representations produced by multilingual BERT base for the query and the product title.

cd classification_identification/
./launch-experiments-task2.sh
./launch-predictions-task2.sh
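A minimal sketch of the task 2 architecture described above: the query and the product title are each encoded with multilingual BERT, the two [CLS] representations are concatenated, and an MLP predicts one of the four ESCI classes. The checkpoint name, hidden layer size, and other hyperparameters here are assumptions for illustration; the actual configuration is set by launch-experiments-task2.sh.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Assumption: multilingual BERT base checkpoint; the baseline may use a
# different variant configured in the training script.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-uncased")

def cls_embedding(text: str) -> torch.Tensor:
    """Return the [CLS] representation of a piece of text."""
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :]  # shape (1, hidden_size)

# MLP classifier over the concatenated query/title representations.
hidden_size = encoder.config.hidden_size
mlp = nn.Sequential(
    nn.Linear(2 * hidden_size, 256),
    nn.ReLU(),
    nn.Linear(256, 4),  # exact, substitute, complement, irrelevant
)

query_repr = cls_embedding("espresso machine")
title_repr = cls_embedding("Stainless Steel Espresso Maker, 15 Bar Pump")
logits = mlp(torch.cat([query_repr, title_repr], dim=-1))
print(logits.softmax(dim=-1))
```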

Task 3 - Product Substitute Identification

For task 3, we followed the same approach as in task 2.

cd classification_identification/
./launch-experiments-task3.sh
./launch-predictions-task3.sh
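Task 3 reduces to binary classification with the same query/title representations: the model only has to decide whether a product is a substitute or not. A small sketch of the label mapping one might use, assuming the ESCI label is provided in an esci_label column (column name and values are assumptions for illustration):

```python
import pandas as pd

# Assumption: a DataFrame with an "esci_label" column holding the four ESCI
# classes; only "substitute" is treated as the positive class for task 3.
df = pd.DataFrame({"esci_label": ["exact", "substitute", "complement", "irrelevant"]})
df["substitute_label"] = (df["esci_label"] == "substitute").astype(int)
print(df)
```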

Results

The following table shows the baseline results obtained on the public test sets of the three tasks.

| Task | Metric | Score |
|------|----------|-------|
| 1 | nDCG | 0.852 |
| 2 | Micro F1 | 0.655 |
| 3 | Micro F1 | 0.780 |
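For reference, both metrics are available in scikit-learn (one of the pinned dependencies). The snippet below only shows the relevant calls on toy data, not the actual evaluation pipeline:

```python
import numpy as np
from sklearn.metrics import f1_score, ndcg_score

# Micro F1 for tasks 2 and 3 (toy labels for illustration).
y_true = [0, 1, 2, 3, 1]
y_pred = [0, 1, 2, 1, 1]
print("Micro F1:", f1_score(y_true, y_pred, average="micro"))

# nDCG for task 1: true relevance gains per query, scored against the
# predicted ranking scores.
true_relevance = np.asarray([[3, 2, 0, 1]])
predicted_scores = np.asarray([[0.9, 0.7, 0.2, 0.4]])
print("nDCG:", ndcg_score(true_relevance, predicted_scores))
```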

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.
