
Domain Generalisation via Imprecise Learning

This repository contains the codebase for the paper "Domain Generalisation via Imprecise Learning", accepted at ICML 2024.

Overview

The paper presents a novel approach to domain generalisation, contending that domain generalisation encompasses both statistical learning and decision-making. Without knowledge of the model operator's notion of generalisation, learners are compelled to make normative judgements on the operator's behalf, which leads to misalignment when learners and model operators are institutionally separated. Leveraging imprecise probability, our proposal, Domain Generalisation via Imprecise Learning (DGIL), allows learners to embrace imprecision during training and empowers model operators to make informed decisions during deployment.

Overview of DGIL
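The core idea above can be illustrated with a toy objective. The snippet below is a minimal sketch, not the paper's exact formulation: it interpolates between the average-case and worst-case risk over environments with a parameter `lam` that a model operator could choose at deployment time.

```python
import numpy as np

def aggregate_risk(env_risks, lam):
    """Interpolate between average-case (lam=0) and worst-case (lam=1) risk.

    This linear interpolation is an illustrative stand-in for the paper's
    aggregation over imprecise preferences, not the exact DGIL objective.
    """
    env_risks = np.asarray(env_risks, dtype=float)
    return (1.0 - lam) * env_risks.mean() + lam * env_risks.max()

risks = [0.10, 0.25, 0.60]          # hypothetical per-environment losses
print(aggregate_risk(risks, 0.0))   # average-case risk, ≈ 0.3167
print(aggregate_risk(risks, 1.0))   # worst-case risk, ≈ 0.60
print(aggregate_risk(risks, 0.5))   # an intermediate preference
```

A learner that hedges over all values of `lam` during training defers the choice of aggregation to the operator, which is the deferral that imprecision buys.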

Installation

To install the necessary dependencies, please follow these steps:

  1. Clone the repository:

    git clone https://github.com/muandet-lab/dgil.git
    cd dgil
  2. Create a virtual environment:

    python -m venv dgil_env
    source dgil_env/bin/activate  # On Windows, use `dgil_env\Scripts\activate`
  3. Install the required packages from the requirements file inside the CMNIST folder:

    pip install -r requirements.txt

Usage

To run the experiments, you can use the provided scripts. Below is an example of how to train and evaluate the model on a specific dataset:

  1. Train the model:

    python train_sandbox.py --config configs/experiment_config.yaml
  2. Evaluate the model:

    python evaluate.py --config configs/experiment_config.yaml --checkpoint path/to/checkpoint.pth

Datasets

The codebase supports benchmarking on CMNIST as well as additional simulation experiments.
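For reference, Colored MNIST environments are typically built by correlating digit color with a noisy binary label, with the correlation strength varying per environment. The sketch below follows the standard construction from the invariant-learning literature; the exact modification used in this repo may differ, and the function name is ours.

```python
import numpy as np

def make_cmnist_env(images, digits, color_flip_prob, label_flip_prob=0.25, seed=None):
    """Build one Colored-MNIST-style environment (illustrative, not repo code).

    images: (N, 28, 28) grayscale array; digits: (N,) integer labels.
    The binary label is digit parity with label noise; color agrees with
    the noisy label except with probability `color_flip_prob`, so each
    environment has a different spurious color-label correlation.
    """
    rng = np.random.default_rng(seed)
    y = (digits % 2).astype(int)
    y ^= (rng.random(len(y)) < label_flip_prob).astype(int)   # label noise
    color = y ^ (rng.random(len(y)) < color_flip_prob).astype(int)
    # Two-channel images: place each digit in the red or green channel.
    x = np.zeros((len(y), 2, 28, 28), dtype=images.dtype)
    x[np.arange(len(y)), color] = images
    return x, y

# Toy usage with random stand-in "images":
imgs = np.random.default_rng(0).random((8, 28, 28))
x, y = make_cmnist_env(imgs, np.arange(8) % 10, color_flip_prob=0.1, seed=0)
```

Environments with different `color_flip_prob` values make color a spurious feature, which is what makes CMNIST a domain-generalisation benchmark.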

Contributing

We welcome contributions from the community. If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request.

Results

We performed experiments on a modified version of CMNIST, with the experimental setup designed as shown below.

Overview of DGIL

While average-case and worst-case learners perform well on majority and minority environments respectively, DGIL obtains the lowest regret across environments.

Lowest Regret
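The regret comparison above can be made concrete with a small helper. The numbers below are hypothetical and only illustrate the metric: a model's regret in an environment is its risk minus the best risk any compared model achieves there.

```python
import numpy as np

def per_env_regret(risk_table):
    """risk_table: dict mapping model name -> list of per-environment risks.

    Returns each model's regret per environment, i.e. its risk minus the
    best risk achieved in that environment (illustrative, not repo code).
    """
    names = list(risk_table)
    risks = np.array([risk_table[n] for n in names], dtype=float)
    best = risks.min(axis=0)                      # best risk per environment
    return {n: risks[i] - best for i, n in enumerate(names)}

# Hypothetical risks on a majority and a minority environment:
risks = {
    "average-case": [0.10, 0.40],   # strong on majority, weak on minority
    "worst-case":   [0.30, 0.20],   # the reverse
    "DGIL":         [0.15, 0.22],   # balanced across both
}
regrets = per_env_regret(risks)
worst_regret = {n: r.max() for n, r in regrets.items()}
```

Under these toy numbers, DGIL's worst-case regret is the smallest of the three, mirroring the qualitative finding reported above.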

Citation

If you use this code in your research, please cite our paper:

@inproceedings{singh2024domain,
  title={Domain Generalisation via Imprecise Learning},
  author={Singh, Anurag and Chau, Siu Lun and Bouabid, Shahine and Muandet, Krikamol},
  booktitle={Proceedings of the International Conference on Machine Learning (ICML)},
  year={2024}
}

Contact

For any questions or inquiries, please contact the authors:
