dcai-lab

Lab assignments for Introduction to Data-Centric AI, MIT IAP 2024 👩🏽‍💻

Home Page: https://dcai.csail.mit.edu/

Lab assignments for Introduction to Data-Centric AI

This repository contains the lab assignments for the Introduction to Data-Centric AI class.

Contributions are most welcome! If you have ideas for improving the labs, please open an issue or submit a pull request.

If you're looking for the 2023 version of the labs, check out the 2023 branch.

The first lab assignment walks you through the ML task of building a text classifier and illustrates the power (and often simplicity) of data-centric approaches.
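
The sketch below is not the lab notebook itself; it is a minimal illustration, using made-up toy reviews, of the kind of pipeline involved: a simple scikit-learn text classifier where the interesting work is cleaning the data rather than tuning the model.

```python
# Minimal sketch (not the official lab solution): a simple text classifier
# where the data-centric step is cleaning the training data before fitting.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

df = pd.DataFrame({
    "text": [
        "great product, works as described",
        "terrible, broke after one day",
        "great product, works as described",   # exact duplicate
        "   ",                                  # empty example
        "absolutely love it",
        "waste of money",
    ],
    "label": ["pos", "neg", "pos", "neg", "pos", "neg"],
})

# Data-centric step: fix the data before touching the model.
df = df.drop_duplicates(subset="text")
df = df[df["text"].str.strip().str.len() > 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(df["text"], df["label"])
print(model.predict(["this one is great", "broke immediately, total waste"]))
```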

This lab guides you through writing your own implementation of automatic label error identification using Confident Learning, the technique taught in the corresponding lecture.
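
As a rough orientation (not the full Confident Learning algorithm and not the lab's reference solution), the sketch below shows the core idea on an invented toy dataset: estimate per-class confidence thresholds from out-of-sample predicted probabilities and flag examples whose given label falls below them while another class clears its threshold.

```python
# Simplified Confident Learning sketch on synthetic data with known label flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Toy data with a few labels flipped on purpose.
X, y_true = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=0)
rng = np.random.default_rng(0)
labels = y_true.copy()
flipped = rng.choice(len(labels), size=25, replace=False)
labels[flipped] = (labels[flipped] + 1) % 3

# Out-of-sample predicted probabilities via cross-validation.
pred_probs = cross_val_predict(LogisticRegression(max_iter=1000), X, labels,
                               cv=5, method="predict_proba")

# Per-class threshold: average self-confidence of examples given that label.
n_classes = pred_probs.shape[1]
thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(n_classes)])

# Flag an example if some *other* class clears its threshold while the given
# label does not.
confident = pred_probs >= thresholds
confident[np.arange(len(labels)), labels] = False
issue_mask = confident.any(axis=1) & (pred_probs[np.arange(len(labels)), labels] < thresholds[labels])
print("flagged", issue_mask.sum(), "potential label errors;",
      (issue_mask & np.isin(np.arange(len(labels)), flipped)).sum(), "were true flips")
```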

In this lab assignment, you analyze an already-collected dataset that has been labeled by multiple annotators.
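
The snippet below is my own illustration (with a fabricated annotation table, not the lab's data) of two basic multi-annotator analyses: consensus labels by majority vote and per-annotator agreement with that consensus.

```python
# Basic multi-annotator analysis sketch. NaN marks examples an annotator skipped.
import numpy as np
import pandas as pd

# rows = examples, columns = annotators; values are class labels or NaN
annotations = pd.DataFrame({
    "annotator_1": [0, 1, 1, 0, np.nan],
    "annotator_2": [0, 1, 0, 0, 1],
    "annotator_3": [0, np.nan, 1, 1, 1],
})

consensus = annotations.mode(axis=1)[0]  # majority vote (ties broken toward the smallest label)
agreement = annotations.apply(lambda col: (col == consensus).sum() / col.notna().sum())
print("consensus labels:\n", consensus.values)
print("per-annotator agreement with consensus:\n", agreement)
```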

In this lab assignment, you try to improve the performance of a given model solely by improving its training data, using some of the strategies covered in the lecture.
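
The sketch below shows the shape of that experiment under simplified assumptions (synthetic data with simulated label noise, a fixed logistic regression model, and one particular cleaning strategy); the lab may use different data and strategies.

```python
# Hold the model fixed; compare test accuracy before vs. after a data improvement.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

X, y_clean = make_classification(n_samples=1000, random_state=0)
rng = np.random.default_rng(1)
y_noisy = y_clean.copy()
flip = rng.choice(len(y_noisy), size=100, replace=False)
y_noisy[flip] = 1 - y_noisy[flip]                     # simulate label noise in the training data

idx_tr, idx_te = train_test_split(np.arange(len(y_clean)), test_size=0.3, random_state=0)
X_tr, y_tr = X[idx_tr], y_noisy[idx_tr]               # noisy training labels
X_te, y_te = X[idx_te], y_clean[idx_te]               # clean test labels for evaluation

baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# One data improvement: drop training points whose out-of-sample confidence in
# their own label is very low (they are likely mislabeled).
probs = cross_val_predict(LogisticRegression(max_iter=1000), X_tr, y_tr,
                          cv=5, method="predict_proba")
keep = probs[np.arange(len(y_tr)), y_tr] > 0.2
cleaned = LogisticRegression(max_iter=1000).fit(X_tr[keep], y_tr[keep]).score(X_te, y_te)
print(f"same model, noisy data: {baseline:.3f}  vs. cleaned data: {cleaned:.3f}")
```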

The lab assignment for this lecture is to implement and compare different methods for identifying outliers. For this lab, we've focused on anomaly detection. You are given a clean training dataset consisting of many pictures of dogs, and an evaluation dataset that contains outliers (non-dogs). Your task is to implement and compare various methods for detecting these outliers. You may implement some of the ideas presented in the corresponding lecture, or you can look up other outlier detection algorithms in the linked references or online.
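
One possible approach (among several the lab accepts) is a KNN-distance score, sketched below. The feature vectors are simulated stand-ins for image embeddings you would extract with a pretrained network, so the snippet runs on its own.

```python
# KNN-distance outlier detection sketch: score evaluation points by their mean
# distance to the k nearest neighbors in the clean (in-distribution) training set.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_feats = rng.normal(0, 1, size=(500, 64))   # "dogs" (in-distribution features)
eval_in = rng.normal(0, 1, size=(100, 64))       # more dogs
eval_out = rng.normal(3, 1, size=(20, 64))       # non-dogs (outliers)
eval_feats = np.vstack([eval_in, eval_out])
is_outlier = np.r_[np.zeros(len(eval_in)), np.ones(len(eval_out))]

knn = NearestNeighbors(n_neighbors=10).fit(train_feats)
dists, _ = knn.kneighbors(eval_feats)
scores = dists.mean(axis=1)                      # larger = more anomalous
print("AUROC:", roc_auc_score(is_outlier, scores))
```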

This lab guides you through an implementation of active learning.
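
For orientation, here is a minimal sketch of pool-based active learning with uncertainty sampling, one common strategy (the lab may cover others). Labels for "queried" points come from the toy ground truth, standing in for a human annotator.

```python
# Pool-based active learning sketch: repeatedly query the least-confident example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X_pool), size=10, replace=False))   # small seed set
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

for _ in range(20):
    model = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    probs = model.predict_proba(X_pool[unlabeled])
    # Query the pool example the current model is least confident about.
    query = unlabeled[np.argmin(probs.max(axis=1))]
    labeled.append(query)
    unlabeled.remove(query)

model = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
print("test accuracy after 20 queried labels:", model.score(X_test, y_test))
```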

This lab guides you through finding issues in a dataset’s features by applying interpretability techniques.
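
The sketch below shows one such interpretability check, permutation importance, used to surface a suspicious feature. The dataset and the leaked column are fabricated for illustration; the lab's data and techniques may differ.

```python
# Permutation importance sketch: a single dominating feature is a red flag
# worth investigating as leakage or a spurious shortcut.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
df = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])
df["leaked_label"] = y  # a feature that should not exist at prediction time

X_tr, X_te, y_tr, y_te = train_test_split(df, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(df.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```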

This lab guides you through prompt engineering: crafting inputs for large language models (LLMs). With these large pre-trained models, even small amounts of data can make them very useful. This lab is also available on Colab.
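
As a small, API-agnostic illustration, the sketch below assembles a few-shot prompt. `call_llm` is a hypothetical placeholder for whichever LLM API the lab uses; only the prompt-construction logic is shown.

```python
# Few-shot prompt construction sketch (made-up examples and task).
def build_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

few_shot_examples = [
    ("I loved this movie, would watch again.", "positive"),
    ("Complete waste of two hours.", "negative"),
]
prompt = build_prompt(few_shot_examples, "The plot dragged but the acting was superb.")
print(prompt)
# response = call_llm(prompt)  # hypothetical API call
```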

The lab assignment for this lecture is to implement a membership inference attack. You are given a trained machine learning model, available as a black-box prediction function. Your task is to devise a method to determine whether or not a given data point was in the training set of this model. You may implement some of the ideas presented in the corresponding lecture, or you can look up other membership inference attack algorithms.
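
The sketch below shows one standard baseline, a confidence-thresholding attack, on a toy target model trained here for illustration (not the black-box model the lab provides). The attacker only uses the model's predicted probabilities.

```python
# Confidence-based membership inference sketch: members tend to receive higher
# confidence on their true label than non-members, especially for overfit models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# "Black-box" target model (an overfit model leaks more membership signal).
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
predict = target.predict_proba  # the only access the attacker gets

# Attack score: the model's confidence in the true label of the queried point.
conf_members = predict(X_train)[np.arange(len(y_train)), y_train]
conf_nonmembers = predict(X_out)[np.arange(len(y_out)), y_out]

scores = np.concatenate([conf_members, conf_nonmembers])
is_member = np.r_[np.ones(len(conf_members)), np.zeros(len(conf_nonmembers))]
print("attack AUROC:", roc_auc_score(is_member, scores))
```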

License

Copyright (c) by the instructors of Introduction to Data-Centric AI (dcai.csail.mit.edu).

dcai-lab is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

dcai-lab is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

See the GNU Affero General Public License for details.
