In this notebook, I played around with the new CodeCarbon 💨 package integrated into Comet ☄️, using Hugging Face 🤗 to show the carbon footprint of a fine-tuned language model.
In 2019, the paper "Energy and Policy Considerations for Deep Learning in NLP" appeared, discussing the carbon footprint of machine learning models. It gave the community food for thought and started a broader discussion about the long-term effects and consequences of training large models.
The CodeCarbon 💨 project is a software package that estimates the carbon footprint of your code. It is already integrated into Comet ☄️, a tool for tracking and analyzing your models (similar to wandb).
To exemplify the use of CodeCarbon 💨, I reused part of the code from this Hugging Face notebook to define a simple fine-tuning task for a language model (if you want, you can try out any other task).
Note: The current Hugging Face integration seems to be a bit buggy when it comes to logging experiments in the right format to obtain a carbon score.