Dolma

Data and tools for generating and inspecting OLMo pre-training data.

Home page: https://allenai.github.io/dolma/


[Logo: "dolma" in yellow, rounded lowercase letters on a blue background]

Dolma is two things:

  1. Dolma Dataset: an open dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials.
  2. Dolma Toolkit: a high-performance toolkit for curating datasets for language modeling.

Dolma Dataset

Dolma is an open dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials. It was created as a training corpus for OLMo, a language model from the Allen Institute for AI (AI2).

Dolma is available for download on the HuggingFace 🤗 Hub: huggingface.co/datasets/allenai/dolma. To access Dolma, users must agree to the terms of the AI2 ImpACT License for Medium Risk Artifacts. Once you have agreed, you can follow the instructions here to download it.
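Because the full corpus spans several terabytes, streaming is often more practical than a full download. Below is a minimal sketch using the Hugging Face datasets library; it assumes you have already accepted the license on the Hub and authenticated locally (e.g. with huggingface-cli login). The "text" field name is an assumption to check against the dataset card.

```python
# Minimal sketch: stream a few Dolma documents instead of downloading
# the full multi-terabyte corpus. Assumes the AI2 ImpACT license has
# been accepted on the Hub and you are logged in (huggingface-cli login).
from datasets import load_dataset

# streaming=True iterates over remote shards lazily; depending on the
# dataset version you may also need to pass a specific config via name=...
dolma = load_dataset("allenai/dolma", split="train", streaming=True)

for i, doc in enumerate(dolma):
    # Records are plain dicts; the "text" field is an assumption --
    # consult the dataset card for the exact schema.
    print(doc.get("text", "")[:200])
    if i >= 2:  # stop after a few documents
        break
```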

You can also read more about Dolma in our announcement, as well as by consulting its data sheet.

Dolma Toolkit

Dolma is a toolkit to curate large datasets for (pre-)training ML models. Its key features are:

  1. High Performance ⚡: Can process billions of documents concurrently thanks to built-in parallelism.
  2. Portability 🧳: Works on a single machine, a cluster, or a cloud environment.
  3. Built-In Taggers 🏷: Includes ready-to-use taggers commonly used to curate datasets such as Gopher, C4, and OpenWebText.
  4. Fast Deduplication 🗑: Speedy document deduplication using a Rust Bloom filter (see the conceptual sketch after this list).
  5. Extensibility 🧩 & Cloud Support ☁: Supports custom taggers and AWS S3-compatible locations.
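To make the deduplication idea concrete, here is a toy Bloom-filter deduper in Python. This is a conceptual sketch only: Dolma's actual deduper is implemented in Rust, and the class and parameters below (BloomFilter, m_bits, k_hashes) are illustrative, not part of the toolkit's API. A Bloom filter never misses a true duplicate; the trade-off is a small, tunable chance of dropping a novel document as a false positive.

```python
# Conceptual sketch of Bloom-filter deduplication (illustration only;
# this toy mirrors the idea, not Dolma's Rust implementation).
import hashlib


class BloomFilter:
    def __init__(self, m_bits: int = 1 << 20, k_hashes: int = 5):
        self.m = m_bits                      # size of the bit array
        self.k = k_hashes                    # number of hash functions
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: str):
        # Derive k positions by salting a single hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))


def dedupe(docs):
    """Yield documents whose text has not (probably) been seen before."""
    seen = BloomFilter()
    for doc in docs:
        if doc in seen:      # probable duplicate -> drop
            continue
        seen.add(doc)
        yield doc


print(list(dedupe(["a", "b", "a", "c"])))  # -> ['a', 'b', 'c']
```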

To install, simply type pip install dolma in your terminal.

To learn more about how to use the Dolma Toolkit, please visit the documentation.
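As an example of the extensibility point above, the following sketch defines a custom tagger following the pattern shown in the toolkit's documentation. The import paths and the predict signature reflect that documented pattern but may differ between versions, so verify them against the docs for the release you install; the tagger name and scoring logic here are hypothetical.

```python
# Sketch of a custom tagger, modeled on the pattern in the Dolma docs;
# module paths and signatures are assumptions to verify against the
# version you install.
from dolma.core.data_types import DocResult, Document, Span
from dolma import add_tagger, BaseTagger


@add_tagger("uppercase_fraction_v1")  # hypothetical tagger name
class UppercaseFractionTagger(BaseTagger):
    """Scores each document by the fraction of uppercase letters."""

    def predict(self, doc: Document) -> DocResult:
        letters = [c for c in doc.text if c.isalpha()]
        score = (sum(c.isupper() for c in letters) / len(letters)) if letters else 0.0
        span = Span(start=0, end=len(doc.text), type="uppercase_fraction", score=score)
        return DocResult(doc=doc, spans=[span])
```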

Citation

If you use the Dolma dataset or toolkit, please cite the following items:

@techreport{DolmaDataset,
    author = {Soldaini, Luca and Kinney, Rodney and Bhagia, Akshita and Schwenk, Dustin and Atkinson, David and Authur, Russell and Chandu, Khyathi and Dumas, Jennifer and Lucy, Li and Lyu, Xinxi and Magnusson, Ian and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Ravichander, Abhilasha and Shen, Zejiang and Strubell, Emma and Subramani, Nishant and Tafjord, Oyvind and Walsh, Evan Pete and Hajishirzi, Hannaneh and Smith, Noah A. and Zettlemoyer, Luke and Beltagy, Iz and Groeneveld, Dirk and Dodge, Jesse and Lo, Kyle},
    title = {{Dolma: An Open Corpus of 3 Trillion Tokens for Language Model Pretraining Research}},
    institution = {{Allen Institute for AI}},
    year = {2023},
    note = {Released under ImpACT License as Medium Risk artifact, \url{https://github.com/allenai/dolma}}
}
@software{DolmaToolkit,
    author = {Soldaini, Luca and Lo, Kyle and Kinney, Rodney and Naik, Aakanksha and Ravichander, Abhilasha and Bhagia, Akshita and Groeneveld, Dirk and Schwenk, Dustin and Magnusson, Ian and Chandu, Khyathi},
    title = {{The Dolma Toolkit}},
    year = {2023},
    note = {{Apache 2.0 License, Version \texttt{0.9.0}, \url{https://github.com/allenai/dolma}}}
}
