togethercomputer / RedPajama-Data

The RedPajama-Data repository contains code for preparing large datasets for training large language models.

Token counts

timsueberkrueb opened this issue · comments

Hey, thank you for making this dataset available to the community.
I'm wondering how you estimated the token counts in the table in the README and the blog post. In particular, do you have the corresponding numbers in bytes or Unicode codepoints?
Thanks a lot in advance.

Hi @timsueberkrueb -- we used the Mistral-7B tokenizer and tokenized a subset of 100M documents. We then used these token counts to extrapolate to the full dataset. You can check out the code used to count tokens here: https://github.com/togethercomputer/RedPajama-Data/blob/main/app/src/token_count.py.
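
For anyone who wants to reproduce a rough estimate, here is a minimal sketch of the count-and-extrapolate approach. It is not the repository's `token_count.py`; it assumes a Hugging Face tokenizer (the checkpoint name below is an illustrative choice) and newline-delimited JSON shards with a `raw_content` text field, so adjust both to the actual setup.

```python
# Sketch only: estimating a total token count from a document sample.
# Assumes a Hugging Face tokenizer and JSONL input with a "raw_content"
# field; the repo's actual logic lives in app/src/token_count.py.
import json

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")


def count_tokens(jsonl_path: str) -> tuple[int, int]:
    """Return (num_documents, num_tokens) for one JSONL shard."""
    docs, tokens = 0, 0
    with open(jsonl_path, "r", encoding="utf-8") as f:
        for line in f:
            text = json.loads(line)["raw_content"]
            tokens += len(tokenizer.encode(text, add_special_tokens=False))
            docs += 1
    return docs, tokens


# Extrapolate from the sampled shard to the full dataset using the
# (known) total number of documents; the value here is a placeholder.
sample_docs, sample_tokens = count_tokens("sample_shard.jsonl")
total_docs = 100_000_000  # replace with the real document count
estimated_total_tokens = sample_tokens / sample_docs * total_docs
print(f"~{estimated_total_tokens:.3e} tokens")
```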

In particular, do you have the corresponding numbers in bytes or Unicode codepoints?

What do you mean by this? Are you referring to a specific tokenizer?

Thank you @mauriceweber!

What do you mean by this? Are you referring to a specific tokenizer?

I was wondering about the total amount of text data per language (excluding metadata, etc.) prior to tokenization, i.e., in bytes or Unicode codepoints.
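
In case it helps, pre-tokenization volume can be measured directly from the raw documents. The following is a minimal sketch (not code from this repository) that accumulates UTF-8 bytes and Unicode codepoints per language; it assumes JSONL documents with a `raw_content` text field and a top-level `language` field, which may differ from the actual schema.

```python
# Sketch only: per-language byte / codepoint totals before tokenization.
# Assumes JSONL documents with a "raw_content" text field and a "language"
# key; adjust the field names to the dataset's actual schema.
import json
from collections import defaultdict


def size_stats(jsonl_path: str) -> dict[str, dict[str, int]]:
    stats = defaultdict(lambda: {"bytes": 0, "codepoints": 0})
    with open(jsonl_path, "r", encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            lang = doc.get("language", "unknown")
            text = doc["raw_content"]
            stats[lang]["bytes"] += len(text.encode("utf-8"))  # UTF-8 bytes
            stats[lang]["codepoints"] += len(text)  # Unicode codepoints
    return dict(stats)


for lang, s in size_stats("sample_shard.jsonl").items():
    print(f"{lang}: {s['bytes']:,} bytes, {s['codepoints']:,} codepoints")
```

Summing these per-shard totals (or extrapolating from a sample, as with the token counts) would give the pre-tokenization size per language.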