google / seqio

Task-based datasets, preprocessing, and evaluation for sequence models.

Dataset seeking for restarting from a crashed T5X run using HuggingFace datasets

versae opened this issue

Re-opening here as suggested by @adarob in google-research/t5x#421 (comment).

I wrote some hacky support for HuggingFace datasets using seqio.FunctionDataSource, specifically for pretraining and further pretraining models using T5X.

import functools

import seqio
import tensorflow as tf
from datasets import load_dataset


def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None):
    """Yields raw text examples from a (streaming) HuggingFace dataset."""
    dataset = load_dataset(**dataset_params)
    if shuffle:
        if seed:
            dataset = dataset.shuffle(seed=seed)
        else:
            dataset = dataset.shuffle()
    while True:  # TODO: add for...loop over num_epochs
        for item in dataset[str(split)]:
            yield item[column]


def dataset_fn(split, shuffle_files, seed=None, dataset_params=None):
    return tf.data.Dataset.from_generator(
        functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),
        output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name)
    )

dataset_name = 'NbAiLab/NCC'
dataset_params = {"path": dataset_name, "streaming": True}
dataset_shapes = {"train": 20830348, "validation": 473079}
source = seqio.FunctionDataSource(
    dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params),
    splits=("train", "validation"),
    caching_permitted=False,
    num_input_examples=dataset_shapes,
)

Unfortunately, since I face constant random crashes during training (google-research/t5x#366), I need a way to seek to the correct dataset batch so that training can properly continue.

I see there's a continue_from_last_checkpoint variable in get_dataset(), but it seems it is not used for anything yet.

Is there a way to pass the needed information to get_dataset_fn() so I can write the logic without using any hard-coded global variables?

@versae hey there, did you make any progress on this? Actually, I too am facing issues making seqio compatible with HuggingFace for pre-training LMs. If possible, could you also please send me a link to your HF pretraining script?

Hi @StephennFernandes, no, not really 😢 Right now we have all our code here, but we do a manual calculation (a guesstimate) of how many samples to skip at each restart.
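
Roughly, that skip logic looks like the following (a minimal sketch only; the skip_examples parameter and the restored_step * batch_size estimate are our own guesswork, not something seqio provides):

import functools
import itertools

import tensorflow as tf
from datasets import load_dataset


def gen_dataset(split, shuffle=False, seed=None, column="text",
                dataset_params=None, skip_examples=0):
    dataset = load_dataset(**dataset_params)
    if shuffle:
        dataset = dataset.shuffle(seed=seed) if seed else dataset.shuffle()
    while True:
        examples = (item[column] for item in dataset[str(split)])
        # Drop the examples that were (approximately) already consumed before the
        # crash, e.g. skip_examples is roughly restored_step * batch_size.
        yield from itertools.islice(examples, skip_examples, None)
        # Only skip on the first pass over the data after the restart.
        skip_examples = 0


def dataset_fn(split, shuffle_files, seed=None, dataset_params=None, skip_examples=0):
    return tf.data.Dataset.from_generator(
        functools.partial(gen_dataset, split, shuffle_files, seed,
                          dataset_params=dataset_params, skip_examples=skip_examples),
        output_signature=tf.TensorSpec(shape=(), dtype=tf.string),
    )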

Hey there, thanks a ton for replying. Sadly, the repo you linked isn't available; apparently it's private. Could you please make it public and/or accessible to me?

Sorry, can't. But here's a similar one I've also been working on: https://github.com/bertin-project/bertin-t5x

Hi, please take a look at https://github.com/google-research/t5x/blob/main/docs/usage/pretrain.md#deterministic-training-no-toc, which has instructions for making your data pipeline deterministic, i.e. reproducible and recoverable.

@gauravmishra, seqio's dataset_fn returns tf.data.Dataset.from_generator(...), but I need the output from seqio to be compatible with HuggingFace's Transformers training script. Is there a way to return it in some other format that is compatible with HuggingFace's training? BTW, I would be making a mixture of multiple languages.

Hi Stephen, currently SeqIO only supports tf.data.Datasets as Task/Mixture outputs. The way to go would be to create a shim to convert tf.data.Datasets into a HF-compatible format (this may already exist, but I'm not sure).
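
Something along these lines might work as a starting point (an unverified sketch assuming a PyTorch-based HF Trainer; the SeqIOIterableDataset name is just a placeholder):

import torch


class SeqIOIterableDataset(torch.utils.data.IterableDataset):
    """Wraps a SeqIO tf.data.Dataset so a PyTorch/HF Trainer can iterate over it."""

    def __init__(self, tf_dataset):
        super().__init__()
        self._tf_dataset = tf_dataset

    def __iter__(self):
        # as_numpy_iterator() yields dicts of numpy arrays (token IDs after SeqIO
        # preprocessing); numeric features convert directly to torch tensors.
        # String features would need decoding before they can be used here.
        for ex in self._tf_dataset.as_numpy_iterator():
            yield {k: torch.as_tensor(v) for k, v in ex.items()}


# Usage: wrap whatever seqio.get_mixture_or_task(...).get_dataset(...) returns,
# e.g. hf_ready = SeqIOIterableDataset(seqio_dataset), and pass it to the Trainer.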

@gauravmishra, thanks for replying. Actually, I have built a hacky way of returning the output from seqio.get_mixture_or_task().get_dataset() via .as_numpy_iterator(), which gives me numpy values.

The following is the code for the same.

import functools

import seqio
import tensorflow as tf
import t5.data
from datasets import load_dataset
from t5.data import postprocessors
from t5.data import preprocessors
from t5.evaluation import metrics
from seqio import FunctionDataSource, utils

TaskRegistry = seqio.TaskRegistry

DEFAULT_OUTPUT_FEATURES = {
    "inputs": seqio.Feature(
        vocabulary=t5.data.get_default_vocabulary(), add_eos=True,
        required=False),
    "targets": seqio.Feature(
        vocabulary=t5.data.get_default_vocabulary(), add_eos=True)
}


def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None):
    dataset = load_dataset(**dataset_params)
    if shuffle:
        if seed:
            dataset = dataset.shuffle(seed=seed)
        else:
            dataset = dataset.shuffle()
    while True:
        for item in dataset[str(split)]:
            yield item[column]


def dataset_fn(split, shuffle_files, seed=None, dataset_params=None):
    return tf.data.Dataset.from_generator(
        functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),
        output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name)
    )


@utils.map_over_dataset
def target_to_key(x, key_map, target_key):
    """Assign the value from the dataset to target_key in key_map"""
    return {**key_map, target_key: x}



dataset_name = 'oscar-corpus/OSCAR-2109'
subset= 'mr'
dataset_params = {"path": dataset_name, "language":subset, "use_auth_token":True}
dataset_shapes = None

TaskRegistry.add(
    "oscar_marathi_corpus",
    source=seqio.FunctionDataSource(
        dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params),
        splits=("train", "validation"),
        caching_permitted=False,
        num_input_examples=dataset_shapes,
    ),
    preprocessors=[
        functools.partial(
            target_to_key, key_map={
                "inputs": None,
                "targets": None,
            }, target_key="targets"),
        seqio.preprocessors.tokenize,
        # seqio.CacheDatasetPlaceholder(),
        preprocessors.span_corruption,
        seqio.preprocessors.append_eos_after_trim,
    ],
    output_features={"targets": seqio.Feature(vocabulary=t5.data.get_default_vocabulary(), add_eos=True)},
    metric_fns=[]
)

dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset(
    sequence_length={"inputs": 512, "targets": 512},
    split="train",
    shuffle=True,
    num_epochs=1,
    use_cached=False,
    seed=42
)
for _, ex in zip(range(5), dataset.as_numpy_iterator()):
  print(ex) 

But the thing is, it returns the values as input IDs, after the preprocessing has been done on the dataset, whereas the HuggingFace T5 trainer already takes care of all the preprocessing and other steps needed.

I actually need the output as raw text strings, which I could then preprocess in the HuggingFace training script. I only need the mixture functionality from seqio, avoiding all the preprocessing, tokenization, etc.

In summary, I only need a way to feed in raw text samples from multiple languages, use the mixture functionality from seqio, and get back an iterator that outputs samples drawn from a mixture of all the languages (in raw text form).

Is there a way of actually obtaining that?

If not, do you know of any way I could get the mixture functionality without using seqio?
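
For example, something like the following is what I have in mind (an untested sketch on my end; the task and mixture names, the equal rates, and the empty output_features are just placeholders, assuming seqio accepts a task with no tokenization at all):

import functools

import seqio
import tensorflow as tf
from datasets import load_dataset
from seqio import utils

# Hypothetical per-language HF dataset configs (placeholders).
LANG_DATASET_PARAMS = {
    "mr": {"path": "oscar-corpus/OSCAR-2109", "language": "mr", "use_auth_token": True},
    "hi": {"path": "oscar-corpus/OSCAR-2109", "language": "hi", "use_auth_token": True},
}


def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None):
    dataset = load_dataset(**dataset_params)
    if shuffle:
        dataset = dataset.shuffle(seed=seed) if seed else dataset.shuffle()
    for item in dataset[str(split)]:
        yield item[column]


def dataset_fn(split, shuffle_files, seed=None, dataset_params=None):
    return tf.data.Dataset.from_generator(
        functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),
        output_signature=tf.TensorSpec(shape=(), dtype=tf.string),
    )


@utils.map_over_dataset
def to_dict(text):
    # Keep the example as raw text; no tokenization anywhere in the task.
    return {"text": text}


for lang, params in LANG_DATASET_PARAMS.items():
    seqio.TaskRegistry.add(
        f"raw_text_{lang}",
        source=seqio.FunctionDataSource(
            dataset_fn=functools.partial(dataset_fn, dataset_params=params),
            splits=("train", "validation"),
            caching_permitted=False,
        ),
        preprocessors=[to_dict],
        output_features={},  # nothing to tokenize, trim, or append EOS to
        metric_fns=[],
    )

# Sample between the per-language tasks; equal rates here, but they could be
# proportional to corpus size.
seqio.MixtureRegistry.add(
    "raw_text_mixture",
    [(f"raw_text_{lang}", 1.0) for lang in LANG_DATASET_PARAMS],
)

ds = seqio.get_mixture_or_task("raw_text_mixture").get_dataset(
    sequence_length=None, split="train", shuffle=True, seed=42)
for _, ex in zip(range(5), ds.as_numpy_iterator()):
    print(ex["text"])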