huggingface / datatrove

Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.


Track processed input files to avoid re-processing them when restarting

marianna13 opened this issue

Hey datatrove team! I have one feature request:

Current state of things: Currently, if you use the SlurmExecutor for distributed processing (e.g. tokenization), each task writes to a separate file, and when you restart, the ranks that already finished are skipped (here).
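For readers unfamiliar with this behaviour, here is a minimal sketch of the kind of per-rank completion tracking being described: each finished rank leaves a marker file, and a restarted run skips ranks whose marker already exists. The paths, marker layout and `process_fn` callable are illustrative, not datatrove's actual implementation.

```python
from pathlib import Path

COMPLETIONS_DIR = Path("logs/completions")  # hypothetical marker location

def is_rank_completed(rank: int) -> bool:
    return (COMPLETIONS_DIR / f"{rank:05d}").exists()

def mark_rank_completed(rank: int) -> None:
    COMPLETIONS_DIR.mkdir(parents=True, exist_ok=True)
    (COMPLETIONS_DIR / f"{rank:05d}").touch()

def run_task(rank: int, process_fn) -> None:
    if is_rank_completed(rank):
        return  # this rank finished before the restart, skip it entirely
    process_fn(rank)  # run the actual pipeline for this rank
    mark_rank_completed(rank)
```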

Problem to be solved: The problem is that when you have a limited number of tasks but a large amount of data to process (e.g. because of a specific cluster configuration), you end up with only a small number of output files to write to. If you then have to restart the processing job, none of those few output files may have been completed yet, so you start all over again.

Suggested solution: Track all the processed input files, and possibly the id of the last processed row (in case of a large input file), so that processing can restart from that point.
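For concreteness, a rough sketch of what this tracking could look like (this is not existing datatrove code; the checkpoint path and the `read_rows`/`write_row` callables are placeholders): persist the list of fully processed input files plus the last row written for the file in progress, and skip up to that point on restart.

```python
import json
from pathlib import Path

CHECKPOINT = Path("logs/checkpoint_rank_00000.json")  # hypothetical per-rank checkpoint

def load_state() -> dict:
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())
    return {"done_files": [], "current_file": None, "last_row": -1}

def save_state(state: dict) -> None:
    CHECKPOINT.write_text(json.dumps(state))

def process_files(files, read_rows, write_row):
    """read_rows(path) yields rows; write_row(row) writes one output row."""
    state = load_state()
    for path in files:
        if path in state["done_files"]:
            continue  # fully processed before the restart
        resume_from = state["last_row"] + 1 if state["current_file"] == path else 0
        for row_id, row in enumerate(read_rows(path)):
            if row_id < resume_from:
                continue  # already written before the crash
            write_row(row)  # assumes output is flushed often enough to trust last_row
            state["current_file"], state["last_row"] = path, row_id
            save_state(state)
        state["done_files"].append(path)
        state["current_file"], state["last_row"] = None, -1
        save_state(state)
```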

Please let me know what you think; I would be happy to help implement this.

Hi, I am not sure this would be trivial, as you'd also have to recheck what was written to each output file/when the buffer was last flushed, and this would potentially break processing steps where you actually need to process the entire data when resuming a failed job (for example, when computing the signatures of a file for deduplication).
Can you give some more details on your particular slurm limitations? Is there a limit on the size of a job array, on the number of jobs running simultaneously, or on the actual total number of jobs (including those waiting) on the cluster? One possible workaround would be to make each slurm job in the array run multiple datatrove tasks: this way you could still have many total tasks (and thus a small amount of data per task, giving better resuming) without increasing the total number of slurm jobs.
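To make that workaround concrete, here is a rough sketch (the names are illustrative, not datatrove internals) of how one slurm array job could loop over several logical datatrove tasks:

```python
import os

TASKS_PER_JOB = 10  # assumed setting: logical datatrove tasks per slurm job

def run_rank(rank: int) -> None:
    print(f"running datatrove task {rank}")  # placeholder for the real pipeline

# Each slurm array element handles a contiguous block of logical ranks,
# so e.g. 200 tasks only need a 20-element job array.
job_index = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))
for rank in range(job_index * TASKS_PER_JOB, (job_index + 1) * TASKS_PER_JOB):
    run_rank(rank)
```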

Hey Guilherme,

Yes, on my cluster I have a limit on how many jobs I can run in total, so I run into this issue while processing large datasets. How do I make each slurm job run multiple datatrove tasks? I tried specifying ntasks-per-node in the sbatch args, but I don't think that's the right mechanism (datatrove checks the slurm ARRAY variables and doesn't care about the number of tasks per node).

This option isn't present in the current code. I have added support for it here (untested): #153. Let me know if it works for you/solves your problem.
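For reference, this is roughly how I would expect the new option to be used; tasks_per_job is the argument added in #153, while the remaining executor/reader arguments are assumptions about a typical setup and may need adjusting to your datatrove version and cluster:

```python
from datatrove.executor import SlurmPipelineExecutor
from datatrove.pipeline.readers import JsonlReader

executor = SlurmPipelineExecutor(
    pipeline=[
        JsonlReader("s3://my-bucket/raw-data/"),  # example input location
        # ... tokenization or other processing blocks ...
    ],
    tasks=200,          # many small tasks -> finer-grained resuming
    tasks_per_job=10,   # pack 10 datatrove tasks into each slurm job (20 jobs total)
    time="24:00:00",
    partition="cpu",
    logging_dir="logs/tokenize",
)
executor.run()
```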

Hey Guilherme,
I tried tasks_per_job=10 (i.e. 1 node with 10 parallel datatrove tasks), but it makes the whole thing very slow (way slower than 1 task per job). Is there any reason why that might be the case? I mean, I'd expect it to be about 10x slower because we allocate 10x fewer resources per task, but it's more like 100x slower.