mlcommons / inference

Reference implementations of MLPerf™ inference benchmarks

Home Page: https://mlcommons.org/en/groups/inference

CNNDM download failing for Nvidia v3.1 scripts

arjunsuresh opened this issue · comments

Inside the Nvidia Docker container, BENCHMARKS=gptj make download_data fails as shown below.

(mlperf) arjun@mlperf-inference-arjun-x86-64-411:/work$ BENCHMARKS=gptj make download_data
Inside container, start downloading...
Downloading and preparing dataset None/1.0.0 to /home/arjun/.cache/huggingface/datasets/parquet/1.0.0-b3218b6f9a2cf018/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 7621.39it/s]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1385.33it/s]
Traceback (most recent call last):
  File "build/inference/language/gpt-j/download_cnndm.py", line 24, in <module>
    dataset = load_dataset(dataset_id, name=dataset_config)
  File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 1797, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 890, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1003, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/usr/local/lib/python3.8/dist-packages/datasets/utils/info_utils.py", line 100, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1261703785, num_examples=287113, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=3785111355, num_examples=861339, shard_lengths=[115705, 115704, 115704, 115705, 121408, 115705, 115704, 45704], dataset_name='parquet')}, {'expected': SplitInfo(name='validation', num_bytes=57732412, num_examples=13368, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=173197236, num_examples=40104, shard_lengths=None, dataset_name='parquet')}, {'expected': SplitInfo(name='test', num_bytes=49925732, num_examples=11490, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='test', num_bytes=149777196, num_examples=34470, shard_lengths=None, dataset_name='parquet')}]
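
The failure comes from the datasets library's split-size verification: the split sizes recorded while preparing the freshly downloaded parquet files do not match the sizes the pre-installed (older) datasets release expects for this dataset, so verify_splits raises NonMatchingSplitsSizesError. Every recorded count is exactly three times the expected one (e.g. 861339 = 3 × 287113 for train), which suggests the old loader is aggregating more parquet shards than its recorded metadata accounts for. The failing call can be reproduced outside the Makefile with a minimal sketch, assuming the script's dataset_id and dataset_config resolve to cnn_dailymail and 3.0.0 as in the reference download_cnndm.py:

from datasets import load_dataset

# Minimal repro of the call in build/inference/language/gpt-j/download_cnndm.py.
# dataset_id / dataset_config here are assumptions matching the reference script.
dataset_id = "cnn_dailymail"
dataset_config = "3.0.0"

# With the outdated datasets release shipped in the container this raises
# NonMatchingSplitsSizesError; with an up-to-date release it completes normally.
dataset = load_dataset(dataset_id, name=dataset_config)
print({split: len(ds) for split, ds in dataset.items()})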
Upgrading the datasets package inside the container fixed this:

python3 -m pip install --upgrade datasets
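
After the upgrade, a quick sanity check (a hypothetical snippet, not part of the Nvidia scripts) can confirm that the splits now load with the example counts the traceback listed as expected, before re-running BENCHMARKS=gptj make download_data:

import datasets
from datasets import load_dataset

print(datasets.__version__)  # should report the freshly upgraded release

# Expected example counts, taken from the 'expected' SplitInfo entries in the traceback above.
expected = {"train": 287113, "validation": 13368, "test": 11490}

# cnn_dailymail / 3.0.0 assumed, as in the reference download_cnndm.py.
dataset = load_dataset("cnn_dailymail", name="3.0.0")
for split, count in expected.items():
    assert len(dataset[split]) == count, (split, len(dataset[split]), count)
print("Split sizes match; safe to re-run the download step.")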