mosaicml / llm-foundry

LLM training code for Databricks foundation models

Home Page: https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm


Error loading JSON fine-tuning datasets

lorabit110 opened this issue · comments

Environment

Collecting system information...
---------------------------------
System Environment Report        
Created: 2024-01-10 00:14:47 UTC
---------------------------------

PyTorch information
-------------------
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31

Python version: 3.10.13 (main, Aug 25 2023, 13:20:03) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1013-oracle-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB

Nvidia driver version: 515.48.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   48 bits physical, 48 bits virtual
CPU(s):                          256
On-line CPU(s) list:             0-127
Off-line CPU(s) list:            128-255
Thread(s) per core:              1
Core(s) per socket:              64
Socket(s):                       2
NUMA node(s):                    8
Vendor ID:                       AuthenticAMD
CPU family:                      25
Model:                           1
Model name:                      AMD EPYC 7J13 64-Core Processor
Stepping:                        1
Frequency boost:                 enabled
CPU MHz:                         2550.000
CPU max MHz:                     3673.0950
CPU min MHz:                     1500.0000
BogoMIPS:                        4900.16
Virtualization:                  AMD-V
L1d cache:                       4 MiB
L1i cache:                       4 MiB
L2 cache:                        64 MiB
L3 cache:                        512 MiB
NUMA node0 CPU(s):               0-15
NUMA node1 CPU(s):               16-31
NUMA node2 CPU(s):               32-47
NUMA node3 CPU(s):               48-63
NUMA node4 CPU(s):               64-79
NUMA node5 CPU(s):               80-95
NUMA node6 CPU(s):               96-111
NUMA node7 CPU(s):               112-127
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2:        Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm

Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.1.0
[pip3] torch-optimizer==0.3.0
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.0.3
[pip3] torchtext==0.16.2
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[pip3] triton-pre-mlir==2.0.0
[conda] Could not collect


Composer information
--------------------
Composer version: 0.17.2
Composer commit hash: None
Host processor model name: AMD EPYC 7J13 64-Core Processor
Host processor core count: 128
Number of nodes: 0
Accelerator model name: NVIDIA A100-SXM4-40GB
Accelerators per node: 8
CUDA Device Count: 8

To reproduce

Use scripts/train.py with a YAML file containing the config below.

  train_loader:
    name: finetuning
    dataset:
      ############
      hf_name: json
      hf_kwargs:
        data_files: gs://foo/bar.json
      split: train
      ############
      shuffle: true
      max_seq_len: ${max_seq_len}
      allow_pad_trimming: false
      decoder_only_format: true
    drop_last: true
    num_workers: 8
    pin_memory: false
    prefetch_factor: 2
    persistent_workers: true
    timeout: 0
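With `hf_name: json`, the finetuning loader delegates to the Hugging Face `datasets` JSON builder, which expects each data file to be newline-delimited JSON (one object per line). A minimal sketch of what such a file might look like, assuming the default prompt/response schema (the exact keys can differ if a custom preprocessing function is configured):

```python
import json

# Each line is one standalone JSON object. The "prompt"/"response"
# keys here are an assumption matching llm-foundry's default
# finetuning preprocessing; adjust to your own schema as needed.
records = [
    {"prompt": "What is 2 + 2?", "response": "4"},
    {"prompt": "Name a prime number.", "response": "7"},
]

with open("bar.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Round-trip check: every line must parse as JSON on its own.
with open("bar.jsonl") as f:
    parsed = [json.loads(line) for line in f]

assert parsed == records
```

Any file in `data_files` that does not follow this shape (for example, a stray Python script matched by an overly broad pattern) will fail to parse.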

The following error occurs while loading the dataset (before the pretrained model weights are downloaded):

Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/orby/llm-foundry/scripts/train/benchmarking/collect_results.py' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0
2024-01-09 20:57:58,097: rank7[2956][MainThread]: ERROR: datasets.packaged_modules.json.json: Failed to read file '/orby/llm-foundry/scripts/train/benchmarking/collect_results.py' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0

Generating train split: 0 examples [00:00, ? examples/s]
2024-01-09 20:57:58,371: rank7[2956][MainThread]: ERROR: llmfoundry.data.finetuning.tasks: Error during data prep
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/datasets/packaged_modules/json/json.py", line 144, in _generate_tables
    dataset = json.load(f)
  File "/usr/lib/python3.10/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
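Note that the file in the error is a Python script (`collect_results.py`), not a JSON data file, which suggests the loader globbed the wrong files entirely. Feeding any non-JSON text to the standard decoder reproduces the exact error in the traceback, as this small sketch shows:

```python
import json

# A Python source file is not valid JSON, so decoding fails at the
# very first character -- the same "Expecting value: line 1 column 1
# (char 0)" error seen in the traceback above.
python_source = "#!/usr/bin/env python\nimport argparse\n"

try:
    json.loads(python_source)
    error = None
except json.JSONDecodeError as e:
    error = e

assert error is not None
assert error.msg == "Expecting value"
assert (error.lineno, error.colno) == (1, 1)
```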

Expected behavior

The dataset should load and be processed correctly.

Additional context

Commit 083b4b2 works for us, but the latest main branch doesn't.
f0fd749 might have introduced the issue; we didn't test the commits between the current main and 083b4b2.
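Until the fix lands, one way to catch this early is to validate the data files locally before launching training. `first_invalid_line` below is a hypothetical helper (not part of llm-foundry) that reports the first line of a file that fails to parse as a standalone JSON object:

```python
import json

def first_invalid_line(path):
    # Return the 1-based number of the first non-blank line that is
    # not valid JSON, or None if every line parses. Hypothetical
    # pre-flight check, not an llm-foundry API.
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue
            try:
                json.loads(line)
            except json.JSONDecodeError:
                return lineno
    return None

# Demo: a well-formed JSONL file passes; a Python script does not.
with open("good.jsonl", "w") as f:
    f.write('{"prompt": "hi", "response": "hello"}\n')
with open("bad.jsonl", "w") as f:
    f.write("import argparse\n")

assert first_invalid_line("good.jsonl") is None
assert first_invalid_line("bad.jsonl") == 1
```

Running this over every file matched by `data_files` would have flagged `collect_results.py` before the training job started.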

@lorabit110 Thank you for reporting! #853 should fix it. It was due to a typo.