wenet-e2e / wespeaker

Research and Production Oriented Speaker Verification, Recognition and Diarization Toolkit

Dataset loading is slow, so training takes many days

fbweminem5 opened this issue

I pulled the latest wespeaker code and found that GPU utilization is unstable and frequently drops to 0. My guess is that the CPU reads data too slowly, leaving the GPU idle.
The batch size and other DataLoader settings are all defaults; running 1000 batches takes roughly 0.5 h. Based on the provided config, training 150 epochs on VoxCeleb would take several days.

My code:

from time import time

from torch.utils.data import DataLoader

# NOTE: the three wespeaker imports below are assumed from the current repo
# layout and may need adjusting depending on the checked-out version.
from wespeaker.dataset.dataset import Dataset
from wespeaker.utils.file_utils import read_table
from wespeaker.utils.utils import spk2id

train_label = "data/vox2_dev/utt2spk"
train_utt_spk_list = read_table(train_label)
spk2id_dict = spk2id(train_utt_spk_list)

dataset_start_time = time()
dataset = Dataset("shard", "data/vox2_dev/shard.list",
                  {'aug_prob': 0.6,
                   'fbank_args': {
                       'dither': 1.0,
                       'frame_length': 25,
                       'frame_shift': 10,
                       'num_mel_bins': 80},
                   'filter': True,
                   'filter_args': {'max_num_frames': 800, 'min_num_frames': 100},
                   'num_frms': 200,
                   'resample_rate': 16000,
                   'sample_num_per_epoch': 0,
                   'shuffle': True,
                   'shuffle_args': {'shuffle_size': 2500},
                   'spec_aug': True,
                   'spec_aug_args': {'max_f': 8,
                                     'max_t': 10,
                                     'num_f_mask': 1,
                                     'num_t_mask': 1,
                                     'prob': 0.6},
                   'speed_perturb': True,
                   },
                  spk2id_dict,
                  reverb_lmdb_file='data/rirs/lmdb',
                  noise_lmdb_file='data/musan/lmdb',
                  )

dataloader_args = {
    "batch_size": 128,
    "num_workers": 16,
    "pin_memory": False,
    "prefetch_factor": 100,
    "drop_last": True
}

train_loader = DataLoader(dataset, **dataloader_args)

# Pull 1000 batches to exercise the data pipeline end to end.
for i, batch in enumerate(train_loader):
    if i >= 1000:
        break
dataloader_end_time = time()
print(f"Dataset loading time: {dataloader_end_time - dataset_start_time}s")

I profiled this code with cProfile; so far the bottleneck shows up in selector.poll.

Dataset loading time: 2199.590108156204s
5649461 function calls (5352420 primitive calls) in 2205.473 seconds
....
ncalls tottime percall cumtime percall filename:lineno(function)
986 0.006 0.000 2179.610 2.211 connection.py:253(poll)
986 0.010 0.000 2179.602 2.211 connection.py:423(_poll)
1002 0.027 0.000 2192.961 2.189 connection.py:917(wait)
1 0.002 0.002 0.501 0.501 dataloader.py:1037(__init__)
17 0.000 0.000 0.000 0.000 dataloader.py:1112()
1 0.001 0.001 0.289 0.289 dataloader.py:1117(_reset)
1 0.000 0.000 0.000 0.000 dataloader.py:1133()
986 0.016 0.000 2182.248 2.213 dataloader.py:1150(_try_get_data)
1 0.000 0.000 0.001 0.001 dataloader.py:122(DataLoader)
774 0.005 0.000 2182.253 2.819 dataloader.py:1296(_get_data)
501 0.023 0.000 2182.625 4.357 dataloader.py:1329(_next_data)
2101 0.017 0.000 0.631 0.000 dataloader.py:1378(_try_put_index)
501 0.003 0.000 0.347 0.001 dataloader.py:1398(_process_data)
16 0.000 0.000 0.001 0.000 dataloader.py:1405(_mark_worker_as_unavailable)
1 0.000 0.000 13.374 13.374 dataloader.py:1431(_shutdown_workers)
1 0.000 0.000 13.374 13.374 dataloader.py:1509(__del__)
2101 0.004 0.000 0.325 0.000 dataloader.py:670(_next_index)
501 0.017 0.000 2183.442 4.358 dataloader.py:676(__next__)
1 2.272 2.272 2205.477 2205.477 dataset.py:18(<module>)
986 0.026 0.000 2182.194 2.213 queues.py:98(get)
1002 0.025 0.000 2192.872 2.188 selectors.py:403(select)
1002 2192.841 2.188 2192.842 2.188 {method 'poll' of 'select.poll' objects}
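
For reference, a profile like the one above can be collected roughly as follows (a minimal sketch around the same DataLoader loop; consume_batches is only an illustrative wrapper name, not part of the original script):

import cProfile
import pstats

def consume_batches(loader, n=1000):
    # Iterate the first n batches so the whole data pipeline is exercised.
    for i, _ in enumerate(loader):
        if i >= n:
            break

profiler = cProfile.Profile()
profiler.enable()
consume_batches(train_loader)
profiler.disable()

# Sorting by cumulative time surfaces where the main process waits; here it is
# mostly blocked in selector.poll waiting for worker results.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(30)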

My environment:
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.10.2
Libc version: glibc-2.27

Python version: 3.9.18 (main, Sep 11 2023, 13:41:44) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.0.221
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
GPU 2: GeForce RTX 2080 Ti
GPU 3: GeForce RTX 2080 Ti
GPU 4: GeForce RTX 2080 Ti
GPU 5: GeForce RTX 2080 Ti
GPU 6: GeForce RTX 2080 Ti
GPU 7: GeForce RTX 2080 Ti

Nvidia driver version: 450.57
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.12.1+cu102
[pip3] torchaudio==0.12.1+cu102
[pip3] torchnet==0.0.4
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.13.1+cu102
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.12.1+cu102 pypi_0 pypi
[conda] torchaudio 0.12.1+cu102 pypi_0 pypi
[conda] torchnet 0.0.4 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.13.1+cu102 pypi_0 pypi

Thanks for the analysis. It does look like the time is spent on inter-process communication and data I/O.
Training uses on-the-fly data augmentation, which is quite CPU- and I/O-intensive, so the default num_workers is already set fairly high and we do not have a good way to improve this at the moment. If you have a good idea, you are very welcome to share it and contribute!

If your machine does not actually have that many CPU cores available but you set a larger num_workers, CPU contention and context switching can make things even slower. Please check whether your machine really has that many cores available and reduce num_workers accordingly, as sketched below.
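
A minimal sketch of that check (splitting one worker per available core across ranks is only an illustration, not an official wespeaker recommendation):

import os

# Cores this process may actually use (respects taskset/cgroup limits on
# Linux); fall back to the total core count elsewhere.
try:
    available_cpus = len(os.sched_getaffinity(0))
except AttributeError:
    available_cpus = os.cpu_count() or 1

# If every GPU rank spawns its own DataLoader workers (as in DDP training),
# split the available cores across the ranks.
num_ranks = 8
num_workers = max(1, available_cpus // num_ranks)
print(f"{available_cpus} usable cores -> num_workers={num_workers} per rank")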