lifeiteng / vall-e

PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech), reproduced demo: https://lifeiteng.github.io/valle/index.html

multi-GPU Pickle-load issue in prefix-mode 4

orantake opened this issue · comments

Hi,
When training VALL-E on LibriTTS with prefix mode 4, the data loader fails with a pickling error.
Could you help solve this problem?
The full traceback is below.
I think the error comes from a defaultdict whose default factory is a lambda defined inside __init__.

Traceback (most recent call last):
  File "/home/work/tts/user/Model/TTS/VALL-E/vall-e.kt/egs/LibriTTS/bin/trainer.py", line 1214, in <module>
    main()
  File "/home/work/tts/user/Model/TTS/VALL-E/vall-e.kt/egs/LibriTTS/bin/trainer.py", line 1205, in main
    mp.spawn(run, args=(world_size, args), nprocs=world_size, join=True)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/root/miniconda3/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/root/miniconda3/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException: 

-- Process 1 terminated with the following error:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/home/work/tts/user/Model/TTS/VALL-E/vall-e.kt/egs/LibriTTS/bin/trainer.py", line 1093, in run
    train_one_epoch(
  File "/home/work/tts/user/Model/TTS/VALL-E/vall-e.kt/egs/LibriTTS/bin/trainer.py", line 673, in train_one_epoch
    iter_dl = iter(train_dl)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 435, in __iter__
    return self._get_iterator()
  File "/root/miniconda3/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 381, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1034, in __init__
    w.start()
  File "/root/miniconda3/lib/python3.10/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/root/miniconda3/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/root/miniconda3/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
    return Popen(process_obj)
  File "/root/miniconda3/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/root/miniconda3/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/root/miniconda3/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/root/miniconda3/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'PromptedPrecomputedFeatures.__init__.<locals>.<lambda>'
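For context, this AttributeError typically appears when DataLoader workers are started with the spawn start method and the dataset (or its input strategy) must be pickled, but it holds a defaultdict whose default factory is a lambda created inside __init__; a lambda defined in a method body is a local object that the standard pickler cannot serialize. Below is a minimal sketch of the usual workaround, using hypothetical names rather than the actual PromptedPrecomputedFeatures code:

from collections import defaultdict

def _empty_list():
    # A module-level function pickles by reference, unlike a lambda
    # defined inside a method body.
    return []

class FeaturesWithPrompts:
    # Hypothetical stand-in for PromptedPrecomputedFeatures, used only
    # to illustrate the pickling problem; not the actual repo code.
    def __init__(self):
        # Breaks under the spawn start method: the lambda is a local
        # object and ForkingPickler cannot serialize it.
        #   self.prompts = defaultdict(lambda: [])
        # Works: a named module-level factory (or simply `list`) is picklable.
        self.prompts = defaultdict(_empty_list)  # or defaultdict(list)

With spawn, every DataLoader worker unpickles the dataset and its input strategy, so anything stored on PromptedPrecomputedFeatures has to be picklable; functools.partial wrapping a module-level function also pickles cleanly if the lambda needs arguments.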