thevasudevgupta / gsoc-wav2vec2

GSoC'2021 | TensorFlow implementation of Wav2Vec2

Home Page: https://thevasudevgupta.github.io/gsoc-wav2vec2/assets/final_report

Discussion

thevasudevgupta opened this issue · comments

Hey @sayakpaul, @MorganR,

I have a few questions before I can start training the model:

  1. The LibriSpeech dataset is available in .flac format, which can be read using tensorflow_io (see the decoding sketch after this list). But AFAIU, Cloud TPUs use a special build of TensorFlow, and tensorflow_io is not working with that version. Is there any workaround for this problem?
  2. There are multiple variants of the LibriSpeech dataset: 100h, 360h, and 500h (see this). In compressed form, 100h takes 6.3 GB, 360h takes 23 GB, and 500h takes 30 GB of disk space. The best model in the paper is obtained by training on the combination of all of them (i.e. 960h). Which dataset should I target, or should I go for 960h only (the dataset will be quite large in uncompressed form)?
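
For reference, here is a minimal sketch of the decoding mentioned in point 1 (essentially what decode_sound in data_utils.py does with tfio.audio.decode_flac; the file path, squeeze, and float scaling below are only illustrative):

import tensorflow as tf
import tensorflow_io as tfio

# Read one LibriSpeech .flac file and decode it into a waveform tensor.
flac_bytes = tf.io.read_file("sample.flac")                  # placeholder path
audio = tfio.audio.decode_flac(flac_bytes, dtype=tf.int16)   # shape: (samples, channels)
audio = tf.squeeze(audio, axis=-1)                           # LibriSpeech audio is mono
audio = tf.cast(audio, tf.float32) / 32768.0                 # scale int16 to [-1, 1]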

Thanks!

  1. I have used tensorflow_io with Cloud TPUs. Have you tried an updated version on the TPU hardware? Also, if you could paste the error you are facing, that would be great. And how are you reading data while on TPUs?
  2. Since we are aiming for SOTA models I think we should go for 500h at least. @MorganR what do you think?

@sayakpaul,

I am getting this error (when running python3 main.py from the vg branch):

WARNING:tensorflow:AutoGraph could not transform <function decode_flac at 0x7f82400a6040> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: unable to open file: libtensorflow_io.so, from paths: ['/home/vasudevgupta/.local/lib/python3.8/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['libtensorflow_framework.so.2: cannot open shared object file: No such file or directory']
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
2021-06-26 14:48:16.704169: E tensorflow/core/framework/op_kernel.cc:1623] OpKernel ('op: "TpuHandleToProtoKey" device_type: "CPU"') for unknown op: TpuHandleToProtoKey
2021-06-26 14:48:16.704211: E tensorflow/core/framework/op_kernel.cc:1623] OpKernel ('op: "TPURoundRobin" device_type: "CPU"') for unknown op: TPURoundRobin
2021-06-26 14:48:16.904596: E tensorflow/core/framework/op_kernel.cc:1623] OpKernel ('op: "TpuHandleToProtoKey" device_type: "CPU"') for unknown op: TpuHandleToProtoKey
2021-06-26 14:48:16.904651: E tensorflow/core/framework/op_kernel.cc:1623] OpKernel ('op: "TPURoundRobin" device_type: "CPU"') for unknown op: TPURoundRobin
Traceback (most recent call last):
  File "main.py", line 135, in <module>
    main(args, resolver)
  File "main.py", line 70, in main
    tr_dataset = LibriSpeechDataLoader(tr_data_args)(seed=args.seed)
  File "/home/vasudevgupta/gsoc-wav2vec2/src/data_utils.py", line 43, in __call__
    dataset = self._build_and_fetch_dataset()
  File "/home/vasudevgupta/gsoc-wav2vec2/src/data_utils.py", line 96, in _build_and_fetch_dataset
    input_dataset = input_dataset.map(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1861, in map
    return ParallelMapDataset(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 4733, in __init__
    self._map_func = StructuredFunctionWrapper(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 3923, in __init__
    self._function = fn_factory()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 3143, in get_concrete_function
    graph_function = self._get_concrete_function_garbage_collected(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 3109, in _get_concrete_function_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 3456, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 3291, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1007, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 3898, in wrapped_fn
    ret = wrapper_helper(*args)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 3828, in wrapper_helper
    ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py", line 695, in wrapper
    raise e.ag_error_metadata.to_exception(e)
NotImplementedError: in user code:

    /home/vasudevgupta/gsoc-wav2vec2/src/data_utils.py:106 decode_sound  *
        audio = tfio.audio.decode_flac(audio, dtype=tf.int16)
    /home/vasudevgupta/.local/lib/python3.8/site-packages/tensorflow_io/python/ops/audio_ops.py:480 decode_flac  **
        return core_ops.io_audio_decode_flac(input, shape=shape, dtype=dtype, name=name)
    /home/vasudevgupta/.local/lib/python3.8/site-packages/tensorflow_io/python/ops/__init__.py:88 __getattr__
        return getattr(self._load(), attrb)
    /home/vasudevgupta/.local/lib/python3.8/site-packages/tensorflow_io/python/ops/__init__.py:84 _load
        self._mod = _load_library(self._library)
    /home/vasudevgupta/.local/lib/python3.8/site-packages/tensorflow_io/python/ops/__init__.py:69 _load_library
        raise NotImplementedError(

    NotImplementedError: unable to open file: libtensorflow_io.so, from paths: ['/home/vasudevgupta/.local/lib/python3.8/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
    caused by: ['libtensorflow_framework.so.2: cannot open shared object file: No such file or directory']

After this error message, if I install tensorflow_io again with pip3 install tensorflow_io, it is able to load the file, but I lose access to the TPUs.

I am using the new Cloud TPUs into which we can SSH directly (gcloud alpha compute tpus tpu-vm). I am loading data directly, as specified in my LibriSpeechDataLoader.

NotImplementedError: unable to open file: libtensorflow_io.so, from paths: ['/home/vasudevgupta/.local/lib/python3.8/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['libtensorflow_framework.so.2: cannot open shared object file: No such file or directory']

This is the root cause. TPUs cannot read from your local filesystem. You need to either host your entire dataset inside a GCS bucket or create TFRecords out of your dataset and store them inside a GCS bucket. Follow these guidelines to quickly set up an AI Platform Notebook instance (attached to a TPU). That way you get a cheap VM that has a TPU attached, a terminal, and everything else that you may need.

If you use that script, be sure to verify its arguments and modify it so that it installs the latest TensorFlow version. Let me know if anything is unclear.
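
A minimal sketch of what the GCS route could look like with tf.data (the bucket path and feature keys below are hypothetical, not the ones used in this repository):

import tensorflow as tf

GCS_PATTERN = "gs://my-bucket/librispeech/train-*.tfrecord"  # hypothetical bucket

feature_description = {
    "audio": tf.io.VarLenFeature(tf.float32),                # variable-length waveform
    "text": tf.io.FixedLenFeature([], tf.string),            # transcription
}

def parse_example(serialized):
    example = tf.io.parse_single_example(serialized, feature_description)
    return tf.sparse.to_dense(example["audio"]), example["text"]

files = tf.data.Dataset.list_files(GCS_PATTERN)
dataset = tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)
dataset = dataset.map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.prefetch(tf.data.AUTOTUNE)

Everything the pipeline reads here lives at a gs:// path, which is what the advice above boils down to.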

Thanks for the reply.

A small follow-up question: does this hold for the alpha TPUs (which can be used just like GPUs) as well? Earlier I trained a model just as I am doing now and it worked for me. (Again, I have never used the normal version of TPUs before, so I'm not sure if I'm asking a relevant question.)

Good question. This probably has to do with compatibility issues of TPUs and large datasets that do not fit in memory. What dataset did you use to test-drive the alpha TPU VMs? Could you also comment on its size?

Actually, I haven't tried them out yet.

I have used the natural-questions dataset (~100 GB). That project was in Flax, though, so maybe in TensorFlow I need to load from GCS only.

Likely. If you have already used a 100 GB dataset then I would like us to consider the 960h version of the dataset for this project. @MorganR what do you think?

I have converted the dataset to .tfrecord format and it's working, so the first issue is resolved. After conversion to .tfrecord, the whole dataset takes around ~280 GB on disk.
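
For reference, a minimal sketch of that kind of conversion (the feature keys, dummy sample, and output path below are only illustrative, not the exact ones I used):

import numpy as np
import tensorflow as tf

# Hypothetical stand-in for real (waveform, transcription) pairs.
samples = [(np.zeros(16000, dtype=np.float32), "hello world")]

def to_example(waveform, transcription):
    # Pack one waveform/transcription pair into a tf.train.Example.
    return tf.train.Example(features=tf.train.Features(feature={
        "audio": tf.train.Feature(float_list=tf.train.FloatList(value=waveform)),
        "text": tf.train.Feature(bytes_list=tf.train.BytesList(value=[transcription.encode()])),
    }))

with tf.io.TFRecordWriter("train-00000.tfrecord") as writer:
    for waveform, transcription in samples:
        writer.write(to_example(waveform, transcription).SerializeToString())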

Thanks!!

Yes, working on 960h now.