apache/mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more

Home Page: https://mxnet.apache.org

DataLoader with workers not compatible with ImageRecordDataset

jwfromm opened this issue · comments

Description

Using a DataLoader with a non-zero number of workers on an ImageRecordDataset crashes. Being able to use multiple workers is essential for high-speed training, and it is already supported by ImageRecordIter, so it should also be possible with DataLoader, which has a much nicer API.
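
For reference, this is the kind of ImageRecordIter setup referred to above, which does decode records with multiple preprocess_threads; the path and data_shape below are placeholders, not taken from the original report.

import mxnet as mx

# ImageRecordIter decodes records in parallel via preprocess_threads.
# The .rec path and data_shape here are placeholders.
train_iter = mx.io.ImageRecordIter(
    path_imgrec="/data/imagenet/val.rec",
    data_shape=(3, 224, 224),
    batch_size=64,
    preprocess_threads=4,
    shuffle=True,
)
for batch in train_iter:
    print(batch.data[0].shape)
    break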

Environment info (Required)

----------Python Info----------
Version      : 3.6.4
Compiler     : GCC 7.2.0
Build        : ('default', 'Jan 16 2018 18:10:19')
Arch         : ('64bit', '')
------------Pip Info-----------
Version      : 9.0.1
Directory    : /opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Version      : 1.1.0
Directory    : /opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet
Commit Hash   : 07a83a0325a3d782513a04f47d711710972cb144
----------System Info----------
Platform     : Linux-4.13.0-32-generic-x86_64-with-debian-stretch-sid
system       : Linux
node         : 243bb3cedee3
release      : 4.13.0-32-generic
version      : #35~16.04.1-Ubuntu SMP Thu Jan 25 10:13:43 UTC 2018
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                12
On-line CPU(s) list:   0-11
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz
Stepping:              1
CPU MHz:               3600.001
CPU max MHz:           4000.0000
CPU min MHz:           1200.0000
BogoMIPS:              7200.00
Virtualization:        VT-x
Hypervisor vendor:     vertical
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-11
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0254 sec, LOAD: 0.5580 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1541 sec, LOAD: 0.0660 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1920 sec, LOAD: 0.1901 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.1562 sec, LOAD: 1.5483 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0516 sec, LOAD: 0.1203 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0299 sec, LOAD: 0.0624 sec.

Package used (Python/R/Scala/Julia):
I'm using Python 3.6

Build info (Required if built from source)

Pip Install

Error Message:

Process Process-1:
Traceback (most recent call last):
  File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/gluon/data/dataloader.py", line 119, in worker_loop
    batch = batchify_fn([dataset[i] for i in samples])
  File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/gluon/data/dataloader.py", line 119, in <listcomp>
    batch = batchify_fn([dataset[i] for i in samples])
  File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/gluon/data/vision/datasets.py", line 284, in __getitem__
    record = super(ImageRecordDataset, self).__getitem__(idx)
  File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/gluon/data/dataset.py", line 180, in __getitem__
    return self._record.read_idx(self._record.keys[idx])
  File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/recordio.py", line 265, in read_idx
    return self.read()
  File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/recordio.py", line 163, in read
    ctypes.byref(size)))
  File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/base.py", line 146, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [00:59:29] src/recordio.cc:65: Check failed: header[0] == RecordIOWriter::kMagic Invalid RecordIO File

Stack trace returned 10 entries:
[bt] (0) /opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2a9e78) [0x7fa348419e78]
[bt] (1) /opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x29531f3) [0x7fa34aac31f3]
[bt] (2) /opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/mxnet/libmxnet.so(MXRecordIOReaderReadRecord+0x1e) [0x7fa34a54c67e]
[bt] (3) /opt/conda/envs/pytorch-py3.6/lib/python3.6/lib-dynload/../../libffi.so.6(ffi_call_unix64+0x4c) [0x7fa38c088ec0]
[bt] (4) /opt/conda/envs/pytorch-py3.6/lib/python3.6/lib-dynload/../../libffi.so.6(ffi_call+0x22d) [0x7fa38c08887d]
[bt] (5) /opt/conda/envs/pytorch-py3.6/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(_ctypes_callproc+0x2ce) [0x7fa38c29ddee]
[bt] (6) /opt/conda/envs/pytorch-py3.6/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(+0x12825) [0x7fa38c29e825]
[bt] (7) /opt/conda/envs/pytorch-py3.6/bin/python(_PyObject_FastCallDict+0x8b) [0x5556015941bb]
[bt] (8) /opt/conda/envs/pytorch-py3.6/bin/python(+0x19cd3e) [0x555601621d3e]
[bt] (9) /opt/conda/envs/pytorch-py3.6/bin/python(_PyEval_EvalFrameDefault+0x30a) [0x55560164619a]

Minimum reproducible example

# Assumes you have an ImageRecord file to read from
import mxnet as mx

test = mx.gluon.data.vision.datasets.ImageRecordDataset("/data/imagenet/val.rec")
test_data = mx.gluon.data.DataLoader(test, batch_size=64, num_workers=2)
for data, label in test_data:
    print(data)
    break

Steps to reproduce

Run the above code.

What have you tried to solve it?

Would require changes to how ImageRecordDatasets access the records.

Same issue here; that's pretty problematic for preprocessing-heavy datasets.

Having done a bit of digging, here is what I think is happening. When the child processes are created, they all get a copy of the ImageRecordDataset. This ImageRecordDataset holds an open read handle on a .rec file, so when the other workers call .seek(index) to read data, they conflict with whichever worker is currently reading bytes from the file. If that's the case, a solution could be to open a new file handle on the .rec file in the worker loop.
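
For illustration only, a minimal sketch of the suspected conflict: one MXIndexedRecordIO handle opened in the parent and then used concurrently by forked children. The paths are placeholders, not taken from the original report.

import multiprocessing as mp
from mxnet import recordio

# One handle is opened in the parent; after fork() both children inherit it,
# and their interleaved seek/read calls step on each other's file position.
record = recordio.MXIndexedRecordIO("/data/val.idx", "/data/val.rec", "r")

def read_half(keys):
    for k in keys:
        record.read_idx(k)  # seek + read on the shared handle

keys = list(record.keys)
procs = [mp.Process(target=read_half, args=(keys[i::2],)) for i in range(2)]
for p in procs:
    p.start()
for p in procs:
    p.join()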

I tested the above solution, and it looks like it does solve the issue. I'm not sure what would be the best way to properly implement it, though. Have a reinitialize() function at the Dataset level?

Hot-fix for this problem (use at your own risk):

import os

import mxnet as mx
from mxnet import gluon
from mxnet.gluon.data import RecordFileDataset
from mxnet.gluon.data.dataloader import DataLoader
from mxnet import recordio

# We keep the filename as an attribute
# So that we can open a new handle per process
# in the dataloader

def __init__new(self, filename):
    self._filename = filename
    self.reinitialize()
    
def reinitialize(self):
    idx_file = os.path.splitext(self._filename)[0] + '.idx'
    self._record = recordio.MXIndexedRecordIO(idx_file, self._filename, 'r')
    
RecordFileDataset.reinitialize = reinitialize
RecordFileDataset.__init__ = __init__new

# We modify the dataloader worker_loop to reinit the dataset if possible
# And then call to the original worker_loop

gluon.data.dataloader.worker_loop_old = gluon.data.dataloader.worker_loop

def worker_loop_new(dataset, key_queue, data_queue, batchify_fn):
    if 'reinitialize' in dir(dataset):
        dataset.reinitialize()
    gluon.data.dataloader.worker_loop_old(dataset, key_queue, data_queue, batchify_fn)

gluon.data.dataloader.worker_loop = worker_loop_new
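
With the monkey-patch above applied before the DataLoader is constructed, the original repro should then iterate with multiple workers; the .rec path is again a placeholder.

# Usage sketch: relies on the imports and patch from the snippet above.
test = mx.gluon.data.vision.datasets.ImageRecordDataset("/data/imagenet/val.rec")
test_data = mx.gluon.data.DataLoader(test, batch_size=64, num_workers=2)
for data, label in test_data:
    print(data.shape)
    break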

If this is an acceptable fix, it would be great to get it into the master branch, since I'm sure other people will start hitting this error soon.

@jwfromm I will try to work on a PR this week if time allows.

Unfortunately, it seems this fix doesn't always work. When attempting to use it on an ImageNet record, I got the following error:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "<ipython-input-13-e05cb056ee0e>", line 24, in worker_loop
    mx.gluon.data.dataloader.worker_loop_(dataset, key_queue, data_queue, batchify_fn)
  File "<ipython-input-9-e05cb056ee0e>", line 24, in worker_loop
    mx.gluon.data.dataloader.worker_loop_(dataset, key_queue, data_queue, batchify_fn)
  File "<ipython-input-9-e05cb056ee0e>", line 24, in worker_loop
    mx.gluon.data.dataloader.worker_loop_(dataset, key_queue, data_queue, batchify_fn)
  File "<ipython-input-9-e05cb056ee0e>", line 24, in worker_loop
    mx.gluon.data.dataloader.worker_loop_(dataset, key_queue, data_queue, batchify_fn)
  [Previous line repeated 2937 more times]
  File "<ipython-input-9-e05cb056ee0e>", line 23, in worker_loop
    dataset.reinit()
  File "<ipython-input-13-e05cb056ee0e>", line 11, in reinit
    self._record = mx.recordio.MXIndexedRecordIO(idx_file, self._filename, 'r')
  File "/incubator-mxnet/python/mxnet/recordio.py", line 199, in __init__
    super(MXIndexedRecordIO, self).__init__(uri, flag)
  File "/incubator-mxnet/python/mxnet/recordio.py", line 69, in __init__
    self.open()
  File "/incubator-mxnet/python/mxnet/recordio.py", line 205, in open
    self.fidx = open(self.idx_path, self.flag)
  File "/opt/conda/lib/python3.6/codecs.py", line 309, in __init__
    IncrementalDecoder.__init__(self, errors)
RecursionError: maximum recursion depth exceeded
Exception ignored in: <bound method MXRecordIO.__del__ of <mxnet.recordio.MXIndexedRecordIO object at 0x7f0489973588>>
Traceback (most recent call last):
  File "/incubator-mxnet/python/mxnet/recordio.py", line 84, in __del__
    self.close()
  File "/incubator-mxnet/python/mxnet/recordio.py", line 218, in close
    self.fidx.close()
AttributeError: 'NoneType' object has no attribute 'close'

It looks like you might be running the fix twice, which would point the worker loop to itself?

I'm not running it twice, and this error isn't related to the fix. It looks like it's a separate bug entirely, since it occurs even without the fix when num_workers is set to 1. num_workers at 0 works fine, though! I'll have to dig in a little more to see what's going on.

The bug above is due to something on the master branch; when I revert to v1.1, your fix works great!

@jwfromm opened a PR with the fix #10628

The PR above is closed, but it is confusing that we cannot use a record file in a DataLoader with multiprocessing.
Is anyone still working on this issue?

I still have this problem in 1.3.0

/data1/zj/crnn.gluon/venv/bin/python /data1/zj/crnn.gluon/dataset.py
101
Process Process-5:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/gluon/data/dataloader.py", line 169, in worker_loop
    batch = batchify_fn([dataset[i] for i in samples])
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/gluon/data/dataloader.py", line 169, in <listcomp>
    batch = batchify_fn([dataset[i] for i in samples])
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/gluon/data/dataset.py", line 131, in __getitem__
    item = self._data[idx]
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/gluon/data/vision/datasets.py", line 257, in __getitem__
    record = super(ImageRecordDataset, self).__getitem__(idx)
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/gluon/data/dataset.py", line 189, in __getitem__
    return self._record.read_idx(self._record.keys[idx])
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/recordio.py", line 265, in read_idx
    return self.read()
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/recordio.py", line 163, in read
    ctypes.byref(size)))
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/base.py", line 252, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [11:40:51] src/recordio.cc:65: Check failed: header[0] == RecordIOWriter::kMagic Invalid RecordIO File

Stack trace returned 10 entries:
[bt] (0) /data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/libmxnet.so(+0x36bac2) [0x7fe0e5734ac2]
[bt] (1) /data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/libmxnet.so(+0x36d5f83) [0x7fe0e8a9ef83]
[bt] (2) /data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/libmxnet.so(MXRecordIOReaderReadRecord+0x2a) [0x7fe0e8266bba]
[bt] (3) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(ffi_call_unix64+0x4c) [0x7fe1048bce20]
[bt] (4) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(ffi_call+0x2eb) [0x7fe1048bc88b]
[bt] (5) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(_ctypes_callproc+0x49a) [0x7fe1048b701a]
[bt] (6) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(+0x9fcb) [0x7fe1048aafcb]
[bt] (7) /data1/zj/crnn.gluon/venv/bin/python(PyObject_Call+0x47) [0x5c1797]
[bt] (8) /data1/zj/crnn.gluon/venv/bin/python(PyEval_EvalFrameEx+0x4ec6) [0x53bba6]
[bt] (9) /data1/zj/crnn.gluon/venv/bin/python(PyEval_EvalFrameEx+0x4b04) [0x53b7e4]


Traceback (most recent call last):
  File "/data1/zj/crnn.gluon/dataset.py", line 148, in <module>
    for i, (img, label) in enumerate(data_loader):
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/gluon/data/dataloader.py", line 242, in __next__
    if self._rcvd_idx in self._data_buffer:
KeyboardInterrupt
Process Process-1:
Process Process-2:
Process Process-3:
Process Process-4:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/gluon/data/dataloader.py", line 166, in worker_loop
    idx, samples = key_queue.get()
  File "/usr/lib/python3.5/multiprocessing/queues.py", line 94, in get
    res = self._recv_bytes()
  File "/usr/lib/python3.5/multiprocessing/connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "/usr/lib/python3.5/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/usr/lib/python3.5/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
KeyboardInterrupt
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/gluon/data/dataloader.py", line 166, in worker_loop
    idx, samples = key_queue.get()
  File "/usr/lib/python3.5/multiprocessing/queues.py", line 93, in get
    with self._rlock:
  File "/usr/lib/python3.5/multiprocessing/synchronize.py", line 96, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/data1/zj/crnn.gluon/venv/lib/python3.5/site-packages/mxnet/gluon/data/dataloader.py", line 166, in worker_loop
    idx, samples = key_queue.get()
  File "/usr/lib/python3.5/multiprocessing/queues.py", line 93, in get
    with self._rlock:
  File "/usr/lib/python3.5/multiprocessing/synchronize.py", line 96, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt

Process finished with exit code 1

The code is:

import time

from mxnet.gluon.data import DataLoader
from mxnet.gluon.data.vision.datasets import ImageRecordDataset
from mxnet.gluon.data.vision.transforms import ToTensor

dataset = ImageRecordDataset('/data1/zj/data/crnn/txt/val.rec')
data_loader = DataLoader(dataset.transform_first(ToTensor()), 1, shuffle=True, num_workers=6)
print(len(dataset))
start = time.time()
for i, (img, label) in enumerate(data_loader):
    if (i + 1) % 10 == 0:
        print(time.time() - start)
        start = time.time()

@WenmuZhou This should properly fix it for all kinds of situations: #12554
We will officially include it soon, as it may affect multiple users.
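
For anyone stuck on an older version, the general idea behind a fork-safe record reader is to track which process opened the file and reopen it after a fork. Below is a rough, hypothetical sketch of that pattern, not the actual code merged in #12554.

import os
from mxnet import recordio

class ForkSafeRecordFile(object):
    """Illustrative only: reopen the record file lazily in each new process."""

    def __init__(self, idx_path, rec_path):
        self._idx_path = idx_path
        self._rec_path = rec_path
        self._pid = None
        self._record = None

    def _ensure_open(self):
        # Reopen if this process is not the one that opened the handle.
        if self._record is None or self._pid != os.getpid():
            self._record = recordio.MXIndexedRecordIO(
                self._idx_path, self._rec_path, 'r')
            self._pid = os.getpid()

    def read_idx(self, idx):
        self._ensure_open()
        return self._record.read_idx(idx)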

@zhreshold waiting for update

@jwfromm @WenmuZhou The fix proposed in PR #12554 has been merged.
Can you verify whether the issue is resolved and can be closed?

@jwfromm @WenmuZhou Verified that both of the issues mentioned are not reproducible on the current master branch. PR #12554 should have fixed those. I am closing this issue. Please feel free to reopen if closed in error or if you still encounter this issue. Thanks!

I have tested my code with mxnet-cu80 (1.5.0b20190221); this bug has been fixed, thanks.

FYI, this is incompatible with thread_pool=True in DataLoader (False is the default).
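
For context, thread_pool is the DataLoader option being referred to (available in newer MXNet releases); a minimal sketch with a placeholder path:

import mxnet as mx

# thread_pool=True switches from worker processes to a thread pool;
# the comment above reports that the record-file fix does not cover this mode.
dataset = mx.gluon.data.vision.datasets.ImageRecordDataset("/data/imagenet/val.rec")
loader = mx.gluon.data.DataLoader(dataset, batch_size=64, num_workers=2,
                                  thread_pool=True)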