cleinc / bts

From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation

ChunkedEncodingError when training nyu dataset

hy-kiera opened this issue · comments

Hello, I cloned the latest master branch and followed the README.md in /pytorch.

I set up my environment to match the recommended one (PyTorch 1.2.0, Python 3.6.9, CUDA 10.0 on Ubuntu 18.04; other libraries are at their latest versions).

I couldn't wget the nyu_depth_v2_labeled.mat file (the server could not be reached), so I used a copy that a friend had downloaded a few years ago.

I successfully completed the Testing with NYU Depth V2 and Evaluation steps.

However, I got a ChunkedEncodingError at the Preparing for Training step when running:

python utils/download_from_gdrive.py 1AysroWpfISmm-yRFGBgFTrLy6FjQwvwP ../dataset/nyu_depth_v2/sync.zip

Error

Traceback (most recent call last):
  File "/home/jaram/Desktop/hy/bts/venv/lib/python3.6/site-packages/urllib3/response.py", line 437, in _error_catcher
    yield
  File "/home/jaram/Desktop/hy/bts/venv/lib/python3.6/site-packages/urllib3/response.py", line 767, in read_chunked
    chunk = self._handle_chunk(amt)
  File "/home/jaram/Desktop/hy/bts/venv/lib/python3.6/site-packages/urllib3/response.py", line 711, in _handle_chunk
    value = self._fp._safe_read(amt)
  File "/usr/lib/python3.6/http/client.py", line 624, in _safe_read
    raise IncompleteRead(b''.join(s), amt)
http.client.IncompleteRead: IncompleteRead(32761 bytes read, 7 more expected)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "~/workspace/bts/venv/lib/python3.6/site-packages/requests/models.py", line 751, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "~/workspace/bts/venv/lib/python3.6/site-packages/urllib3/response.py", line 572, in stream
    for line in self.read_chunked(amt, decode_content=decode_content):
  File "~/workspace/bts/venv/lib/python3.6/site-packages/urllib3/response.py", line 793, in read_chunked
    self._original_response.close()
  File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "~/workspace/bts/venv/lib/python3.6/site-packages/urllib3/response.py", line 455, in _error_catcher
    raise ProtocolError("Connection broken: %r" % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(32761 bytes read, 7 more expected)', IncompleteRead(32761 bytes read, 7 more expected))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "utils/download_from_gdrive.py", line 45, in <module>
    download_file_from_google_drive(file_id, destination)
  File "utils/download_from_gdrive.py", line 33, in download_file_from_google_drive
    save_response_content(response, destination)    
  File "utils/download_from_gdrive.py", line 18, in save_response_content
    for chunk in response.iter_content(CHUNK_SIZE):
  File "~/workspace/bts/venv/lib/python3.6/site-packages/requests/models.py", line 754, in generate
    raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(32761 bytes read, 7 more expected)', IncompleteRead(32761 bytes read, 7 more expected))

How can I solve this error, and how else can I obtain the NYU Depth V2 dataset?