martin-gorner / tensorflow-mnist-tutorial

Sample code for "Tensorflow and deep learning, without a PhD" presentation and code lab.

Getting <urlopen error [Errno 110] Connection timed out>

paolorotolo opened this issue

paolorotolo@linux-jwyh:~/dev/tensorflow-codelab/tensorflow-mnist-tutorial> python3 mnist_1.0_softmax.py 
Traceback (most recent call last):
  File "/usr/lib64/python3.6/urllib/request.py", line 1318, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/usr/lib64/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/usr/lib64/python3.6/http/client.py", line 936, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/usr/lib64/python3.6/socket.py", line 722, in create_connection
    raise err
  File "/usr/lib64/python3.6/socket.py", line 713, in create_connection
    sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "mnist_1.0_softmax.py", line 39, in <module>
    mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
  File "/home/paolorotolo/.local/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py", line 211, in read_data_sets
    SOURCE_URL + TRAIN_IMAGES)
  File "/home/paolorotolo/.local/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py", line 208, in maybe_download
    temp_file_name, _ = urlretrieve_with_retry(source_url)
  File "/home/paolorotolo/.local/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py", line 165, in wrapped_fn
    return fn(*args, **kwargs)
  File "/home/paolorotolo/.local/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py", line 190, in urlretrieve_with_retry
    return urllib.request.urlretrieve(url, filename)
  File "/usr/lib64/python3.6/urllib/request.py", line 248, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/usr/lib64/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python3.6/urllib/request.py", line 526, in open
    response = self._open(req, data)
  File "/usr/lib64/python3.6/urllib/request.py", line 544, in _open
    '_open', req)
  File "/usr/lib64/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/usr/lib64/python3.6/urllib/request.py", line 1346, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "/usr/lib64/python3.6/urllib/request.py", line 1320, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 110] Connection timed out>

You probably did not have internet connectivity when you tested. On the first run, the script downloads the MNIST training data. I just re-tested and it works for me.

I also get the same error. I do have internet connectivity, unless something extra needs to be enabled for TensorFlow to reach the network.

It looks like http://yann.lecun.com is down right now, which is probably why it does not work.
Maybe it makes sense to have a mirror somewhere else in addition to the original source, so the download can fall back to another source in cases like this (see the rough sketch below).

Especially keeping in mind your (Martin's) talk at GCP-Next next week :)
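Just to illustrate the fallback idea, here is a minimal sketch. SOURCE_URL is the host the script currently uses; MIRROR_URL is a hypothetical placeholder, not a real mirror.

import urllib.request

SOURCE_URL = "http://yann.lecun.com/exdb/mnist/"  # current source used by the script
MIRROR_URL = "https://example.org/mnist/"         # hypothetical mirror, placeholder only

def fetch_with_fallback(filename, target_path):
    # Try the original host first, then the mirror; give up only if both fail.
    last_error = None
    for base in (SOURCE_URL, MIRROR_URL):
        try:
            urllib.request.urlretrieve(base + filename, target_path)
            return target_path
        except OSError as err:
            last_error = err
            print("download from", base, "failed:", err)
    raise RuntimeError("could not download " + filename) from last_error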

Yeah, it was down last week too; it's not an issue on our end.

The files need to download just once. The script does not redownload them if they are already there. But yes, something will need to be done if the data site remains down. I cannot get to it either...
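To check at a glance whether the download step will be skipped, something like this small sketch can help. It assumes the script is run from the repository root, so that "data" is the same folder passed to read_data_sets; the file names are the four archives listed later in this thread.

import os

DATA_DIR = "data"  # same folder that mnist_1.0_softmax.py passes to read_data_sets
FILES = [
    "train-images-idx3-ubyte.gz",
    "train-labels-idx1-ubyte.gz",
    "t10k-images-idx3-ubyte.gz",
    "t10k-labels-idx1-ubyte.gz",
]

# If all four archives are already present, the script should not need to
# reach yann.lecun.com at all.
missing = [f for f in FILES if not os.path.isfile(os.path.join(DATA_DIR, f))]
print("missing files:", missing if missing else "none - no download needed")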

It looks like http://yann.lecun.com is still down, so the script cannot get the dataset... Is there anywhere else we can get the training data?

I have the same issue here, do we have a mirror to download MNIST data?

Here is the content of my data folder from that project; you can download these files and put them inside your own data folder:

t10k-images-idx3-ubyte.gz
t10k-labels-idx1-ubyte.gz
train-images-idx3-ubyte.gz
train-labels-idx1-ubyte.gz

Thank you @kakawait. Does anyone have a checksum/signature reference to check these files against?
EDIT: they work on Linux but not on Windows. Any ideas?
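I don't have an authoritative checksum list either, but a short sketch like this can at least print SHA-256 digests for the four archives (assumed to live in data/) so they can be compared against another copy you trust:

import hashlib
import os

DATA_DIR = "data"
FILES = ["train-images-idx3-ubyte.gz", "train-labels-idx1-ubyte.gz",
         "t10k-images-idx3-ubyte.gz", "t10k-labels-idx1-ubyte.gz"]

def sha256(path, chunk_size=1 << 20):
    # Stream the file in chunks so the archives do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name in FILES:
    print(name, sha256(os.path.join(DATA_DIR, name)))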

@auserdude The files worked on Windows 8 for me. I downloaded the files and changed the data directory of the example code from https://www.tensorflow.org/get_started/summaries_and_tensorboard to the correct directory (using command line flags: python testfile.py --logdir C:\path\to\logdir, and running TensorBoard with tensorboard --logdir=log:C:\path\to\logdir).

@auserdude The files were generated/downloaded when launching that project on macOS, so I didn't test on other platforms. But I don't think it is platform dependent; it's just images with labels :)

Otherwise, if you're a bit afraid to download something from an unknown source (here, me), you can use the Wayback Machine: https://web.archive.org/web/20160828233817/http://yann.lecun.com/exdb/mnist/index.html

I tried it and it works; it may be a bit safer, but it is slower :)
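For reference, a minimal download sketch using that archived snapshot. It assumes the Wayback Machine serves the four .gz files under the same snapshot prefix as the archived index page, which is not guaranteed.

import os
import urllib.request

# Snapshot prefix taken from the archive.org link above; whether the .gz
# files are actually captured under it is an assumption.
MIRROR = ("https://web.archive.org/web/20160828233817/"
          "http://yann.lecun.com/exdb/mnist/")
FILES = ["train-images-idx3-ubyte.gz", "train-labels-idx1-ubyte.gz",
         "t10k-images-idx3-ubyte.gz", "t10k-labels-idx1-ubyte.gz"]

os.makedirs("data", exist_ok=True)
for name in FILES:
    target = os.path.join("data", name)
    if not os.path.isfile(target):
        print("downloading", name)
        urllib.request.urlretrieve(MIRROR + name, target)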

In addition, there is an issue at tensorflow/tensorflow#6742. Feel free to contribute or add a 👍

The site is back, so I am closing this.

kakawait's answer works for me. Thank you!