scrapinghub / shub

Scrapinghub Command Line Client

Home Page: https://shub.readthedocs.io/


Outdated docker client?

hermit-crab opened this issue · comments

Good day. The requirements define a docker-py dependency which, as far as I can tell, is an outdated version of the official Docker client from https://github.com/docker/docker-py/. The project seems to have migrated to https://pypi.python.org/pypi/docker/, while https://pypi.python.org/pypi/docker-py/ still contains a frozen old version. I found this by stumbling over a .dockerignore-related bug where paths containing a leading slash would be ignored (or perhaps they were treated as absolute system paths, as opposed to how .gitignore treats them): docker/docker-py#1674.
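For reference, the fix on the shub side would be roughly a one-line swap in requirements.txt; the exact version pins below are assumptions for illustration, not taken from the repo:

```diff
 # requirements.txt (sketch)
-docker-py>=1.10.6   # frozen legacy package on PyPI
+docker>=2.0         # renamed, actively maintained package
```

The import name also changed from `docker` (old package, modules like `docker.Client`) to the new SDK's `docker.APIClient`/`docker.from_env()`, so the swap is not purely a requirements change.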

Good day, you're right, we have to update it. We had a related discussion a long time ago; I believe it should now be safe to switch to the new docker client as part of v3.0.

I can only +1 this.

It would show great stewardship by scrapinghub to update and QA docker-py. By now it's been practically 3 years since version 1.10.6:
https://github.com/docker/docker-py/commits/1.10.6

As a workaround we are now trying to fall back to docker client version 1.5, which should still be compatible with the even more ancient API version 1.17 implied by this failing shub image deploy:

requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://docker:2375/v1.17/containers/create
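A related knob worth knowing about: both docker-py and the newer docker SDK honor the DOCKER_API_VERSION environment variable, which pins the client to a specific API version instead of its compiled-in default. A minimal sketch, assuming your daemon still accepts the pinned version:

```shell
# Pin the Docker client to a specific HTTP API version rather than letting
# it default to one the daemon no longer serves (v1.17 in the error above).
export DOCKER_API_VERSION=1.17
echo "Client pinned to API v${DOCKER_API_VERSION}"
# shub image deploy   # then retry the deploy against a live daemon
```

Note this only helps if the daemon actually still supports the pinned version; recent daemons have dropped the oldest API versions entirely.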

It has been a bit of a rabbit hole so far, but I'll update this ticket accordingly.

Apropos docker 1.5: this version is no longer even listed on the official Docker API version matrix:
https://docs.docker.com/develop/sdk/#api-version-matrix

If I update requirements.txt from docker-py to the latest docker client, how confident are you in your tests and testing infrastructure? Should they pass, could we merge into master and then push out a fresh release?

Just a quick update for anyone having related issues in CI/CD contexts.

Validated workaround for the outdated docker-py in shub:

  1. Manually pull the custom-built Scrapinghub Docker image into the local Docker context ("docker pull ...")
  2. This makes the image available to "shub deploy ..." (circumventing the need for docker-py to hit the Docker HTTP API)
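The two steps above can be sketched as a small helper; the registry and image name are placeholders, not taken from this thread:

```shell
# Workaround sketch: pre-pull the custom-built image so that a subsequent
# "shub deploy" finds it in the local Docker context, sidestepping the
# API calls that the outdated docker-py client gets wrong.
workaround_deploy() {
    image="$1"
    docker pull "$image"   # step 1: make the image available locally
    shub deploy            # step 2: deploy, with the image already present
}
# usage (hypothetical image): workaround_deploy registry.example.com/myproject:latest
```

This keeps docker-py out of the image-transfer path entirely, which is why it works regardless of the pinned client version.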

This has been fixed in the meantime, right? Can be closed?