salekd / rpizero_smart_camera3

Smart security camera with Raspberry Pi Zero and OpenFaaS

Dockerfile fails to build

alexellis opened this issue · comments

The Dockerfile fails to build, please could you check it over?

You may also want to try a newer watchdog version.

Thanks,

Alex

update-alternatives: using /usr/bin/file-rename to provide /usr/bin/rename (rename) in auto mode
Setting up protobuf-compiler (2.6.1-1.3) ...
Processing triggers for libc-bin (2.23-0ubuntu10) ...
Processing triggers for systemd (229-4ubuntu21.15) ...
Processing triggers for ca-certificates (20170717~16.04.2) ...
Updating certificates in /etc/ssl/certs...
148 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Pulling watchdog binary from Github.
Cloning into 'models'...


object_detection/protos/calibration.proto:34:3: Expected "required", "optional", or "repeated".
object_detection/protos/calibration.proto:34:6: Expected field name.
object_detection/protos/calibration.proto:48:3: Expected "required", "optional", or "repeated".
object_detection/protos/calibration.proto:48:6: Expected field name.
The command '/bin/sh -c apt-get update && apt-get install -y     curl     git     protobuf-compiler     python-pip python-dev build-essential     python-tk     wget     && echo "Pulling watchdog binary from Github."     && curl -sSL https://github.com/openfaas/faas/releases/download/0.6.9/fwatchdog > /usr/bin/fwatchdog     && chmod +x /usr/bin/fwatchdog     && git clone https://github.com/tensorflow/models.git     && cd /models/research/     && protoc object_detection/protos/*.proto --python_out=.     && cd /     && wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz     && tar -zxvf ssd_mobilenet_v1_coco_11_06_2017.tar.gz' returned a non-zero code: 1

The error would appear to be coming from here:

  && git clone https://github.com/tensorflow/models.git \
    && cd /models/research/ \
    && protoc object_detection/protos/*.proto --python_out=.

I have fixed the Dockerfile and pinned specific versions, so hopefully it will stay reproducible going forward.

Please note that this image is based on Ubuntu. If you intend to run on a Raspberry Pi Zero, we will need to work on Dockerfile.rpizero.

What if we got it working on RPi3 B+ instead of RPi Zero?

I also wondered if we could try the newer OpenFaaS Python template, to see whether inference is faster when the model is preloaded in memory?

I made one more update: take a look at the Dockerfile, which is now based on a newer python3 template. You can compare the new and the old templates by deploying salekd/faas-mobilenet:1.1.0 and salekd/faas-mobilenet:1.0.0, respectively.

What exactly do you mean by preloading a model into memory? I do not think I am doing that, everything is implemented in the handle function.

As for running on Raspberry Pi, it should be straightforward with the following modifications:

The python3-flask template, available via faas-cli template store pull, can preload the model and inference engine, keeping them resident so that inferences run much, much quicker.
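To make "preloading" concrete, here is a minimal sketch of the pattern (load_model is a stand-in, not this repo's actual code). In a flask-based template the handler module stays resident between requests, so anything loaded at module level is paid for once, at startup, instead of on every call to handle:

```python
import time

# Stand-in for the expensive TensorFlow model load; in the real function
# this would read the ssd_mobilenet graph from disk.
def load_model():
    time.sleep(0.1)  # simulate a slow load
    return {"name": "ssd_mobilenet_v1_coco"}

# Preloading: module-level code runs once when the function process starts,
# so every request reuses the already-loaded model.
MODEL = load_model()

def handle(req):
    # Only the inference itself happens per request.
    return "processed %d bytes using %s" % (len(req), MODEL["name"])
```

With the classic forking watchdog, each invocation is a fresh process, so a load inside handle (or even at module level) is repeated every time; a long-running flask process is what makes the one-off load pay for itself.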

Alex

Hi Alex,

After a few hours of compiling, the image for Raspberry Pi 3 B+ is ready here: https://hub.docker.com/r/salekd/faas-mobilenet-rpi
It would be great if you could find time to test it!

For Raspberry Pi Zero, it is just a matter of running docker build for half a day. I have opened a new issue (#2) to track it until it is done.

What a nice idea to use flask! I guess it is no longer purely serverless, but it should serve its purpose well in this case. It really depends on how much time is spent loading the model versus how much is spent on the inference itself. I opened an issue for this one too: #3

Cheers,

David

We could also try echoing to stderr when we know we are running the inference, or returning the timing as an output parameter in the JSON? I'd be curious to know how long this portion takes in CPU time.
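A sketch of that idea (run_inference is a placeholder for the real MobileNet SSD call): wrap the inference in a timer, write the figure to stderr so it shows up in the function's logs, and also include it in the JSON body so callers can see it:

```python
import json
import sys
import time

def run_inference(image_bytes):
    # Placeholder for the real MobileNet SSD detection call.
    return [{"label": "person", "score": 0.92}]

def handle(req):
    start = time.perf_counter()
    detections = run_inference(req)
    elapsed = time.perf_counter() - start
    # stderr reaches the function logs without polluting the HTTP body...
    sys.stderr.write("inference took %.3f s\n" % elapsed)
    # ...and the caller can read the same figure from the JSON payload.
    return json.dumps({"detections": detections,
                       "inference_seconds": round(elapsed, 3)})
```

Comparing inference_seconds between the two deployed tags would separate model-load time from inference time directly.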