docker-eval

Source code for an experiment evaluating to what extent Docker can make experimental results repeatable, regardless of platform and hardware.

Running the Benchmarks

  1. Install Docker.
  2. Build the opencv-py container with:
docker build --tag opencv-py:1.0 .
  3. If you are on macOS/Linux, create and run the Docker container with:
docker run -dit \
	--mount type=bind,source="$(pwd)/benchmarks",target=/benchmarks \
	--mount type=bind,source="$(pwd)/output",target=/output \
	--name opencv-py \
	opencv-py:1.0

If you are on Windows with PowerShell, use:

docker run -dit `
	--mount type=bind,source=${PWD}/benchmarks,target=/benchmarks `
	--mount type=bind,source=${PWD}/output,target=/output `
	--name opencv-py `
	opencv-py:1.0

This starts the container in the background (the -d flag) with an interactive pseudo-TTY attached (the -it flags). It also bind-mounts two folders:

  • The /benchmarks bind mount makes the benchmarks available in the container.
  • The /output bind mount provides a folder for any benchmark output, which can be compared across machines to confirm that the image-processing results are consistent (a sketch of such a check follows this list). The output is available both on the host machine and in the Docker container.
  4. Run the benchmarks with the command below (a sketch of what such a benchmark might look like also follows this list):
docker exec -it opencv-py python3 benchmark.py
  5. To open a shell inside the container, run:
docker exec -it opencv-py /bin/bash
  6. To stop and start the container, run:
docker stop opencv-py # stops the container
docker start opencv-py # starts the container
  7. To permanently remove the container, run:
docker rm --force opencv-py
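
The repository's benchmark.py is not reproduced here; the following is only a minimal sketch of what a benchmark of this kind might look like, assuming an OpenCV task timed with Python's standard library. The synthetic input image, the GaussianBlur/Canny pipeline, and the /output/edges.png file name are illustrative assumptions, not the repository's actual code.

# Hypothetical benchmark sketch (not the repository's benchmark.py).
import time
import cv2
import numpy as np

# Deterministic synthetic image so the run needs no external data.
rng = np.random.default_rng(seed=42)
img = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)

# Time a representative image-processing task.
start = time.perf_counter()
blurred = cv2.GaussianBlur(img, (5, 5), 1.5)
edges = cv2.Canny(blurred, 100, 200)
elapsed = time.perf_counter() - start

# Write the result to the bind-mounted /output folder so it is visible on the host.
cv2.imwrite("/output/edges.png", edges)
print(f"edge detection took {elapsed:.4f} s")

Timings will naturally vary with the hardware, but if the processing itself is repeatable, the written image should be byte-identical across machines.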
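To check that the output is consistent across platforms, one simple approach (a sketch, not part of this repository) is to hash every file in the host's output folder and compare the digests between machines:

# Hypothetical host-side check: hash every file under ./output.
import hashlib
from pathlib import Path

for path in sorted(Path("output").rglob("*")):
    if path.is_file():
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        print(f"{digest}  {path}")

If Docker makes the image-processing results repeatable, the printed digests should match on every platform that ran the benchmarks.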
