sympy / planet-sympy

The planet SymPy sources

Home Page: https://planet.sympy.org/

How to test the docker container locally?

asmeurer opened this issue

It seems that when you "run" the docker container, it tries to deploy it. How do I just run it locally, without any deploying, to test something out?

Also, I don't think that "run" should deploy. There should be a deploy.sh script or something to deploy, and run should just run the server.

If you run it without a private key set, it will not deploy:

if [[ "${SSH_PRIVATE_KEY}" == "" ]]; then

You can then check that it generated the right output, or test anything else you like.
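For context, here is a minimal sketch of what that guard amounts to (the real logic lives in update.sh; the message and the deploy branch below are illustrative, not the actual code):

if [[ "${SSH_PRIVATE_KEY}" == "" ]]; then
    # no deploy key in the environment: build only, skip deployment
    echo "SSH_PRIVATE_KEY not set, skipping deploy"
else
    # key present: set up ~/.ssh and push the generated site
    # (hypothetical placeholder; the real update.sh inlines these steps)
    setup_ssh_and_push
fi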

When I do that it just runs and then exits. I was hoping it would start a local webserver, or at least copy the built resources somewhere.

There are several ways you can do that, for example:

docker build -t test/planet-sympy:v1 .
docker run -it test/planet-sympy:v1 bash

This will "log you" into the container and run bash. Then execute the script by hand:

./update.sh

This will generate the web pages in planet.sympy.org/, and you can then copy them out of the container.
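For example (a sketch: the --name makes the container addressable afterwards, and the in-container path is a guess, so adjust it to wherever update.sh actually writes its output):

docker run --name planet-test -it test/planet-sympy:v1 bash
# inside the container:
./update.sh
exit
# back on the host; the source path here is an assumption:
docker cp planet-test:/planet-sympy/planet.sympy.org ./planet.sympy.org
docker rm planet-test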

Feel free to submit a PR to start a webserver when some argument is passed to docker run. That's a good idea; it would make it easier to see what was generated.
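Something along these lines at the end of update.sh could do it (just a sketch, assuming the container forwards its arguments to the script):

# sketch: serve the generated site when "serve" is passed as the first argument
if [[ "$1" == "serve" ]]; then
    cd planet.sympy.org
    # the image is Python 2 (rawdog is Python 2 only); on Python 3 this
    # would be: python -m http.server 8000
    python -m SimpleHTTPServer 8000
fi

You would then run it with something like docker run -p 8000:8000 test/planet-sympy:v1 serve, assuming the image's entrypoint passes arguments through.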

So I take it the site itself is static? Typically when you see a docker container, that means the container itself serves the site.

OK, I got it working with the following commands:

docker run -p 0.0.0.0:8000:8000 -it test/planet-sympy:v1 bash
./update.sh
cd planet.sympy.org/
python -m SimpleHTTPServer 8000
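# note: SimpleHTTPServer is Python 2 only; on a Python 3 setup use: python -m http.server 8000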

Then I opened my browser to http://0.0.0.0:8000.

I don't know if there's a simpler way.
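Maybe something like this one-liner would also work (untested sketch, chaining the same steps so you skip the interactive shell):

docker run -p 8000:8000 -it test/planet-sympy:v1 bash -c "./update.sh && cd planet.sympy.org && python -m SimpleHTTPServer 8000"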

The site is static because it is served by GitHub Pages, which only hosts static content. We use docker so that all the dependencies are isolated and we can test it on Travis, as well as locally, without issues.

IMHO docker way overcomplicates things here. You should just be able to build the site from anywhere. Are the dependencies hard to install?

It's quite complicated to ensure that the environment is exactly the same on Travis and on Linode, which we used to use to actually run it every 20 minutes. I have since outsourced that to GitLab, so the only thing I still have to run from my own computer is cron. Everything else runs out there. Docker makes it simple: you just get the image and run it, and it will just work.

Without docker, you have to ensure the dependencies, the unix environment, all the ssh keys, etc. work on whatever machine actually runs the update every 20 minutes. We also want to test that things work when somebody submits a PR, so passing on Travis must imply passing in production. Docker pretty much guarantees that; without docker you have no guarantee.

If you have a better technical solution, let me know. This seems like the cleanest, simplest, and most robust option to me.

Using docker is fine (although tbh, Travis is already reproducible enough as-is). But it's a pain when that's the only way you can build the thing at all, and there aren't even instructions on how to do it locally.

What sort of differences would you anticipate between Docker and non-Docker for this? All it does is run some Python scripts that download and parse RSS feeds and convert them to a webpage.

Oh, to run it locally, just run the ./update.sh script.

You'll quickly find out it's not very reproducible, and more importantly, it's a pain to set up a production deployment. Docker makes this easy. But if you don't like Docker, just install the dependencies and run ./update.sh by hand.

IMPORTANT: if you run update.sh outside a docker container (https://github.com/sympy/planet-sympy/blob/2d89fdd2e04ba9ca3a0aaebda2f4c4feaff5123f/update.sh), be aware that it fiddles with the ssh keys in ~/.ssh and also removes some files using rm -rf, so you might not want to run it outside a container.

Feel free to send a PR to make it easier to run without docker, if it would make your life easier.

It would be nicer if there were a separate script, say build.sh, that just ran rawdog, with update.sh calling it. That way building and deploying are separated.
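Roughly like this (a sketch, assuming rawdog's usual -u/-w flags for fetching feeds and writing output; the actual config paths and deploy steps live in update.sh):

#!/bin/bash
# build.sh (sketch): just generate the static site; no ssh keys, no rm -rf, no push
set -e
rawdog -u -w    # -u fetches the feeds, -w writes the HTML output

update.sh would then reduce to calling ./build.sh followed by the existing deploy logic.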

Also, as a side note, we should add support for GitLab to doctr; then all this code could be replaced with one line. I opened an issue, drdoctr/doctr#189. I'm actually curious whether GitLab has its own GitHub Pages competitor and its own tool that could do this too.

Apparently rawdog depends on a libxml2 package that can't be installed from conda (I remember this from before). So there's a separate issue here: making rawdog's dependencies less annoying (it's also Python 2 only).

I split out the build script in #50.

The code that doctr would replace also tests the deployment against a testing repository, to ensure that everything actually works. I don't know if doctr can do that too.

Sure, you could run doctr deploy --deploy-repo planet-sympy/planet.sympy.org${REPO_SUFFIX}. I guess you would also need to use doctr configure --no-upload-key and manually use the same deploy key for both repos (or else have two deploy keys).

Doctr basically just manages the deploy key (keeping it encrypted and secret) and the syncing of the files, so you don't have to rewrite all the code you have in update.sh every time you want to auto-deploy something to gh-pages.

But right now it only supports Travis. Someone would need to add support for GitLab.