snibox / snibox

Self-hosted snippet manager

Home Page: https://snibox.github.io/


Snibox-docker won't mount to host volumes.

gkoerk opened this issue · comments

It seems the frontend doesn't like the folders I am trying to bind mount for static_files. I've tried chmod on them and even chown, but I don't know the UID:GID this process expects to own a common static_files directory. Any ideas? Snibox will only start with the docker volumes, and won't start up at all when I try to bind mount to host volumes.
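One hedged way to answer the UID:GID question yourself is to ask the running container which user its main process runs as, then hand the host directory to that owner. A sketch (the container name `snibox_backend_1` and the resulting IDs are illustrative):

```shell
# Ask a running container which user its main process runs as:
docker exec snibox_backend_1 id
# If it reports, say, uid=1000 gid=1000, give the host directory to that owner:
sudo chown -R 1000:1000 /share/appdata/snibox/html
```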

Can you please provide your Dockerfile together with your docker run command so we can understand what exactly you are mounting and where. Any Rails logs, either from the log folder or from docker logs, showing the error Rails generates when you run the application in a container with a bind-mounted volume, would also be extremely helpful.

Original, failing configuration.

Dockerfile:

FROM ruby:2.5.1-alpine3.7

RUN apk add --no-cache -t build-dependencies \
    build-base \
    postgresql-dev \
  && apk add --no-cache \
    git \
    tzdata \
    nodejs \
    yarn

WORKDIR /app

COPY Gemfile Gemfile.lock ./

ENV RAILS_ENV production
ENV RACK_ENV production
ENV NODE_ENV production

RUN gem install bundler && bundle install --deployment --without development test

COPY . ./

RUN SECRET_KEY_BASE=docker ./bin/rake assets:precompile && ./bin/yarn cache clean

Here is the swarm-compatible docker-compose.yml used to attempt to docker stack deploy ...:

version: '3'

services:
  frontend:
    image: snibox/nginx-puma:1.13.8
    ports:
      - "8003:80"
    volumes:
      - /share/appdata/snibox/html:/var/www/html
    depends_on:
      - backend
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:code.gkoerk.com
        - traefik.docker.network=traefik_public
        - traefik.port=80

  backend:
    image: gkoerk/snibox:latest
    command: sh -c "./bin/rails db:migrate RAILS_ENV=development && rm -rf tmp/pids && ./bin/rails s -p 3000 -b '0.0.0.0'"
    env_file: /share/appdata/config/snibox/snibox.env
# All the following ENVIRONMENT variables are set in the env_file specified above.
#    environment:
#      DB_NAME: "${DB_NAME}"
#      DB_USER: "${DB_USER}"
#      DB_PASS: "${DB_PASS}"
#      DB_HOST: "${DB_HOST}"
#      DB_PORT: "${DB_PORT}"
#      FORCE_SSL: "${FORCE_SSL}"
#      MAILGUN_SMTP_PORT: "${MAILGUN_SMTP_PORT}"
#      MAILGUN_SMTP_SERVER: "${MAILGUN_SMTP_SERVER}"
#      MAILGUN_SMTP_LOGIN: "${MAILGUN_SMTP_LOGIN}"
#      MAILGUN_SMTP_PASSWORD: "${MAILGUN_SMTP_PASSWORD}"
#      MAILGUN_API_KEY: "${MAILGUN_API_KEY}"
#      MAILGUN_DOMAIN: "${MAILGUN_DOMAIN}"
#      MAILGUN_PUBLIC_KEY: "${MAILGUN_PUBLIC_KEY}"
#      SECRET_KEY_BASE: "${SECRET_KEY_BASE}"
    volumes:
      - /share/appdata/snibox/app:/app/public
    networks:
      - internal    

  database:
    image: postgres:10.1-alpine
    volumes:
      - /share/runtime/snibox/db:/var/lib/postgresql/data
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.198.0/24

Temporary Workaround

I believe the problem is that there is no VOLUME specified. I was only able to (partially) work around it by using this Dockerfile:

FROM ruby:2.5.1-alpine3.7

RUN apk add --no-cache \
    git \
    build-base \
    tzdata \
    nodejs \
    yarn \
    sqlite-dev \
    bash \
    postgresql-dev

WORKDIR /app

ARG GIT_HASH
ENV GIT_HASH ${GIT_HASH:-2dad2bb572aa458760decde5320c382b3080a22e}

ENV RAILS_ENV development
ENV RACK_ENV development
ENV NODE_ENV production

RUN git clone https://github.com/snibox/snibox.git /app && cd /app && git reset --hard $GIT_HASH

COPY Gemfile ./
RUN gem install bundler && bundle install
COPY database.yml ./config/
COPY application.rb ./config/

VOLUME /app/db/database

RUN bin/rake assets:precompile
RUN bin/rails db:migrate

EXPOSE 3000

ENTRYPOINT ["bundle", "exec"]
CMD ["rails", "server", "-b", "0.0.0.0"]

And the docker-compose.yml:

version: '3'

services:
  snibox:
    image: gkoerk/snibox:latest
    volumes:
      - /share/appdata/snibox:/app/db/database
    networks:  
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:code.gkoerk.com
        - traefik.docker.network=traefik_public
        - traefik.port=3000

networks:
  traefik_public:
    external: true

While this configuration allows me to bind to the host and thus retain my data, after the initial deploy I must connect to the running docker container and execute bin/rails db:migrate.

Thanks for the details. I have found a few things that look suspicious to me:

  1. You are running 3 containers: snibox/nginx-puma (a.k.a. frontend), gkoerk/snibox:latest (a.k.a. backend) and postgres:10.1-alpine (a.k.a. database). I see that backend is configured as a dependency for frontend, but your database isn't required by anything, so the backend container won't be able to connect to the database, as they won't be linked.

  2. I have no idea what the frontend container is intended to do here, as I can see neither a Dockerfile nor any description of that image. In my understanding, nginx should only proxy HTTP requests to the Rails server and hence requires only a proper configuration file; presumably nothing should be mounted in that container apart from the config.

  3. Rails should be serving static files by default, so I'm not sure why you would need to mount anything to /app/public. What is the purpose of that?

The /app/public folder will contain all the JS files compiled by the following command from your Dockerfile.

RUN SECRET_KEY_BASE=docker ./bin/rake assets:precompile && ./bin/yarn cache clean

In my case /app/public is 4.8 MB, so once you bind an empty directory to /app/public, Rails will start throwing errors because it cannot find the files required to serve static pages (which is enabled).

If you want to use a PostgreSQL database, you need to link it to the backend container in your compose configuration.

And I believe that you don't need to mount any static files to your snibox container, as snibox doesn't actually save any data during its work apart from the database. Can you please elaborate on why you need to store static files outside the container?

The workaround you mentioned is a Dockerfile initially created by me to run snibox with an sqlite3 database. Your modifications create the sqlite3 database inside the container, so when you bind an empty volume to /app/db/database, Docker replaces the folder containing the existing database with an empty one, and you need to run docker exec -ti <container> rails db:migrate to create a new database file. Once it is created, you won't need to run the migration anymore.
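This shadowing behaviour can be observed with any image; a minimal sketch, assuming a Docker daemon is available (the paths are illustrative; /etc/apk is a directory alpine ships with files in it):

```shell
# The alpine image ships files inside /etc/apk:
docker run --rm alpine ls /etc/apk
# Bind-mounting an empty host directory over the same path hides those files:
mkdir -p /tmp/empty
docker run --rm -v /tmp/empty:/etc/apk alpine ls /etc/apk
# The second ls shows the (empty) host directory: the image's files are shadowed.
```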

However, if you require automatic database creation, you could replace the default CMD container parameter with something similar to your backend.command parameter:

snibox:
  command: sh -c "./bin/rails db:migrate RAILS_ENV=development && rm -rf tmp/pids && ./bin/rails s -p 3000 -b '0.0.0.0'"

That way you won't need to create the database manually when running the container.

That docker-compose file is also from you. It's linked from the docker instructions and is located here:
https://github.com/snibox/snibox-docker/blob/master/docker-compose.yml

By the way - this is a great piece of software. Thanks for your work on it!!

By the way,

If you want to use PostgreSQL database you need to link it to the backend container in your composer configuration.

This is not the case. In fact, linking containers is deprecated in docker-compose version 3 (and no longer recommended in version 2). Containers in the same stack share a namespace: they can communicate with one another by service name if they are on the same overlay network (even on ports that are exposed in the image but not mapped in the docker-compose file). That is how docker now recommends you connect containers.

I see that backend is configured as dependency for frontend, but your database isn't required by anything, so backend container won't be able to connect to the database, as they won't be linked.

Actually, depends_on is supported in a traditional docker-compose up -d, but not when used via docker stack deploy <stack-name> -c docker-compose.yml (swarm mode). It really doesn't do much even in traditional compose mode: it doesn't ensure a service is healthy, just that it has been started. There is currently no supported way in any version of docker to explicitly control either startup order or how long a given service waits for another before starting.

The workaround you mentioned is a Dockerfile initially created by me to run snibox with an sqlite3 database. Your modifications create the sqlite3 database inside the container, so when you bind an empty volume to /app/db/database, Docker replaces the folder containing the existing database with an empty one, and you need to run docker exec -ti <container> rails db:migrate to create a new database file. Once it is created, you won't need to run the migration anymore.

However, if you require automatic database creation, you could replace the default CMD container parameter with something similar to your backend.command parameter:

Ultimately I need either a pre-built docker image which initializes the DB if it doesn't exist on first start, or to create my own as you've suggested. I presume then it's not a problem when restarting the stack. How do you initially create and persist your DB data when rebooting the server?

I suppose sqlite3 is fine for now, since this will be used at first by only 1-3 people. But I really want to get a docker stack that can run both snibox and postgresql in one docker-compose.yml for Docker Swarm. Now that I know nothing needs to be bind mounted, I'll stop trying! That may be the solution to the whole problem.


Hm, first of all, please note that I'm not a developer of snibox, so I have no idea how this compose file is supposed to work. However, the second Dockerfile you mention is the one I'm using to host my own instance of snibox.

Containers in the same stack share a namespace

That depends on the network type. With an overlay network and swarm, I believe you're right.

How do you initially create and persist your DB data when rebooting the server?

An sqlite3 database is basically a file. Once created, it is bind-mounted into the container and everything works fine. It doesn't matter if you restart your container, as the file is stored on host storage. Basically, I was keen to remove the extra layer of a PostgreSQL host.

I suppose sqlite3 is fine for now since this will be be used at first by only 1-3 people.

Currently you cannot register more than one user in a single snibox installation; that is basically why I made an image with sqlite3. If you want to host three different databases from different snibox installations in the same place, then PostgreSQL makes sense.

That of course depends on your requirements, but in my understanding you only need to create a snibox container and a postgres container to make snibox work. Nginx is only required if you need SSL termination or some specific proxy configuration. Snibox itself doesn't require anything to be mounted into it; however, the PostgreSQL database files should be stored outside the container unless you want to lose all the data between container reboots/relocations.

Try completely removing frontend from your compose file, remove volumes from the backend service, add the port mapping 80:3000, and check if that works.
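After trimming the compose file down, a quick smoke test could look like the following sketch (the stack name "snibox" and the 15-second wait are illustrative assumptions):

```shell
# Redeploy the trimmed-down stack:
docker stack deploy -c docker-compose.yml snibox
# Give the services a moment to start, then check that rails answers on the mapped port:
sleep 15
curl -i http://localhost:80/
```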

All in all, I believe your problem is more docker-related than snibox-related, so we should probably move this discussion somewhere more appropriate.

I was able to run snibox with postgresql using the following configuration:

docker-compose.yml

version: '3'

services:
  snibox:
    build: .
    ports:
      - "80:3000"
    image: compose-snibox
    command: sh -c 'sleep 10 && bin/rails db:migrate && rm -rf tmp/pids && bin/rails s -p 3000 -b "0.0.0.0"'
    depends_on:
      - "database"
    environment:
      RAILS_ENV: production
      SECRET_KEY_BASE: <paste_your_key>
      DB_NAME: snibox
      DB_USER: <db_user>
      DB_PASS: <db_pass>
      DB_HOST: database

  database:
    image: postgres:11-alpine
    volumes:
      - /path/to/postgresql/db/data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: <db_pass>
      POSTGRES_USER: <db_user>
      POSTGRES_DB: snibox
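The <paste_your_key> placeholder above needs a long random value; SECRET_KEY_BASE is just a long random string. One way to generate one, assuming openssl is installed (running `rails secret` inside the app directory works as well):

```shell
# Generate a 128-hex-character secret suitable for SECRET_KEY_BASE:
openssl rand -hex 64
```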

Dockerfile for snibox

FROM ruby:2.5.1-alpine3.7

RUN apk add --no-cache -t build-dependencies \
    build-base \
    postgresql-dev \
    git \
    tzdata \
    nodejs \
    yarn

WORKDIR /app

ARG GIT_HASH
ENV GIT_HASH ${GIT_HASH:-2dad2bb572aa458760decde5320c382b3080a22e}

ENV RAILS_ENV production
ENV RACK_ENV production
ENV NODE_ENV production
ENV SECRET_KEY_BASE <paste_your_key>

RUN git clone https://github.com/snibox/snibox.git /app && cd /app && git reset --hard $GIT_HASH

RUN gem install bundler && bundle install --deployment --without development test

RUN bin/rake assets:precompile && bin/yarn cache clean

I also added sleep 10 to the snibox CMD so that postgres has enough time to start before snibox runs rails db:migrate. That's rather a dirty hack and should probably be wrapped in a proper startup script, but it works.
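The sleep could be replaced with an actual readiness check. A sketch of such a wrapper script, assuming the postgres client tools are installed in the image (e.g. via `apk add postgresql-client`, which provides pg_isready) and DB_HOST is set as in the compose file above:

```shell
#!/bin/sh
# Hypothetical wrapper: wait until postgres accepts connections, then migrate and boot rails.
set -e
until pg_isready -h "${DB_HOST:-database}" -q; do
  echo "waiting for postgres..."
  sleep 1
done
bin/rails db:migrate
rm -rf tmp/pids
exec bin/rails s -p 3000 -b 0.0.0.0
```

This would replace the compose `command:` line (or become the image's CMD) so the container no longer depends on a fixed delay.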

I get the following error when building the image on two different servers. Any chance you have the image you were able to generate on Docker Hub? :-)

Step 12/12 : RUN bin/rake assets:precompile && bin/yarn cache clean
 ---> Running in d2958a58a740
yarn install v1.3.2
[1/4] Resolving packages...
[2/4] Fetching packages...
info fsevents@1.2.3: The platform "linux" is incompatible with this module.
info "fsevents@1.2.3" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning "@rails/webpacker > postcss-cssnext@3.1.0" has unmet peer dependency "caniuse-lite@^1.0.30000697".
warning "@rails/webpacker > webpack-assets-manifest@3.0.1" has unmet peer dependency "webpack-sources@^1.0".
warning " > vue-loader@15.0.7" has unmet peer dependency "css-loader@*".
warning " > webpack-dev-server@3.1.4" has unmet peer dependency "webpack@^4.0.0-beta.1".
warning "webpack-dev-server > webpack-dev-middleware@3.1.3" has unmet peer dependency "webpack@^4.0.0".
[4/4] Building fresh packages...
Done in 18.39s.

I also see those warning messages, but since they are not errors you can ignore them. Snibox works fine after that.

Oh - in that case something is still wrong. I get the following:

(screenshot of the error from the original issue, not reproduced here)

I've updated the snibox-sqlite image in order to add automatic DB creation, so you can use it instead.