ItsEcholot / ContainerNursery

Puts Docker Containers to sleep and wakes them back up when they're needed

Stability issues when running more containers?

accforgithubtest opened this issue · comments

commented

I have managed to get a basic setup working with ContainerNursery, and everything worked as expected in the initial basic tests, so I proceeded to add about a dozen containers.

Now I have started seeing some weird issues where I am stuck indefinitely on the "waking up container" page. The worst part is that once this happens, I am unable to reach the UI of any of the other application containers proxied via ContainerNursery (not just the one container where the issue first started). The only solution seems to be to restart ContainerNursery manually; then everything is back to normal.

Is anyone else facing this issue?

I have spent time manually verifying each and every application / container config, and my setup works fine for every application configured.
Yet this issue starts randomly and does not seem to resolve itself after a while either. The only solution is to restart ContainerNursery manually.

Though things are working, here are my docker-compose and config.yml samples, in case someone spots something wrong:

version: "3.4"

services:

  caddy:
    image: lucaslorentz/caddy-docker-proxy:latest
    container_name: caddy
    restart: always
    environment:
      - CADDY_INGRESS_NETWORKS=default
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /certs:/certs
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
  
  containernursery:
    image: ghcr.io/itsecholot/containernursery:latest
    container_name: containernursery
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ${pwd}/cn/:/usr/src/app/config
    ports:
      - 80
    labels:
      caddy: containernursery.local.host
      caddy.reverse_proxy: "containernursery"
      caddy.tls: "/certs/cert.pem /certs/certkey.pem"
    depends_on:
      - caddy

  dozzle:
    image: amir20/dozzle:latest
    container_name: dozzle
    restart: always
    environment:
      - DOZZLE_NO_ANALYTICS=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8080
    labels:
      caddy: logs.local.host
      caddy.reverse_proxy: "lazytainer"
      # caddy.reverse_proxy: "containernursery"
      caddy.tls: "/certs/cert.pem /certs/certkey.pem"
      lazytainer.group: dozzle

#...
#many other applications
#...

config.yml

proxyListeningPort: 80
proxyHosts:
  - domain:
      - logs.local.host
    containerName:
      - dozzle
    proxyHost: dozzle
    proxyPort: 8080
    timeoutSeconds: 180

#...
#many other applications
#...

I see this with specific services and it's not clear why. The container is started, but CN is stuck on "waking up".

Edit: I re-saved the config file in vim without changing anything; this forced a config reload and it's working again. I don't know why.

This can happen if your service returns an unexpected response to HEAD requests, which are used to check whether the service's webserver is ready to accept connections yet.

For more information on how readiness is defined, check the source:

if (res.status === 200 || (res.status >= 300 && res.status <= 399)) {
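If this happens again, a quick way to see what your service actually returns is to run the same kind of check by hand. The following is a minimal Node/TypeScript sketch, not ContainerNursery's actual code; the target URL is a placeholder for your own proxyHost and proxyPort, and it assumes Node 18+ for the built-in fetch:

// readiness-check.ts: rough reproduction of the readiness check quoted above,
// not ContainerNursery's actual implementation. Assumes Node 18+ (built-in fetch).
const target = 'http://dozzle:8080/'; // placeholder: use your proxyHost and proxyPort

async function checkReadiness(url: string): Promise<void> {
  // redirect: 'manual' so 3xx responses are reported as-is instead of being followed
  const res = await fetch(url, { method: 'HEAD', redirect: 'manual' });
  // Same acceptance rule as the quoted source: 200 or any 3xx counts as ready
  const ready = res.status === 200 || (res.status >= 300 && res.status <= 399);
  console.log(`HEAD ${url} -> ${res.status} (${ready ? 'ready' : 'not ready'})`);
}

checkReadiness(target).catch((err) => console.error('Request failed:', err));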

To further debug issues with services, it would be helpful to turn on the debug logging level.

Thanks for this information! I'll try debug mode if it happens again. I find it happens on Linkding and Shiori.

Confirmed that the service returns a 405, as expected, because it doesn't handle HEAD requests:

[screenshot showing the 405 response to the HEAD request]

Would you be open to either allowing 405 in the list of accepted responses, or perhaps making it a configurable option?

I can add a patch, if so.

I think I would rather add an option to switch the HTTP request type per service from HEAD to GET. This should fix most of these problems without relying on the service to produce the correct response code.

I will gladly accept a PR for this if you find the time, otherwise I'm sure I will get around to it sometime.
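As a rough illustration of the idea (a sketch, not the implementation that eventually landed), the readiness check could read the request method from the per-host config and default to HEAD; checkMethod below is a hypothetical option name used only for illustration:

// Sketch only: how a per-service request-method option could plug into the
// readiness check. 'checkMethod' is a hypothetical field name; see the
// ContainerNursery README for the real option.
interface ProxyHostConfig {
  proxyHost: string;
  proxyPort: number;
  checkMethod?: 'HEAD' | 'GET'; // hypothetical: defaults to HEAD when omitted
}

async function isReady(cfg: ProxyHostConfig): Promise<boolean> {
  const res = await fetch(`http://${cfg.proxyHost}:${cfg.proxyPort}/`, {
    method: cfg.checkMethod ?? 'HEAD',
    redirect: 'manual',
  });
  // Same status rule as before: 200 or any 3xx means the service is ready
  return res.status === 200 || (res.status >= 300 && res.status <= 399);
}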

This should be fixed in 1.8.0, thanks @Howard3 for the PR.