spine directory and crontab empty after docker-compose down and restoring
kevburkett opened this issue
Using the cacti_single_install.yml docker-compose example: when bringing down/removing the containers with docker-compose down and restoring with docker-compose up -d, the /spine/ directory is empty and the original contents of /etc/crontab are lost. I don't see any errors in the logs, so I won't post them unless you need them.
Example:
After creating a new container with persistent volumes for /cacti and /var/lib/mysql.
Before snapshot: the /spine directory and /etc/crontab have content.
kevin@p-serv-01:~/docker/cacti-test$ docker exec -it cacti-test_cacti_1 /bin/bash
[root@b916fb282470 /]# ls /spine/
bin etc share
[root@b916fb282470 /]# cat /etc/crontab
*/5 * * * * apache php /cacti/poller.php > /dev/null 2>&1
[root@b916fb282470 /]# exit
exit
Down/remove the containers and restore:
kevin@p-serv-01:~/docker/cacti-test$ docker-compose down
Stopping cacti-test_cacti_1 ... done
Stopping cacti-test_db_1 ... done
Removing cacti-test_cacti_1 ... done
Removing cacti-test_db_1 ... done
Removing network cacti-test_default
kevin@p-serv-01:~/docker/cacti-test$ docker-compose up -d
Creating network "cacti-test_default" with the default driver
Creating cacti-test_db_1 ... done
Creating cacti-test_cacti_1 ... done
After restoring, the /spine directory is empty and /etc/crontab is back to the stock default:
kevin@p-serv-01:~/docker/cacti-test$ docker exec -it cacti-test_cacti_1 /bin/bash
[root@5a93e723b394 /]# ls /spine/
[root@5a93e723b394 /]# cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
[root@5a93e723b394 /]# exit
Here is the docker-compose.yml file I used for this test:
kevin@p-serv-01:~/docker/cacti-test$ cat docker-compose.yml
version: '2'
services:
  cacti:
    image: "smcline06/cacti"
    ports:
      - "8008:80"
      #- "443:443"
    environment:
      - DB_NAME=cacti_master
      - DB_USER=cactiuser
      - DB_PASS=cactipassword
      - DB_HOST=db
      - DB_PORT=3306
      - DB_ROOT_PASS=rootpassword
      - INITIALIZE_DB=1
      - TZ=America/Los_Angeles
    volumes:
      - cacti-data:/cacti
      - cacti-backups:/backups
    links:
      - db
  db:
    image: "percona:5.7.14"
    ports:
      - "3306:3306"
    command:
      - mysqld
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_unicode_ci
      - --max_connections=200
      - --max_heap_table_size=128M
      - --max_allowed_packet=32M
      - --tmp_table_size=128M
      - --join_buffer_size=128M
      - --innodb_buffer_pool_size=1G
      - --innodb_doublewrite=OFF
      - --innodb_flush_log_at_timeout=3
      - --innodb_read_io_threads=32
      - --innodb_write_io_threads=16
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
      - TZ=America/Los_Angeles
    volumes:
      - cacti-db:/var/lib/mysql
volumes:
  cacti-db:
  cacti-data:
  cacti-backups:
Thanks,
Kevin
Hey Kevin, thanks for the report, and that does not sound expected. Can you tell me what version of docker and docker-compose you are running?
It's as if the compose down command is deleting the container instead of just turning it off. If you still have the environment, after you docker-compose down, can you paste the output of docker ps -a | grep cacti?
I think it's normal that docker-compose down stops and deletes the container (see the docker-compose down documentation).
kevin@p-serv-01:~/docker/cacti$ docker-compose -v
docker-compose version 1.25.3, build unknown
Creating container:
kevin@p-serv-01:~/docker/cacti-test$ docker-compose up -d
Creating network "cacti-test_default" with the default driver
Creating volume "cacti-test_cacti-db" with default driver
Creating volume "cacti-test_cacti-data" with default driver
Creating volume "cacti-test_cacti-backups" with default driver
Creating cacti-test_db_1 ... done
Creating cacti-test_cacti_1 ... done
Running docker-compose down:
kevin@p-serv-01:~/docker/cacti-test$ docker-compose down
Stopping cacti-test_cacti_1 ... done
Stopping cacti-test_db_1 ... done
Removing cacti-test_cacti_1 ... done
Removing cacti-test_db_1 ... done
Removing network cacti-test_default
Containers fully removed:
kevin@p-serv-01:~/docker/cacti-test$ docker ps -a | grep cacti
kevin@p-serv-01:~/docker/cacti-test$
Ah, re-reading those commands, this is actually expected.
docker-compose down
Stops containers and removes containers, networks, volumes, and images created by up.
If your intent is to destroy the environment so that the next docker-compose up creates a new one from scratch, then docker-compose down -v should be used. This will remove the volumes and reinstall everything on the next up.
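For that destroy-and-rebuild case, the cycle from the project directory would look roughly like this:

# tear everything down, including the named volumes declared in the compose file
docker-compose down -v
# the next up re-creates the volumes and re-runs the full install
docker-compose up -d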
You can continue to use down and keep the SPINE/cron files by adding the following volumes to the main cacti section.
From
    volumes:
      - cacti-data:/cacti
      - cacti-backups:/backups
To (have not tested yet)
    volumes:
      - cacti-cron:/etc/cron
      - cacti-spine:/spine
      - cacti-data:/cacti
      - cacti-backups:/backups
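One caveat with that approach (also untested here): compose complains about named volumes that are used in a service but not declared at the top level, so the volumes: section at the bottom of the file would need the matching entries as well, something like:

volumes:
  cacti-db:
  cacti-data:
  cacti-backups:
  cacti-cron:   # assumed name, matches the service entry above
  cacti-spine:  # assumed name, matches the service entry above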
If the intent (I assume this is what you want) is to simply turn off the containers and retain their data, then docker-compose stop is the command you would want to use:
docker-compose stop
Stops running containers without removing them. They can be started again with docker-compose start.
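As a quick sketch of that workflow (same project directory as above):

# stop the containers but keep them, so /spine and /etc/crontab survive
docker-compose stop
# later, bring the same containers back with their state intact
docker-compose start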
In the meantime, I can add some extra checks to the start.sh bootup script to validate that SPINE and the cron entry are present. In the event they are not, re-compile and install them without going through the whole "install cacti" loop.
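Roughly the idea would be something like this (only a sketch; the exact paths and rebuild steps in start.sh may differ):

# Hypothetical check for start.sh: re-create spine/cron if they are missing.
# The spine binary path and rebuild step are assumptions, not the real script.
if [ ! -x /spine/bin/spine ]; then
    echo "spine is missing, re-compiling..."
    # re-run the spine build/install steps here
fi
if ! grep -q "poller.php" /etc/crontab; then
    echo "cacti poller cron entry missing, restoring..."
    echo "*/5 * * * * apache php /cacti/poller.php > /dev/null 2>&1" >> /etc/crontab
fi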
I actually tested earlier using these volumes and it works, but I'm not sure it's a good idea to be saving the entire /etc directory as a persistent volume.
    volumes:
      - cacti-data:/cacti
      - cacti-backups:/backups
      - cacti-spine:/spine
      - cacti-etc:/etc
Awesome, it would be great if the start.sh script could do that. It would save having to use these extra volumes. Thanks for the help with this!
This has been updated in the most recent PR. Unfortunately, I did need to add a new volume for /spine; this was required in case a user updates the instance using ./upgrade.sh, since statically assigning the built-in Spine version could otherwise cause version mismatches.
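In compose terms that presumably means one more named volume alongside the existing ones (the volume name here is illustrative; check the updated cacti_single_install.yml for the exact naming):

    volumes:
      - cacti-data:/cacti
      - cacti-spine:/spine     # new: keeps the compiled Spine across down/up and upgrade.sh runs
      - cacti-backups:/backups

volumes:
  cacti-db:
  cacti-data:
  cacti-spine:
  cacti-backups: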