ttionya / vaultwarden-backup

Back up the vaultwarden (formerly known as bitwarden_rs) SQLite3/PostgreSQL/MySQL/MariaDB database with rclone. (Docker)


Perform an ad-hoc backup with command line

danielporto opened this issue

Thanks for the nice tool.
I noticed that it is possible to perform a restore on demand with some command line parameters. However, it is not possible to force a backup.

I would appreciate it if that was a feature too.
Currently, that could be done by overriding the entrypoint and running a very long command line.

Ideally, we could pass the source and destination mount points and some options either via envs or parameters (avoiding at least the hack of overriding the entrypoint).
Thanks!

Just to add to my issue, here is an example of how to run the backup command immediately:

    docker run --rm \
        --mount type=volume,source=${BITWARDEN_VOLUME},target=/bitwarden/data/ \
        --mount type=bind,source=${LOCAL_VOLUME},target=/bitwarden/bkp/ \
        --env RCLONE_REMOTE_NAME=local \
        --env RCLONE_CONFIG_LOCAL_TYPE=local \
        --env RCLONE_REMOTE_DIR=/bitwarden/bkp \
        -w /app \
        --entrypoint=/bin/bash \
        -it ttionya/vaultwarden-backup:${BITWARDENBACKUP_IMAGE_VERSION} backup.sh

BITWARDEN_VOLUME corresponds to the Docker Compose volume defined for the Bitwarden container.
LOCAL_VOLUME corresponds to the local path where the backup must be saved.
BITWARDENBACKUP_IMAGE_VERSION corresponds to the image tag on Docker Hub; this worked with 1.14.1.
Pay attention to these two variables:
--env RCLONE_REMOTE_NAME=local
--env RCLONE_CONFIG_LOCAL_TYPE=local
The first one defines the name of the remote share. This name is used, capitalized, in the config variable name.
Ex: RCLONE_REMOTE_NAME=myshare -> RCLONE_CONFIG_MYSHARE_TYPE
The type is "local" to indicate it is the local filesystem.
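For reference, rclone reads RCLONE_CONFIG_<NAME>_<OPTION> environment variables as if the remote were defined in rclone.conf, so the two variables above are roughly equivalent to this config section (a sketch based on rclone's documented environment variable handling, not something the image generates itself):

    # equivalent rclone.conf entry for RCLONE_REMOTE_NAME=local + RCLONE_CONFIG_LOCAL_TYPE=local
    [local]
    type = local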

Other envs can be added to customize the backup, for example:

    # --env ZIP_ENABLE=${BITWARDENBACKUP_ZIP_ENABLE} \
    # --env ZIP_PASSWORD=${BITWARDENBACKUP_ZIP_PASSWORD} \

@danielporto ,

Interesting idea.

The backup tool doesn't support forced backups because it uses cron to perform backup operations periodically, and we don't recommend triggering it with external actions. So there is intentionally no support for backup commands like docker run ... backup ...

Also, the backup program just outputs the backup file, such as a backup.zip file, and then lets rclone upload it to the remote storage according to the configuration. The backup program does not need to know where you are backing up to with rclone at all, be it S3, OneDrive, Google Drive, or local; it just performs rclone copy backup.zip REMOTE. And the rclone configuration file is also generated and maintained by rclone itself, so it is not appropriate for the backup program to handle the rclone configuration.
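Roughly, the flow described above looks like this (a simplified sketch; the actual file names and archiver options used by the tool may differ):

    # package the backup output (file names are illustrative)
    zip -P "${ZIP_PASSWORD}" backup.zip db.sqlite3
    # hand the file to rclone, which resolves the remote from its own configuration
    rclone copy backup.zip "${RCLONE_REMOTE_NAME}:${RCLONE_REMOTE_DIR}"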

In practice, of course, it is sometimes necessary to perform a backup immediately after starting the container. The configured cron may be a long time away from the current time, and you need to test whether the backups and message notifications are working properly. I prefer to modify CRON and specify a time a few minutes in the future, so that I can also test whether supercronic (a replacement for crond) is working properly.

The reason I didn't add an environment variable like BACKUP_IMMEDIATE is issue #53. Normally, one would make sure the backup program works the first time it is run. But executing the backup immediately after starting the container makes people think the backup tool is ready to run properly, and they ignore timing issues that might affect cron triggering. This is dangerous: it is likely that when you need to restore from a backup, you will find that the backup tool has not been working properly at all.
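For example (a sketch using the image's CRON variable; the schedule, container name, and other values are placeholders), start the container with the next run a few minutes ahead and watch the logs:

    # schedule a run shortly after startup to exercise cron + backup + notifications
    docker run -d --name vaultwarden-backup-test \
        --mount type=volume,source=${BITWARDEN_VOLUME},target=/bitwarden/data/ \
        --mount type=bind,source=${LOCAL_VOLUME},target=/bitwarden/bkp/ \
        --env RCLONE_REMOTE_NAME=local \
        --env RCLONE_CONFIG_LOCAL_TYPE=local \
        --env RCLONE_REMOTE_DIR=/bitwarden/bkp \
        --env CRON='45 3 * * *' \
        ttionya/vaultwarden-backup:${BITWARDENBACKUP_IMAGE_VERSION}
    docker logs -f vaultwarden-backup-test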

These are just a few reasons why I don't support forced backups, but of course, if you really need to perform backups manually, modifying the entrypoint is a viable method.

Everyone is welcome to discuss this issue.

@ttionya, these are good points.
Let me add a bit more.
One of the issues I have is the number of backups I need to manage and the different ways they are done. Of course, we want to ensure they are consistent. Thus, a specifically designed and maintained tool outweighs custom scripts, which tend to become outdated.

Nevertheless, a centralized backup tool such as Kopia (https://kopia.io) can provide a centralized view of all backups: verifying their consistency, spotting failed jobs, etc.
Instead of reinventing (most of) the wheel (particularly the backup management part), it would be nice to integrate this backup tool with it. However, since Kopia is responsible for the scheduling, a simpler interface would suffice for the integration, bypassing some of the current functionality.

Sure, those for whom a simple cron + uploader is enough are well served by vaultwarden-backup. My suggestion is just to make it more flexible, to allow integrations with other tools that can extend its functionality.

(It is not difficult to override the entrypoint, as I demonstrated; it just looks like a somewhat unnecessary hack that may eventually stop working due to lack of support.)

A "direct execution" option would definitely be appreciated; I use podman and systemd to manage my containers, and being able to use a systemd.timer for scheduling backups would be great.