borgbackup / borg

Deduplicating archiver with compression and authenticated encryption.

Home Page: https://www.borgbackup.org/

document pull-like operation

ThomasWaldmann opened this issue · comments

commented

this is a FAQ (asked by people who have firewalls or want it for other reasons) and some people are evaluating setups with ssh -R (see some posts in #36).

this issue is to collect such setups and if evaluated successfully, add it to the documentation.

note: the debian/ubuntu package description says borg only supports push, maybe that can be removed after this ticket is closed.

so, if you successfully run a pull-like setup, the best thing you can do is to make a pull request that closes this ticket.


💰 there is a bounty for this

Note: to collect the bounty you need to run a reliable pull-like setup, do a pull request for our documentation, documenting the pull-related parts of the setup.

commented

A pull setup that does not involve ssh is to just mount the source filesystems on the machine that runs borg.

The following is for the use case where the normal push approach is problematic because of firewalls etc.

From axion on irc (slightly edited and simplified, so all errors are likely mine):

repo=ssh://${USER}@localhost:${PULL_PORT}${REPO_PATH}/${host}
ssh -R ${PULL_PORT}:localhost:22 ${host}         \
  BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes \
  borg create ${repo}::${archive} /some/path

This tunnels an ssh connection through another ssh connection, so it does have some additional overhead.

Another way would be to use BORG_RSH and a pair of socat instances to avoid one layer of ssh encryption.

commented

@textshell I don't think that BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes should be in there permanently, right?

Also, the repo=...{REPO_PATH}/${host} can be done that way, but is unrelated to pull mode. Also, REPO_PATH there is rather the path that has the repos as subdirs, not the path of the repository itself.

@UNKNOWN_UNENCRYPTED it depends on whether PULL_PORT is always the same. If that can be arranged, then yes, it should not be in there.

@repo yes, that can be simplified. I only did limited editing of axion's pastebin, but I didn't want this to get lost again.

I think the BORG_RSH + socat way would be nicer anyway (no ssh in ssh, no dependency on sshd running on the backup server, etc.), but it needs a little more complex bash script.

While having documentation for this workaround is great, wouldn't it be better to add this functionality to borg itself? This kind of syntax would be awesome:

$ borg create /path/to/repo::example.com-now user@example.com:/

I agree with @sudoman; it would be useful to know whether there are architectural reasons this would be difficult. It feels like this would dramatically increase the number of scenarios for which Borg is a recommendable solution.

We first need to agree on a plan. For example, I don't like overloading the directory argument of borg create with additional magic. I personally tend toward a new subcommand.

(Note: the server is where the repo is, the client is the remote system where the data to be backed up is)

Also, as this is not the main use case for borg, I think the design should minimize the changes needed. Thus I think the stdin, stdout and stderr of the ssh session should be used for the UI of the borg client on the remote system, not for data transfer, so that all interaction still works as expected. The repository communication would then need to be tunneled with an additional unix socket forward. I'm not sure what to do about borg serve's stderr. Maybe it's ok to just (implicitly) splice that in on the server side.

One way to implement borg pull would be to create a unix socket and listen on it, then ssh to the system to be backed up and run borg create with a special option telling it which unix socket to connect to. The server would then wait for a connection on the unix socket, dup2 it to stdin and stdout, and invoke the RepositoryServer.

Still open is how key management is even supposed to work in this scenario. Maybe mandate a keyfile on the client in the default location?
We also need to ensure that there is a good way to secure this with the usual forced command stuff.

This would need minimal changes to the main borg code:

  • RemoteRepository would need to get an option to connect to a unix domain socket via a new option in create.
  • A new pull command needs to be implemented that does the initial setup and then chains into borg serve.

Of course this still requires a borg executable on the client.

So it doesn't need any architectural changes, but it is a lot of fiddling with external ssh process interaction and the os module. In the end I think it's a task that is doable for anyone with sufficient motivation and decent python skills.

It's pretty much what I do over at borgcube. I don't mean that as advertisement (wouldn't make any sense for a project that doesn't really work yet, does it?), rather, if someone wants to implement it in their system they can draw inspiration from there - the basics (pulling an archive from a client) work rather well.

You can also gauge how many changes it would likely need; if it is even more tightly integrated into Borg itself, it would probably mean many more changes than those presented in borgcube.

Which is the reason why I chose to put that into a separate project; however, if someone wants to work toward integrating it into Borg itself I won't interfere, of course, since I'm obviously inherently biased here.

I'm reading the ~solution posted by @textshell again, and I'm realizing... this reintroduces to the pull model one of the issues (well, an issue depending on your setup) that I was hoping to avoid from the push model.
Consider a scenario where I have a backup machine running on say, my local LAN. I have a lot of backups on it. I don't want the machines I'm backing up from some remote VM hosting server to have access to this machine... the trust is in the backup machine accessing the other machines, not the reverse.

In the scenario being described, it sounds like both machines will have to have access to each other.

@cwebber The client only has access to the borg repository, not the whole backup server, in the scenario I posted (but that is comparable to the access it would have in push mode with a correctly set up forced command and assuming sshd is not buggy). At least if RepositoryServer is started with restrictions to only allow access to the right borg repository, and only via the socket that borg pull creates. (So no direct network or ssh access is needed.)
I think doing the chunking and deduplication (and encryption) locally on the client is one of the core parts of borg. On the other hand, it would be possible to have a pull script that creates an sshfs tunnel and does those on the server side. But I don't think that really needs support in borg; that's just an easy script to write, but it loses quite a bit of borg's performance.

FWIW, I have made a small hack which works with socat, thus saving the SSH-in-SSH overhead and obliterating the need for the remote machine to have an account on the local machine. Using --append-only and --restrict-to-path, this should be as safe as Borg is, but I’d like any feedback on that.

First, we create socat-wrap.sh, which we will use as BORG_RSH:

#!/bin/bash
exec socat STDIO TCP-CONNECT:localhost:12345

Locally, we run socat to offer the borg service:

socat TCP-LISTEN:12345,fork \
    "EXEC:borg serve --append-only --restrict-to-path $PATH_TO_REPOSITORIES --umask 077"

(omit the ,fork if you want to allow only exactly one borg command to be run)

Now we invoke borg on the remote using ssh, forwarding the port:

ssh -R 12345:localhost:12345 sourcehost \
    BORG_RSH="/home/horazont/socat-wrap.sh" \
    borg init -e none ssh://foo/$PATH_TO_REPOSITORIES/some_repository

foo is completely arbitrary; one could substitute anything here, because the socat-wrap.sh ignores its arguments.


Of course, it’s also possible to do the same with UNIX sockets, providing more isolation.

socat-wrap.sh:

#!/bin/bash
exec socat STDIO UNIX-CONNECT:/home/horazont/borg-remote.sock

The local listener:

socat UNIX-LISTEN:/home/horazont/borg-local.sock,fork \
    "EXEC:borg serve --append-only --restrict-to-path $PATH_TO_REPOSITORIES --umask 077"

And the invocation:

ssh -R /home/horazont/borg-local.sock:/home/horazont/borg-remote.sock sourcehost \
    BORG_RSH="/home/horazont/socat-wrap.sh" \
    borg init -e none ssh://foo/$PATH_TO_REPOSITORIES/some_repository

ssh is friendly enough to automatically set very strict permissions on the socket on the remote side.

commented

@horazont looks good. did you compare performance ssh vs. socat?

Is the socat-wrap.sh needed or could the socat command be used directly in BORG_RSH?

@ThomasWaldmann socat doesn’t like the additional arguments borg is attempting to add. Not sure how to circumvent that.

re performance, I haven't checked. My main motivation for finding this solution was that I didn't want to set up an account for the remote to SSH into (even though it should be pretty safe with authorized_keys command restrictions). The appeal is that it works out of the box, no configuration on either side needed (the socat-wrap.sh can be scp'd on demand).

commented

ah, of course. yeah, then such a script is the easiest way.

if you have that setup working ok, could you add a section to our docs about it and do a PR against 1.0-maint?

i set up the socat-based solution from @horazont mentioned above, running nightly backups from various locations. i noticed that with larger backup targets, after a couple of days, i reproducibly get this error:

Traceback (most recent call last):
 File "/opt/lib/python3.5/site-packages/borg/repository.py", line 72, in __del__
   self.close()
 File "/opt/lib/python3.5/site-packages/borg/repository.py", line 192, in close
   self.lock.release()
 File "/opt/lib/python3.5/site-packages/borg/locking.py", line 298, in release
   self._roster.modify(EXCLUSIVE, REMOVE)
 File "/opt/lib/python3.5/site-packages/borg/locking.py", line 216, in modify
   elements.remove(self.id)
KeyError: (('storage', 31273, 0),)
$LOG ERROR Remote: Received SIGTERM.

after this happens once, the lockfile not having been deleted properly prevents further backups...

i guess it has to do with one of the connections being closed prematurely?

edit: this error is reported on the server that "pulls" the backup from the client (i can only tell by the /opt/lib/... location - this setup is pretty confusing to debug).

Maybe socat times out?

-T<timeout>
    Total inactivity timeout: when socat is already in the transfer loop and nothing has happened for <timeout> [timeval] seconds (no data arrived, no interrupt occurred...) then it terminates. Useful with protocols like UDP that cannot transfer EOF. 

Not sure if that's on by default.
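If a timeout is suspected, it can be set explicitly and combined with logging on the listener (this just combines the -T, -d and -lf options mentioned in this thread; the timeout value is an example):

```shell
# 10-minute inactivity timeout, verbose logging to a file
socat -T 600 -d -d -lf /tmp/socat.log TCP-LISTEN:12345,fork \
    "EXEC:borg serve --append-only --restrict-to-path $PATH_TO_REPOSITORIES --umask 077"
```
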

hm, i just realized i used kill $SOCAT_PID after the ssh command finished (i'm doing borg prune right after the backup finishes) - i replaced that with wait $SOCAT_PID now, i guess that should fix it...

thanks for the timeout hint, i now enabled socat logging with -lf and -d -d. if it happens again, we'll know for sure if there was a timeout!

it happened again :( no timeout though. here is the borg output:

------------------------------------------------------------------------------
Archive name: home-2017-01-23 05:50:53.078703
Archive fingerprint: 65a8b026c5801c11411d9bc63354517d71bb95bc41b3a8f86838a19c64d6170f
Time (start): Mon, 2017-01-23 05:51:00
Time (end):   Mon, 2017-01-23 05:51:38
Duration: 37.94 seconds
Number of files: 62977
------------------------------------------------------------------------------
                      Original size      Compressed size    Deduplicated size
This archive:                8.28 GB              8.28 GB             31.97 MB
All archives:               98.76 GB             98.76 GB              8.12 GB

                      Unique chunks         Total chunks
Chunk index:                   63962               780503
------------------------------------------------------------------------------
Exception ignored in: <bound method Repository.__del__ of <Repository /share/.../back>>
Traceback (most recent call last):
 File "/opt/lib/python3.5/site-packages/borg/repository.py", line 72, in __del__
   self.close()
 File "/opt/lib/python3.5/site-packages/borg/repository.py", line 192, in close
   self.lock.release()
 File "/opt/lib/python3.5/site-packages/borg/locking.py", line 298, in release
   self._roster.modify(EXCLUSIVE, REMOVE)
 File "/opt/lib/python3.5/site-packages/borg/locking.py", line 216, in modify
   elements.remove(self.id)
KeyError: (('storage', 32244, 0),)
$LOG ERROR Remote: Received SIGTERM.
Failed to create/acquire the lock /share/.../back/lock.exclusive (timeout).

and here's the socat log for that run:

2017/01/23 05:50:50 socat[32239] N listening on AF=2 0.0.0.0:12345
2017/01/23 05:50:53 socat[32239] N accepting connection from AF=2 127.0.0.1:42657 on AF=2 127.0.0.1:12345
2017/01/23 05:50:53 socat[32239] N forking off child, using socket for reading and writing
2017/01/23 05:50:53 socat[32239] N forked off child process 32244
2017/01/23 05:50:53 socat[32239] N forked off child process 32244
2017/01/23 05:50:53 socat[32239] N starting data transfer loop with FDs [7,7] and [6,6]
2017/01/23 05:50:53 socat[32244] N execvp'ing "borg"
2017/01/23 05:51:39 socat[32239] N socket 1 (fd 7) is at EOF
2017/01/23 05:51:39 socat[32239] N exiting with status 0

all looks normal.. but all the days before, when stuff was working fine, there was an additional few lines in the log:

2017/01/22 05:50:49 socat[31968] N listening on AF=2 0.0.0.0:12345
2017/01/22 05:50:52 socat[31968] N accepting connection from AF=2 127.0.0.1:37109 on AF=2 127.0.0.1:12345
2017/01/22 05:50:52 socat[31968] N forking off child, using socket for reading and writing
2017/01/22 05:50:52 socat[31968] N forked off child process 31975
2017/01/22 05:50:52 socat[31968] N forked off child process 31975
2017/01/22 05:50:52 socat[31968] N starting data transfer loop with FDs [7,7] and [6,6]
2017/01/22 05:50:52 socat[31975] N execvp'ing "borg"
2017/01/22 05:51:34 socat[31968] N socket 1 (fd 7) is at EOF
2017/01/22 05:51:34 socat[31968] N childdied(): handling signal 17
2017/01/22 05:51:34 socat[31968] N socket 1 (fd 7) is at EOF
2017/01/22 05:51:34 socat[31968] N socket 2 (fd 6) is at EOF
2017/01/22 05:51:34 socat[31968] N exiting with status 0

note the childdied() : handling signal 17

i'm at a loss.. what's going on here?

The "childdied(): handling signal 17" likely refers to socat itself having received that signal (it's SIGCHLD), meaning that the "borg serve" process started by socat exited.

I'm not really sure what happens here.

Just a hunch: when ending the connection, the client closes (implies EOF) the pipes to SSH, which communicates that to sshd, which then closes the pipes to the borg serve process, which notices that, closes the repository and exits; this is noticed by sshd and communicated to SSH, which returns the exit code reported by sshd, which is picked up by the Borg client.

Perhaps this is different with socat. Perhaps socat does not wait for the child process, but just signals it (SIGTERM) when it gets an EOF on its input socket. Then this would be a race between the borg serve process and socat - whoever first gets some CPU time wins. If socat wins, it kills borg; if borg wins, it exits normally.

But, just a hunch.

@horazont @ThomasWaldmann

You can invoke bash directly, instead of socat-wrap.sh:

ssh -R 12345:localhost:12345 sourcehost \
    BORG_RSH="'bash -c \"exec socat STDIO TCP-CONNECT:localhost:12345\"'" \
    borg init -e none ssh://foo/$PATH_TO_REPOSITORIES/some_repository

i am still having the above symptoms, so i am now trying the unix domain socket version @horazont added above. i did notice that - as far as i can tell - the order of the sockets is reversed in the ssh "-R" option - right? also, ssh leaves the socket file on the backup source host (why?), so i had to add an ssh command before the borg call with the forward, just to remove the lingering socket file.. i'll report in a month or so if that fixes my issue ;)
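For reference, ssh -R takes the remote listening address first and the local target second, so with the socket names from the earlier comment the remote side should listen on borg-remote.sock (the one socat-wrap.sh connects to). Removing a lingering socket first avoids the stale-file problem. A sketch, not verified on all setups:

```shell
# Clean up a possibly lingering socket from a previous run on the source host
ssh sourcehost rm -f /home/horazont/borg-remote.sock
# -R <listen-socket-on-remote>:<connect-to-socket-on-local>
ssh -R /home/horazont/borg-remote.sock:/home/horazont/borg-local.sock sourcehost \
    BORG_RSH="/home/horazont/socat-wrap.sh" \
    borg init -e none ssh://foo/$PATH_TO_REPOSITORIES/some_repository
```
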

commented

@horazont: great solutions, thanks! works so far.
have you found a good way to run your solution with encrypted backups? am i right that the remote host would need to know the passphrase?

commented

When using the remote socket I always had the problem that the remote socket file doesn't get removed automatically, and the next connection cannot create a new socket for forwarding. The behaviour can be changed in the sshd_config of the server by setting the StreamLocalBindUnlink option to yes: https://man.openbsd.org/sshd_config#StreamLocalBindUnlink
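On the machine that receives the -R forward, that amounts to one sshd_config line followed by a reload (the service name varies by distro, e.g. ssh on Debian):

```shell
# enable automatic removal of stale forwarded sockets, then reload sshd
echo 'StreamLocalBindUnlink yes' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl reload sshd
```
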

commented

is this issue still just to document "workarounds"?

would love to see this as a feature, perhaps like @sudoman suggested above (#900 (comment)) - or similar.

i am currently testing borg as a replacement candidate for rdiff-backup.

but i just had to give up trying to do a root partition backup of a small (~1.5GB) linux server installation over sshfs. the speed of sshfs over a "high" latency connection (25ms, 100Mbit) is just too slow. the transfer rate went down to a few kB/s for directories with lots of small files (no cpu or disk-io bottlenecks) 😞

commented

@zyro23 maybe one can't expect good performance for lots of little files on a network filesystem. but if you have reason to believe that sshfs is unusually slow for this and should be faster, file a bug at the sshfs project.

the best and fastest way to run borg is client/server, when borg on the client reads source files locally and then talks via ssh using borg's rpc protocol to a borg process on the backup server, which manages the repo.

commented

yes, that's my current understanding. i commented here because we are doing pull-style backups of "external" hosts to a machine within an internal network (firewalled to allow only outbound connections). thanks for your feedback!

Hello,
based on @horazont's scripts, I wrote https://github.com/Alex131089/bbbs.
While I'm really enthusiastic about @marcpope's BBS announced in #2960 (comment) (which seems to use this mode), I needed a solution before my server eventually crashes, so I wrote this.
If it can be useful to someone else.

what's the latest and greatest here? I see @marcpope bbbs wrapper and @enkore's borgcube (although that seems to do much more than just pull...) both seem to be based on the same socat hack... how brittle is that setup?

Actually borgcube's approach was to implement the Borg wire protocol in a reverse proxy of sorts; the proxy transparently re-encrypted data uploaded (so the client only got the ID key) and transparently created a fake manifest containing no archives (which worked because the server pushed a Borg cache matching that state exactly to the client) and meticulously verified that the client was only doing what it was supposed to be doing (creating an archive, possibly creating checkpoint archives and deleting those).

That's how borgcube managed to tick all these boxes:

  • Lightweight, untrusted clients:
    • They can't manipulate the repository
    • They don't know where the repository is
    • They don't get encryption keys
    • They can't read backups (of other clients and also themselves)
    • They don't need to maintain a Borg cache
    • They don't have to encrypt (if any of the BLAKE2 key types are used)

You speak in the past: is the project still on? :)

No. I just checked and it tells me I didn't even upload everything I did back then (timetrace, secrets, collective-service, opub, newcli, security, tls, schedxhr, modjob, ...); I don't remember which of these I actually meant to merge going forward or were just probes to test the territory.

Interesting approach, yes; sound idea, also yes; but futile if you want to build it based on the Borg package, because the Python API is incredibly unstable. Instead, the relevant structures would need to be isolated / rewritten and packaged in a stable way. However, there is then the very real risk of breakage whenever Borg adds a new feature and bolts on another workaround/hack relying on minute implementation details of Borg versions past and present (as I did myself many times: while Borg has version fields mostly everywhere, there is usually no real provision for extending without breaking; you'll notice this every- and anywhere you look in Borg).

Borg itself already doesn't manage to take full advantage of a gigabit connection, less so with a proxy (written in Python) in between: the proxy handles effectively the same IO load as the repository itself, but has to replicate approximately the same work as borg check (checking all MACs, possibly re-compressing, checking that the structure is sane etc.) at the same time. It's possible to parallelize some of this, especially the heavy lifting, even in Python — but MP in Python is just one big, unnecessary nuisance.

If I were motivated to start a new revision of this (which I'm not), I would likely start building a better foundation first.

you mean the Borg Python API here? that's unfortunate, to say the least...

what would you suggest as a better foundation for the pull model? socat really seems like an ugly hack...

Tearing the core data structures and their tests out of Borg and stuffing them into a new package, specifically meant and maintained to provide a sane and stable API. But as I mentioned above, this is easy to do from a "take things and put them someplace else" perspective, while due to Borg development it would be difficult / dangerous to stray too far from the actual, literal implementation of Borg.

I see @marcpope bbbs

The link goes to a different project (b b b s) - the one by marcpope was bbs (borg backup server), but as far as I can see no public repository exists any more.

ah yes, that was not @marcpope but @Alex131089.

@marcpope looks promising. Will your software be OpenSource in the end?

commented

@marcpope please create a new issue for your stuff now and move all interesting content there.

hello everybody,

i tried to solve the pull mode like this: https://github.com/m-osmani/borgbackup-pull.

Maybe this could be a solution for somebody.

regards

commented

@m-osmani if you like, you could write some docs for pull-like operations. Then you would have some docs, documenting what you do with these scripts and we (borg) would have some docs (and could close this ticket) for people not wanting to use ansible.

no problem, my idea is to write a script called "borg-backup-ctl.sh" which will produce server side client scripts which will be triggered by cron. And of course the doc. This will be more ansible independent and would work in a standalone manner.

@ThomasWaldmann can you please give me a short answer in #4085

thx

regards

Maybe there's some reason not to do this, but this worked well for me:

ssh <remote-user>@<remote-ip> "\
export BORG_PASSPHRASE=\"$BORG_PASSPHRASE\"; \
export BORG_REPO=\"<local-user>@$(hostname -I | grep -Eo '^[^ ]+'):$BORG_REPO\"; \
screen -S borgbackup -d -m borg create --one-file-system ::{hostname}-{now:%Y-%m-%d} /"

This kicks off a borg instance on the client through ssh in a screen, so it doesn't hold the server up. It uses the borg client's resources directly, but is kicked off from the server, in the "pull-like" manner that I was looking for. Of course, to make it not return until it is finished, just remove the screen bits. You can also add --progress and such to your heart's content; it will either go to the screen session or be sent back to the server depending on whether you use screen.

@ericbf afaics this requires ssh credentials for the access back to the originating machine on the remote-ip. The benefit of a pull-like operation would be that a compromised machine can't erase remote backups.

Yes, I'm using SSH keys, but as @marcpope said, you can disable it before and after it kicks off. There would still be a small window of access, but very small.

@marcpope you can issue a "borg purge" on the compromised host; or is there a way to limit the borg operation via authorized_keys file?

Unless I'm blind, I don't think anyone has mentioned that a complete pull setup is doable with sshfs started before Borg, without a root login (a specific sudo right on the remote target is required).
The trick lies in -o sftp_server and sudo:

sshfs user@host:/  /local/mount/dir  -o ro -o sftp_server="sudo /usr/lib/openssh/sftp-server"

Adjust the sftp_server argument to match the Subsystem entry in sshd_config.

To have this working, you'll need:

  1. a dedicated user on the remote server. It can be a system user without password, but a home and shell are required. No specific group or rights aside from this file in the sudoers (adjust the username):

# sudoers file : /etc/sudoers.d/borg
borg ALL=NOPASSWD:/usr/lib/openssh/sftp-server

  2. this user will also need in their ~/.ssh directory the public key of the user running Borg on the backup server.

Now try to connect to the target server with ssh, then retry with sshfs. You'll see all files can be accessed, because sftp-server runs as root.
Borg can now start to back up the remote server using the mount point.
Borg can now start to backup the remote server using the mount point.

The only limitation for now is that the backup will contain the full mount point path inside. This also needs to be set as a prefix on all paths to back up and exclude.
For example: borg create ... repo::backup-set /mount/point/etc /mount/point/boot /mount/point/home /mount/point/usr --exclude /mount/point/usr/cache/

commented

> Of course, it’s also possible to do the same with UNIX sockets, providing more isolation. […]

Thanks! I used this reverse unix socket forwarding to back up a remote server. I didn't want to use a static reverse port and I could not figure out how to catch a dynamic port; also, using unix sockets offers better isolation, since the default mask is 0177 and thus other users can't try to access it. People who would like to use it should check the AllowStreamLocalForwarding and StreamLocalBindUnlink options in sshd_config(5).

A new round of fun with pull-like operation.

I wrapped the pulling side in systemd units:

borg-remote-repositories.socket

[Unit]
Description=Socket for accessing a specific path as borg repositories

[Socket]
ListenStream=/data/test/borg.sock
Accept=yes

borg-remote-repositories@.service

[Unit]
Description=Borg serve

[Service]
Type=simple
ExecStart=/usr/bin/borg serve --append-only --restrict-to-path /data/test/repos/ --umask 077
StandardInput=socket
StandardOutput=socket
StandardError=journal
User=remote-backups
Group=remote-backups
ProtectSystem=strict
PrivateTmp=yes
PrivateNetwork=yes
PrivateDevices=yes
ProtectKernelTunables=yes
RestrictAddressFamilies=
ReadWritePaths=/data/test/repos/

This makes borg serve:

  • run under its own user (remote-backups -- make sure that user has rwx permissions on /data/test/repos and everything therein)
  • have ~no privileges on the system: no network access, no device access, no access to a shared tmp, no write access to the system etc.
  • be able to run multiple times, once for each client connecting to the socket
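With the two unit files above installed (assuming the standard /etc/systemd/system location), the socket can be activated like this; systemd then spawns one borg-remote-repositories@.service instance per connection:

```shell
# pick up the new unit files, then activate the listening socket
sudo systemctl daemon-reload
sudo systemctl enable --now borg-remote-repositories.socket
```
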

To execute a backup, one can use for example:

ssh -R /root/borg.sock:/data/test/borg.sock root@remote-host \
    BORG_RSH="'bash -c \"exec socat STDIO UNIX-CONNECT:/root/borg.sock\"'" \
    borg create -p ssh://remote/data/test/repos/remotely-created::postgres-$(date --iso-8601=seconds) /var/lib/postgresql-backups/ \
    ';' rm /root/borg.sock

The rm /root/borg.sock helps with cleanup in case the remote server cannot be configured to do StreamLocalBindUnlink.

(Of course, you’d normally not use root but instead a user with sudo privileges for exactly the required borg create commands.)

commented

@fantasya-pbem did you see this ticket / bounty?

Yeah, I find it quite difficult to go through all these comments and extract the essence of what could be called a general recipe to do it. And I don't have experience with pull-like operations. I'll follow this issue and maybe one day find time to write some docs from it.

commented

@fantasya-pbem ok, thanks. guess one needs to actually try it and in parallel write complete / consistent docs.

While having documentation for this workaround is great, wouldn't it be better to add this functionality to borg itself? This kind of syntax would be awesome:

$ borg create /path/to/repo::example.com-now user@example.com:/

Was this ever implemented?

I managed to get a pull setup running; it turns out it looks extremely similar to @horazont's approach ;-) I also use socat to redirect from/to a unix domain socket. I baked it all into two shell scripts (one on the pull side, one on the machine to be backed up) - no systemd, "borg serve" is only spun up during the actual backup process.

A first step to streamline this approach would be a modification of borg to get rid of the socat workaround, i.e. redirecting stdin/stdout to a unix domain socket (sounds similar to #4749). This could look like the following:

  • Add an optional parameter --socket /path/to/socket
  • "borg serve" would use this socket
  • Add an extension for the repo URI: socket://path/to/repo/on/remote - this would use the socket passed via --socket for communication

@ThomasWaldmann would you be interested in code changes implementing this? Or are you aiming for a more comfortable "all-in-one solution" which would seamlessly integrate the pull like in the comment above by @binaryplease?

commented

@Skyr I'd like to see a solution that does not need major modifications or additions to the RPC code (remote.py). That code is fragile, performance-critical and not easy to debug.

Is this resolved? Seems like all @horazont needs to do is open a PR.

commented

BTW latest OpenSSH added support for remote/local unix socket forwarding tokens, see https://bugzilla.mindrot.org/show_bug.cgi?id=3014

I'm creating a daemon to automate backups for me (need to learn go, and this fits it). Eventually, I'd like to be able to run the daemon on a server, that will tell my client to start a backup if one hasn't been done recently.

Does anyone see any issues doing this? Anyone interested in a similar thing?

commented

closing due to #5230.

commented

@BenediktSeidl solved this, but wants to give the bounty to the borg project. Thanks!

#5150 (comment)

So, I will claim it and transfer the funds back to borgbackup org, so they can be used for future bounties.

commented

Now claimed USD 50 bounty and transferred back to borgbackup org, https://www.bountysource.com/orders/119860?receipt=1

This method requires only passwordless ssh access from borg-server to borg-client. No ssh-in-ssh.

Do this once on borg-server:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod go-w ~/.ssh/authorized_keys

Execute pull operation on borg-server:

(
  eval $(ssh-agent) > /dev/null
  ssh-add -q
  ssh -A borg-client "borg init -e none --rsh 'ssh -o StrictHostKeyChecking=no' $(id -un)@borg-server:repo"
  kill "${SSH_AGENT_PID}"
)
commented

@tombyman commenting on a closed issue / merged PR might be not the best way to push this.

So, maybe better open a new issue. Describe what the issue is and if you have a solution, make a PR that fixes the issue?

Created issue #5287 and PR #5288.