RangerMauve / hyperdrive-publisher

CLI for publishing a new change to your hyperdrive and syncing it with remote peers

Single process locking

martinheidegger opened this issue · comments

When hyperdrive-publisher gets started as a GitHub action, it is quite possible that (by accident, misconfiguration, or parallelization) the script gets started twice at the same time. As far as I can see this may cause forks in the hypercore (corrupting the data), and that is not such a good thing.

I can see a few ways to mitigate this issue, and I am wondering how relevant each would be.

  • Add a lock file to the file system that prevents parallel starts at an OS/process level (a minimal sketch follows this list).
  • Add a feature to the pinning host/server (in conjunction with #11) that prevents it from connecting to two seeding peers at the same time.
  • Before syncing the file system: connect to the DHT to see if there is another peer that also possesses the secret key.
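For reference, a rough sketch of what the first approach could look like with plain `fs` and an exclusive `'wx'` open. The lock path and PID-in-file convention are just placeholders, not anything the publisher does today:

```js
// Sketch of option 1: an exclusive lock file (path and contents are hypothetical).
// fs.openSync with the 'wx' flag fails with EEXIST if the file already exists,
// so only one process on this machine can hold the lock at a time.
const fs = require('fs')
const path = require('path')
const os = require('os')

const LOCK_PATH = path.join(os.tmpdir(), 'hyperdrive-publisher.lock') // placeholder location

function acquireLock () {
  try {
    const fd = fs.openSync(LOCK_PATH, 'wx') // create, fail if it already exists
    fs.writeSync(fd, String(process.pid))   // record who holds the lock
    fs.closeSync(fd)
    return true
  } catch (err) {
    if (err.code === 'EEXIST') return false // another publisher run holds the lock
    throw err
  }
}

function releaseLock () {
  try {
    fs.unlinkSync(LOCK_PATH)
  } catch (_) {
    // lock already gone; nothing to do
  }
}

if (!acquireLock()) {
  console.error('Another hyperdrive-publisher run appears to be in progress, exiting.')
  process.exit(1)
}
process.on('exit', releaseLock)
```

The obvious limitation (as noted below) is that this only protects a single machine, and a crashed run can leave a stale lock file behind.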

It feels like the third one is the most viable.

The first one could be an issue because the publisher can be running on different devices.

The second one might be hard because we'd need pinning servers to opt in, which would limit which seeding services could be used.

The third option could be done via the mutable DHT methods and a generated key. :o

One could PUT when starting, and somehow undo that PUT once finished. Before you start, check if something is in progress, and poll until it gets unlocked? 🤷
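Something along these lines, as a very rough sketch. It assumes the hyperdht module's `mutablePut` / `mutableGet` API; the `seq` option, the key-pair derivation from the publisher's seed, and the `'locked'` / `'unlocked'` convention are all just assumptions for illustration:

```js
// Sketch of option 3: using mutable DHT records as an advisory lock.
// Assumes hyperdht's mutablePut/mutableGet; derivation and value format are hypothetical.
const DHT = require('hyperdht')
const crypto = require('crypto')

async function withDhtLock (seed, run) {
  const node = new DHT()
  // Derive a dedicated key pair for the lock record (hypothetical derivation).
  const lockSeed = crypto.createHash('sha256').update(seed).update('publisher-lock').digest()
  const keyPair = DHT.keyPair(lockSeed)

  // Check for an existing "locked" record before starting.
  const existing = await node.mutableGet(keyPair.publicKey)
  if (existing && existing.value.toString() === 'locked') {
    await node.destroy()
    throw new Error('Another publisher run appears to hold the DHT lock')
  }

  // Publish a "locked" record (assumes a seq option for bumping the record version).
  const seq = existing ? existing.seq + 1 : 0
  await node.mutablePut(keyPair, Buffer.from('locked'), { seq })

  try {
    return await run()
  } finally {
    // Mutable records can't be deleted, only superseded, so "undoing" the PUT
    // really means publishing an "unlocked" value with a newer sequence number.
    await node.mutablePut(keyPair, Buffer.from('unlocked'), { seq: seq + 1 })
    await node.destroy()
  }
}
```

The catch is that DHT records can only be overwritten, not removed, and they eventually expire anyway, so a crashed run would leave a stale "locked" record until a timeout or manual override steps past it.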