lucid-kv / lucid

High performance and distributed KV store w/ REST API. 🦀

Home Page: https://clintnetwork.gitbook.io/lucid/


[Manifest] Persistence

imclint21 opened this issue

Hey,

I think that we can do two kinds of persistence:

  1. An iteration system: each time a key is updated, the key and its value are persisted.
  2. A snapshot system: at a fixed interval (defined in the configuration file), all keys are persisted (see the sketch below).
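
A rough sketch of what those two modes could look like; names and fields are illustrative, not lucid's actual configuration:

```rust
use std::time::Duration;

// Illustrative sketch of the two persistence modes described above.
enum PersistenceMode {
    // 1. persist the key and its value on every update
    PerWrite,
    // 2. dump all keys every `interval`, read from the configuration file
    Snapshot { interval: Duration },
}
```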

Comment if you have any ideas.

A common and high-performance way to do this is to save an entry to disk when it expires. Then, on a GET, if the key isn't in the in-memory map, you check the disk storage and load it from there.

Otherwise you just return None.
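
A minimal sketch of that read path, assuming a plain HashMap in memory and one file per expired key on disk (both of these are illustrative assumptions, not lucid's actual layout):

```rust
use std::collections::HashMap;
use std::path::PathBuf;

// Read-through lookup: the in-memory map is checked first, and only on a
// miss do we fall back to the on-disk copy written when the entry expired.
struct Store {
    map: HashMap<String, Vec<u8>>,
    disk_dir: PathBuf, // one file per key, purely illustrative
}

impl Store {
    fn get(&mut self, key: &str) -> Option<Vec<u8>> {
        if let Some(value) = self.map.get(key) {
            return Some(value.clone());
        }
        // Not in memory: try the disk copy.
        let path = self.disk_dir.join(key);
        match std::fs::read(&path) {
            Ok(value) => {
                // Reload it into the map so later GETs hit memory again.
                self.map.insert(key.to_string(), value.clone());
                Some(value)
            }
            Err(_) => None, // neither in memory nor on disk
        }
    }
}
```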

@CephalonRho, maybe we can move the KvStore instantiation from the Server to the Lucid impl; that way it would be easier to run operations on the KvStore from subcommands and it would also facilitate persistence actions. What do you think?
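
Purely as an illustration of that refactor (the real lucid types and fields will differ), it could look something like this:

```rust
use std::sync::Arc;

struct KvStore;                      // stand-in for the real store type

struct Server { store: Arc<KvStore> }

struct Lucid {
    store: Arc<KvStore>,             // instantiated here instead of inside Server
}

impl Lucid {
    fn new() -> Self {
        Lucid { store: Arc::new(KvStore) }
    }

    fn serve(&self) -> Server {
        // The server only borrows a handle, so subcommands and persistence
        // tasks can reach the same KvStore through `Lucid`.
        Server { store: Arc::clone(&self.store) }
    }
}
```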

And for persistence, do you have an idea of how we can achieve this?

Hi,

Two interesting libs for persistence:

And a cool snippet for writing persistence to disk, with binary file writing and also compression!
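
For instance, a minimal snapshot writer along those lines, assuming bincode for the binary encoding and flate2 for the compression (the crate choice is my assumption, since the libs aren't named above):

```rust
use std::fs::File;
use std::io::Write;

use flate2::write::GzEncoder;
use flate2::Compression;
use serde::Serialize;

// Serialize the data to a binary blob and gzip-compress it on the way to disk.
fn write_snapshot<T: Serialize>(data: &T, path: &str) -> std::io::Result<()> {
    let bytes = bincode::serialize(data)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
    let file = File::create(path)?;
    let mut encoder = GzEncoder::new(file, Compression::default());
    encoder.write_all(&bytes)?;
    encoder.finish()?; // flush and close the gzip stream
    Ok(())
}
```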

Best regards

CC @CephalonRho

Hi,

Found this project randomly. Seems like a cool project to contribute to. I have researched this exact topic before.

Some links:
https://redis.io/topics/persistence
https://redislabs.com/blog/hood-redis-enterprise-flash-database-architecture

The main approach from my understanding:

  • All data is kept in memory.
  • Data is stored to disk so that it can be restored in the case of failure.
  • The most favorable implementation of the persistence depends heavily on the load.

Hey @halvorboe,

I really appreciate your motivation. As for Redis, I already had a quick look; I will read it in depth tonight.

For contributing no problem, you are welcome!

Hi @halvorboe

@Slals is working on persistence; if you want to join us, you are welcome!

Hey guys,

Thanks for your links.

Typically Redis uses two ways to persist the data:

  • RDB files (.rdb) for snapshots: Redis dumps a snapshot of the data to an RDB file every x minutes. Its purpose is to back up the data and even push it to a remote server;
  • AOF (append-only file): for actual persistence in case of a system failure / reboot. Since it only appends to a file, writing to it is fast and is done for every new state in memory.

RDB is slow to write but faster to load on startup, which is why it is only used for snapshots, taken every x writes or every x minutes.
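
A rough sketch of such an interval-based snapshot, assuming a shared HashMap behind a Mutex and bincode for the dump (all of these are illustrative choices, not lucid's actual types):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::Duration;

// Background loop in the spirit of Redis RDB dumps: every `interval`,
// the whole map is cloned under the lock and written out as one file.
fn spawn_snapshot_thread(
    store: Arc<Mutex<HashMap<String, Vec<u8>>>>,
    path: &'static str,
    interval: Duration,
) {
    std::thread::spawn(move || loop {
        std::thread::sleep(interval);
        // Hold the lock only long enough to clone the current state.
        let copy = store.lock().unwrap().clone();
        if let Ok(bytes) = bincode::serialize(&copy) {
            let _ = std::fs::write(path, bytes); // best-effort RDB-like dump
        }
    });
}
```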

AOF is fast to write and not prone to corruption because of the append-only design. There are some cases where the append is not fully completed when the disk is full; this issue has been addressed by Redis, so we should handle it too.

Typically, all operations are written to the AOF so that they can be replayed in case of a server failure (the operations are re-executed and the data is loaded back into memory). The commands look like this: https://redis.io/commands/brpoplpush

I suggest a first implementation that builds an AOF system, and we can think about a long-term snapshot later on. The real disadvantage of AOF for snapshotting is that to load it, the server has to execute every operation, which can take some time for long AOF files.
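
A minimal sketch of such an AOF, using an invented tab-separated line format (SET/DEL) purely for illustration:

```rust
use std::collections::HashMap;
use std::fs::OpenOptions;
use std::io::{BufRead, BufReader, Write};

// Append one operation per line to the AOF.
fn append_op(path: &str, op: &str, key: &str, value: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(file, "{}\t{}\t{}", op, key, value)
}

// On startup, replay the file in order to rebuild the in-memory state.
fn replay(path: &str, map: &mut HashMap<String, String>) -> std::io::Result<()> {
    let file = std::fs::File::open(path)?;
    for line in BufReader::new(file).lines() {
        let line = line?;
        let mut parts = line.splitn(3, '\t');
        match (parts.next(), parts.next(), parts.next()) {
            (Some("SET"), Some(key), Some(value)) => {
                map.insert(key.to_string(), value.to_string());
            }
            (Some("DEL"), Some(key), _) => {
                map.remove(key);
            }
            _ => {} // ignore malformed lines, e.g. a truncated final append
        }
    }
    Ok(())
}
```

The catch-all arm is what lets a truncated last append (the full-disk case mentioned above) be skipped on replay instead of corrupting the whole load.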

If you have any thoughts or opinions about that, go ahead, guys! @CephalonRho @clintnetwork @halvorboe

@CephalonRho commented

AOF would probably be the easiest to implement well, but in my opinion the biggest problem is its unlimited growth. A key being created and deleted very often would mean lots of wasted space. Redis seems to handle this by providing the option to rewrite the file when required, but that can cause sudden hangs and compromises the reliability of the format.

An RDB-like file format doesn't necessarily have to be slow to write; I guess it's just that it's always supposed to be as compact as possible, since it's a snapshot and doesn't represent the current state of the database.

A mixed approach is also possible and might even be necessary if we want to have replication sometime in the future, since recent transactions have to be synchronized across servers, which would be impossible if a server forgets about them during a restart.

Thanks @CephalonRho

Could you elaborate more on what you envision by "a mixed approach"? Are you talking about a compact format like RDB but written as an AOF? I guess we would still encounter the unlimited-growth issue, even if it grows more slowly than a command-based AOF.

Do you guys know for sure why writing is slower with RDB? I think it has to do some computation to build the structure before writing it, but it's just a guess.

I totally agree about the replication; we have to design something that will work for it.

@CephalonRho could you explain what you mean by a mixed approach?

We need to provide an easy way of doing persistence rather than many modes; it needs to be simple in my opinion.

@CephalonRho commented

By "a mixed approach" I was referring to simply putting an append-only format in front of an RDB-like format. Transactions could be written to both the append-only file and the RDB-like file.
The transactions in the append-only file can be deleted when they're no longer required (replicated to another instance and written to the other file), which wouldn't actually make it append-only anymore, but instead lets it work more like a growable ring buffer. That should keep its size small while still ensuring that no data gets lost.
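
A sketch of that checkpoint step, assuming bincode for the RDB-like snapshot (purely illustrative): once the snapshot covers the recorded operations, the append-only file can be truncated.

```rust
use std::collections::HashMap;

// Fold the current state into the compact snapshot, then empty the AOF,
// since everything it recorded is now covered by the snapshot.
fn checkpoint_and_truncate(
    aof_path: &str,
    snapshot_path: &str,
    map: &HashMap<String, String>,
) -> std::io::Result<()> {
    // 1. Write the full current state as the compact snapshot.
    let bytes = bincode::serialize(map)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
    std::fs::write(snapshot_path, bytes)?;
    // 2. Truncate the AOF; File::create() truncates an existing file.
    std::fs::File::create(aof_path).map(|_| ())
}
```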

Got you.

The way I understand it, this does not really differ from the Redis approach. They use RDB for snapshots executed at specified intervals. They don't clear the AOF though. https://redis.io/topics/persistence

Would we really gain value from mixing both, that is, writing transactions to the RDB every time the AOF is considered too big, then flushing it / starting a new AOF?

I see one good benefit of this approach: we stick with a single persistence method, which is saving transactions, and that makes things simpler for end-users.
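
For completeness, a self-contained sketch of that size-based trigger; the threshold value and names are just assumptions:

```rust
use std::collections::HashMap;

const MAX_AOF_BYTES: u64 = 64 * 1024 * 1024; // arbitrary illustrative threshold

// When the AOF grows past the threshold, dump a snapshot of the whole map
// and start a fresh, empty AOF.
fn maybe_rotate(
    aof_path: &str,
    snapshot_path: &str,
    map: &HashMap<String, String>,
) -> std::io::Result<()> {
    if std::fs::metadata(aof_path)?.len() > MAX_AOF_BYTES {
        // The snapshot covers everything recorded so far...
        let bytes = bincode::serialize(map)
            .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
        std::fs::write(snapshot_path, bytes)?;
        // ...so the old AOF can be replaced by an empty one (create() truncates).
        std::fs::File::create(aof_path)?;
    }
    Ok(())
}
```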