libmdbx

Revised and extended descendant of Symas LMDB.

The Future will be positive.

Project Status

  • The stable versions (stable/0.0 and stable/0.1 branches) of MDBX are frozen: they receive only bug fixes, with no new features or API changes.

  • The next version (devel branch) is under active non-public development, i.e. the current API and feature set are extremely volatile.

  • The immediate goal of development is to form a stable API and a stable internal database format, which will allow implementing all of the PLANNED FEATURES:

    1. Integrity checking via a Merkle tree;
    2. Support for raw block devices;
    3. Separate storage (e.g. on HDD) for large data items;
    4. Use of "Roaring bitmaps" inside the garbage collector;
    5. Non-sequential reclaiming, like PostgreSQL's VACUUM;
    6. Asynchronous lazy flushing of data to disk(s);
    7. etc.

Don't miss the libmdbx bindings for other runtimes:

Runtime GitHub Author
JVM mdbxjni Castor Technologies
.NET mdbx.NET Jerry Wang

Nowadays MDBX is intended primarily for Linux, with Windows (Windows Server 2008 and later) supported as a complementary platform. Support for other operating systems could be implemented on a commercial basis. However, such enhancements (i.e. pull requests) can be accepted into the mainline only once a corresponding public and free Continuous Integration service becomes available.


Overview

libmdbx is an embedded, lightweight key-value database engine focused on performance under Linux and Windows.

libmdbx allows multiple processes to read and update several key-value tables concurrently, while remaining ACID-compliant, with minimal overhead and an operation cost of O(log N).

libmdbx provides serializability and data consistency after a crash. Read-write transactions don't block read-only transactions; writers are serialized by a mutex.

libmdbx provides wait-free parallel read transactions, without atomic operations or synchronization primitives on the read path.

libmdbx uses B+ trees and mmap, and does not use a WAL. This has caveats for some workloads.
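
For orientation, below is a minimal sketch of the C API: create and open an environment, write one record in a transaction, then read it back. The call names follow mdbx.h, but the path, flags and the reduced error handling are just one plausible arrangement, not a canonical example.

#include "mdbx.h"
#include <stdio.h>
#include <string.h>

int main(void) {
  MDBX_env *env;
  MDBX_txn *txn;
  MDBX_dbi dbi;
  MDBX_val key, data;
  int rc;

  if ((rc = mdbx_env_create(&env)) != MDBX_SUCCESS)
    return rc;
  /* Single-file layout: "./example.db" plus an "example.db-lck" lock file. */
  if ((rc = mdbx_env_open(env, "./example.db", MDBX_NOSUBDIR, 0664)) != MDBX_SUCCESS)
    goto done;

  /* Read-write transaction: insert one key-value pair into the default table. */
  if ((rc = mdbx_txn_begin(env, NULL, 0, &txn)) != MDBX_SUCCESS)
    goto done;
  if ((rc = mdbx_dbi_open(txn, NULL, 0, &dbi)) != MDBX_SUCCESS) {
    mdbx_txn_abort(txn);
    goto done;
  }
  key.iov_base = "hello";
  key.iov_len = strlen("hello");
  data.iov_base = "world";
  data.iov_len = strlen("world");
  if ((rc = mdbx_put(txn, dbi, &key, &data, 0)) != MDBX_SUCCESS) {
    mdbx_txn_abort(txn);
    goto done;
  }
  if ((rc = mdbx_txn_commit(txn)) != MDBX_SUCCESS)
    goto done;

  /* Read-only transaction: look the value up again (zero-copy). */
  if ((rc = mdbx_txn_begin(env, NULL, MDBX_RDONLY, &txn)) != MDBX_SUCCESS)
    goto done;
  if ((rc = mdbx_get(txn, dbi, &key, &data)) == MDBX_SUCCESS)
    printf("%.*s\n", (int)data.iov_len, (const char *)data.iov_base);
  mdbx_txn_abort(txn); /* for a read-only transaction this simply releases the snapshot */

done:
  mdbx_env_close(env);
  return rc;
}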

Comparison with other DBs

Because libmdbx is currently being overhauled, it's better to simply link to the "Comparison with other databases" chapter here.

History

The libmdbx design is based on Lightning Memory-Mapped Database (LMDB). Initial development took place within the ReOpenLDAP project; about a year later it received a separate development effort, and in autumn 2015 it was spun off into a separate project, which was presented at the Highload++ 2015 conference.

Since early 2017 libmdbx has been used in Fast Positive Tables by Positive Technologies.

Acknowledgments

Howard Chu (Symas Corporation) - the author of LMDB, from which MDBX originated in 2015.

Martin Hedenfalk martin@bzero.se - the author of the btree.c code, which was used to begin the development of LMDB.

Main features

libmdbx inherits all key features and characteristics from LMDB:

  1. Data is stored as an ordered map; keys are always sorted and range lookups are supported (see the cursor sketch after this list).

  2. Data is mmap-ed into the memory of each worker DB process; read transactions are zero-copy.

  3. Transactions are ACID-compliant, thanks to MVCC and CoW. Writes are strongly serialized and aren't blocked by reads; transactions can't conflict with each other. Reads are guaranteed to see only committed data (relaxing serializability).

  4. Reads and queries are non-blocking and don't use atomic operations. Readers don't block each other and aren't blocked by writers. Read performance scales linearly with CPU core count.

Though "connecting to the DB" (starting the first read transaction in a thread) and "disconnecting from the DB" (shutdown or thread termination) require acquiring a lock to register/unregister the current thread in the "readers table".

  5. Keys with multiple values are stored efficiently without key duplication and are sorted by value, including integer values (useful for secondary indexes).

  6. Efficient operation on short fixed-length keys, including integer keys.

  7. WAF (Write Amplification Factor) and RAF (Read Amplification Factor) are O(log N).

  8. No WAL or transaction journal. In case of a crash no recovery is needed. No regular maintenance is required. Backups can be made on the fly from a working DB without freezing writers.

  9. No custom memory management; everything is done with standard OS syscalls.
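
To illustrate points 1-4 above, a range scan in a read-only transaction might look like the sketch below. The call names follow mdbx.h; the table name "index", its MDBX_DUPSORT flag and the start key are hypothetical, and the table is assumed to have been created beforehand.

#include "mdbx.h"
#include <stdio.h>
#include <string.h>

/* Scan all records whose key is >= start_key, in sorted order. */
static int scan_from(MDBX_env *env, const char *start_key) {
  MDBX_txn *txn;
  MDBX_dbi dbi;
  MDBX_cursor *cur;
  int rc = mdbx_txn_begin(env, NULL, MDBX_RDONLY, &txn);
  if (rc != MDBX_SUCCESS)
    return rc;

  rc = mdbx_dbi_open(txn, "index", MDBX_DUPSORT, &dbi);
  if (rc == MDBX_SUCCESS)
    rc = mdbx_cursor_open(txn, dbi, &cur);
  if (rc == MDBX_SUCCESS) {
    MDBX_val key, data;
    key.iov_base = (void *)start_key;
    key.iov_len = strlen(start_key);
    /* Position at the first key >= start_key, then iterate in key order. */
    for (rc = mdbx_cursor_get(cur, &key, &data, MDBX_SET_RANGE);
         rc == MDBX_SUCCESS;
         rc = mdbx_cursor_get(cur, &key, &data, MDBX_NEXT)) {
      /* key.iov_base and data.iov_base point directly into the mmap-ed
       * snapshot: the read path involves no copying. */
      printf("%.*s => %zu bytes\n",
             (int)key.iov_len, (const char *)key.iov_base, data.iov_len);
    }
    if (rc == MDBX_NOTFOUND)
      rc = MDBX_SUCCESS; /* reached the end of the range */
    mdbx_cursor_close(cur);
  }
  mdbx_txn_abort(txn); /* releases the read snapshot */
  return rc;
}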


Improvements over LMDB

  1. The mdbx_chk tool for checking DB integrity.

  2. Automatic dynamic DB size management according to the parameters specified via the mdbx_env_set_geometry() function, including the growth step and truncation threshold, as well as the choice of page size (see the sketch after this list).

  3. Automatic return of freed pages to the unallocated space at the end of the database file, with optional automatic shrinking of the file. This reduces the number of pages resident in RAM and circulating in disk I/O. In effect, libmdbx constantly performs DB compactification without spending additional resources on it.

  4. Support for keys and values of zero length, including sorted duplicates.

  5. Ability to attach up to 3 markers to a committing transaction with mdbx_canary_put() and then read them in a read transaction with mdbx_canary_get().

  6. Ability to update or delete a record and get its previous value via mdbx_replace(); a specific multi-value (duplicate) can also be updated.

  7. LIFO RECLAIM mode:

    The newest pages are picked for reuse instead of the oldest. This minimizes the reclaim loop and makes its execution time independent of the total page count.

    As a result, the OS kernel's cache mechanisms work with maximum efficiency. With disk controllers or storage equipped with a BBWC, this may greatly improve write performance.

  8. Sequence generation via mdbx_dbi_sequence() (see the sketch after this list).

  9. OOM-KICK callback.

    mdbx_env_set_oomfunc() allows setting a callback which will be invoked when DB space is exhausted during a long-lived read transaction running in parallel with extensive updating. The callback is invoked with the PID and pthread_id of the offending thread as parameters, and it can do any of the following to remedy the problem:

    • wait for the read transaction to finish normally;

    • kill the offending process (signal 9), if the long-lived read is being done by a separate process;

    • abort or restart the offending read transaction if it's running in a sibling thread;

    • abort the current write transaction, returning an error code.

  10. Ability to open the DB in exclusive mode with the MDBX_EXCLUSIVE flag.

  11. Ability to find out how far the current read-only snapshot lags behind the latest version of the DB via mdbx_txn_straggler().

  12. Ability to explicitly request an update of an existing record without creating a new one, implemented as the MDBX_CURRENT flag for mdbx_put().

  13. Fixed mdbx_cursor_count(), which now returns the correct number of duplicates for all table types and any cursor position.

  14. mdbx_env_info() for getting additional information, including the number of the oldest DB snapshot that is still in use by one of the readers.

  15. mdbx_del() doesn't ignore the additional data argument (specifier) for tables without duplicates (without the MDBX_DUPSORT flag); if data is not NULL, it is always used to verify the record being deleted.

  16. Ability to open a dbi-table with simultaneous setup of comparators for keys and values, via mdbx_dbi_open_ex().

  17. mdbx_is_dirty() to find out whether a key or value is located on a dirty page; this is useful to avoid a copy-out before updates.

  18. Correct update of the current record in MDBX_CURRENT mode of mdbx_cursor_put(), including sorted duplicates.

  19. Checking whether there is a row with data after the current cursor position via mdbx_cursor_eof().

  20. An additional error code, MDBX_EMULTIVAL, which is returned by mdbx_put() and mdbx_replace() in case of an ambiguous update or delete.

  21. Ability to get a value by key together with its duplicates count via mdbx_get_ex().

  22. The functions mdbx_cursor_on_first() and mdbx_cursor_on_last(), which report whether the cursor is currently at the first or last position, respectively.

  23. Automatic creation of synchronization points (flushing changes to persistent storage) when the accumulated changes reach a threshold, which can be set via mdbx_env_set_syncbytes().

  24. Control over debugging and the delivery of debug messages via mdbx_setup_debug().

  25. The mdbx_env_pgwalk() function for walking all pages in the DB.

  26. Three meta-pages instead of two, which makes it possible to consistently update weak sync points without the risk of corrupting the last steady sync point.

  27. Guarantee of DB integrity in WRITEMAP+MAPASYNC mode:

Current libmdbx gives a choice between the safe async-write mode (the default) and the UTTERLY_NOSYNC mode, which may result in full DB corruption during a system crash, as with LMDB. For details see Data safety in async-write mode.

  28. Ability to close the DB in a "dirty" state (without flushing data and creating a steady synchronization point) via mdbx_env_close_ex().

  29. If a read transaction is aborted via mdbx_txn_abort() or mdbx_txn_reset(), the DBI handles opened within it aren't closed or deleted. This avoids several kinds of hard-to-debug errors.

  30. All cursors in all read and write transactions can be reused via mdbx_cursor_renew() and MUST be freed explicitly.

Caution, please pay attention!

This is the only API change that alters the semantics of cursor management, and it can lead to memory leaks on misuse. It is a necessary change, as it eliminates ambiguity and thereby helps avoid errors such as:

  • use-after-free;
  • double-free;
  • memory corruption and segfaults.
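
Below is a rough sketch of how a few of the extensions above might be used together: the dynamic geometry from item 2, the sync-bytes threshold from item 23 and the sequence generator from item 8. All sizes and paths are made-up examples, and the exact argument lists (in particular for mdbx_env_set_geometry(), mdbx_env_set_syncbytes() and mdbx_dbi_sequence()) are assumptions to be verified against mdbx.h.

#include "mdbx.h"
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int sketch_extensions(void) {
  MDBX_env *env;
  MDBX_txn *txn;
  MDBX_dbi dbi;
  int rc = mdbx_env_create(&env);
  if (rc != MDBX_SUCCESS)
    return rc;

  /* Item 2: dynamic geometry - lower/current/upper size limits, growth
   * step, shrink threshold and page size (-1 is assumed to mean "default"). */
  rc = mdbx_env_set_geometry(env,
                             1ul << 20,  /* size_lower: 1 MiB       */
                             -1,         /* size_now: default       */
                             1ul << 30,  /* size_upper: 1 GiB       */
                             1ul << 22,  /* growth step: 4 MiB      */
                             1ul << 23,  /* shrink threshold: 8 MiB */
                             -1);        /* page size: default      */
  if (rc == MDBX_SUCCESS)
    rc = mdbx_env_open(env, "./example.db", MDBX_NOSUBDIR, 0664);

  /* Item 23: request an automatic steady sync point once roughly 16 MiB
   * of changes have accumulated (assumed signature). */
  if (rc == MDBX_SUCCESS)
    rc = mdbx_env_set_syncbytes(env, 16ul << 20);

  if (rc == MDBX_SUCCESS)
    rc = mdbx_txn_begin(env, NULL, 0, &txn);
  if (rc == MDBX_SUCCESS) {
    rc = mdbx_dbi_open(txn, NULL, 0, &dbi); /* default (unnamed) table */

    /* Item 8: per-table persistent sequence; bump it by 1 and receive
     * the previous value (assumed argument order). */
    uint64_t prev = 0;
    if (rc == MDBX_SUCCESS)
      rc = mdbx_dbi_sequence(txn, dbi, &prev, 1);

    if (rc == MDBX_SUCCESS) {
      printf("sequence was %" PRIu64 "\n", prev);
      rc = mdbx_txn_commit(txn);
    } else {
      mdbx_txn_abort(txn);
    }
  }
  mdbx_env_close(env);
  return rc;
}

mdbx_replace(), mdbx_canary_put()/mdbx_canary_get() and the other calls listed above follow the same transactional pattern; their exact contracts are defined in mdbx.h.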

Gotchas

  1. There can be only one writer at a time. On the other hand, this serializes writes and eliminates any possibility of conflicts or logical errors during transaction rollback.

  2. The absence of a WAL means a relatively large WAF (Write Amplification Factor). Because of this, syncing data to disk can be quite resource-intensive and can become the main performance bottleneck under an intensive write workload.

As a compromise, libmdbx offers several modes of lazy and/or periodic syncing, including the MAPASYNC mode, which modifies data in memory and syncs it to disk asynchronously, with the moment of syncing chosen by the OS.

Although this should be used with care: synchronous transactions in a DB with a transaction journal require at least 2 IOPS (probably 3-4 in practice) because of filesystem overhead; that overhead depends on the filesystem, not on the record count or record size. In libmdbx the IOPS count grows logarithmically with the number of records in the DB (the height of the B+ tree) and likewise requires at least 2 IOPS per transaction.

  3. CoW for MVCC is done at the memory-page level within the B+ tree. Therefore altering data requires copying about O(log N) memory pages, which consumes memory bandwidth and is the main performance bottleneck in MAPASYNC mode.

This is unavoidable, but it isn't that bad: syncing data to disk requires many more comparable operations, which are performed by the OS, so the copying is noticeable only when syncing to persistent storage is fully disabled. libmdbx allows data to be saved to persistent storage safely with minimal performance overhead. If there is no need to persist data at all, then std::map is a much better choice.

  4. LMDB has a problem with long-lived readers, which degrades performance and bloats the DB.

libmdbx addresses that; details below.

  5. LMDB is susceptible to DB corruption in WRITEMAP+MAPASYNC mode, whereas libmdbx in WRITEMAP+MAPASYNC guarantees DB integrity and data consistency.

Additionally there is an alternative: the UTTERLY_NOSYNC mode. Details below.

Long-time read transactions problem

A garbage-collection problem exists in one form or another in all databases (e.g. VACUUM in PostgreSQL). In libmdbx and LMDB it is even more prominent because of the high performance and the deliberate simplification of internals with an emphasis on performance.

  • Altering data during a long read operation may exhaust the available space on persistent storage.

  • If the available space is exhausted, any attempt to update data results in a MAP_FULL error until the long read operation ends.

  • Typical examples of long readers are a hot backup and debugging a client application that actively uses read transactions.

  • In LMDB this results in degraded performance of all operations that sync data to persistent storage.

  • libmdbx has a mechanism which aborts such operations, as well as the LIFO RECLAIM mode, which addresses the performance degradation.

Read operations work only on a snapshot of the DB that is consistent as of the moment the read transaction started. This snapshot doesn't change throughout the transaction, but as a consequence its pages cannot be reclaimed until the read transaction ends.

In LMDB this leads to the problem that pages allocated during a long read won't be reclaimed until the DB process terminates. Moreover, LMDB reuses pages in a FIFO manner, which increases the page count and lowers the chance of a cache hit during I/O. In other words, a single long-lived reader can impact the performance of the whole database until it is reopened.

libmdbx addresses this problem; details below. Illustrations of the problem can be found in the presentation, along with an example of the performance gain due to BBWC when LIFO RECLAIM is enabled in libmdbx.
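
From the application side, a read-only transaction's lag can be monitored with mdbx_txn_straggler(), mentioned in the improvements list above. The sketch below assumes it returns the lag measured in transactions and optionally reports a space-usage percentage through its second argument; both are assumptions to verify against mdbx.h, and the thresholds are arbitrary examples.

#include "mdbx.h"
#include <stdio.h>

/* If a long-lived read-only transaction has fallen too far behind the
 * head of the DB, restart it so its pages can be reclaimed. */
static int renew_if_stale(MDBX_txn *rtxn) {
  int percent = 0;
  int lag = mdbx_txn_straggler(rtxn, &percent);
  if (lag > 1000 || percent > 90) {
    fprintf(stderr, "reader lags by %d transactions (%d%%), restarting\n",
            lag, percent);
    mdbx_txn_reset(rtxn);        /* release the old snapshot ...       */
    return mdbx_txn_renew(rtxn); /* ... and rebind to the current one. */
  }
  return MDBX_SUCCESS;
}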

Data safety in async-write mode

In WRITEMAP+MAPASYNC mode dirty pages are written to persistent storage by the kernel. This means that if the application crashes, the OS kernel will write all dirty data to disk and nothing will be lost. But in case of a hardware malfunction or a fatal OS kernel error, only part of the dirty data might be synced to disk, and there is a high probability that the pages holding metadata get saved while pointing to data pages that were not saved and hence do not exist. In such a situation the DB is completely corrupted and cannot be repaired even if a full sync was performed just before the crash via mdbx_env_sync().

libmdbx addresses this by fully reimplementing the data write path:

  • In WRITEMAP+MAPASYNC mode metadata pages aren't updated in place; instead, shadow copies are used, and their updates are synced after the data has been flushed to disk.

  • During transaction commit libmdbx marks synchronization points as steady or weak depending on how synchronized RAM and persistent storage are; e.g. in WRITEMAP+MAPASYNC mode committed transactions are marked as weak, while transactions committed during explicit data synchronization are marked as steady.

  • libmdbx maintains three separate meta-pages instead of two. This allows committing a transaction with a steady or weak synchronization point without losing the two previous synchronization points (one of which may be steady and the other weak). Thus weak and steady synchronization points can alternate in any order without losing consistency in case of a system crash.

  • When the DB is opened, libmdbx rolls back to the last steady synchronization point, which guarantees database integrity.

For data safety, the pages which form a database snapshot with a steady synchronization point must not be updated until the next steady synchronization point. So the last steady synchronization point produces a "long-lived read" effect. The only difference is that, in case of memory exhaustion, the problem is resolved immediately by flushing changes to persistent storage and forming a new steady synchronization point.

So in async-write mode libmdbx keeps using new pages until memory is exhausted or mdbx_env_sync() is invoked. Total disk usage will be almost the same as in sync-write mode.

The current libmdbx gives a choice between the safe async-write mode (the default) and the UTTERLY_NOSYNC mode, which may result in full DB corruption during a system crash, as with LMDB.

The next version of libmdbx will create steady synchronization points automatically in async-write mode.
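
To make the trade-off concrete, here is a sketch of opening the same environment under the different durability modes discussed above. The flag spellings follow the text (MDBX_WRITEMAP, MDBX_MAPASYNC, MDBX_UTTERLY_NOSYNC); treat the exact combinations as an assumption of this example rather than a recommendation.

#include "mdbx.h"

/* mode 0: sync-write (default, fully durable)
 * mode 1: async-write (WRITEMAP+MAPASYNC, integrity kept, recent txns may be lost)
 * mode 2: utterly no-sync (fastest, DB may be corrupted by a system crash) */
static int open_with_mode(MDBX_env **env, const char *path, int mode) {
  int rc = mdbx_env_create(env);
  if (rc != MDBX_SUCCESS)
    return rc;

  unsigned flags = MDBX_NOSUBDIR;
  if (mode == 1)
    flags |= MDBX_WRITEMAP | MDBX_MAPASYNC;
  else if (mode == 2)
    flags |= MDBX_UTTERLY_NOSYNC;

  rc = mdbx_env_open(*env, path, flags, 0664);
  if (rc != MDBX_SUCCESS) {
    mdbx_env_close(*env);
    *env = NULL;
  }
  return rc;
}

In async-write mode a steady synchronization point can still be forced explicitly with mdbx_env_sync(), as described above, making everything written so far crash-proof.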


Performance comparison

All benchmarks were performed with IOArena and multiple scripts, run on a Lenovo Carbon-2 laptop: i7-4600U 2.1 GHz, 8 GB RAM, SSD SAMSUNG MZNTD512HAGL-000L1 (DXT23L0Q) 512 GB.


Integral performance

Shown here is the sum of the performance metrics from 3 benchmarks:

  • Read/search on a machine with 4 CPU cores;

  • Transactions with CRUD operations in sync-write mode (fdatasync is called after each transaction);

  • Transactions with CRUD operations in lazy-write mode (the moment of syncing data to persistent storage is decided by the OS).

Reasons why asynchronous mode isn't benchmarked here:

  1. It would only make sense in comparison with DB engines oriented toward keeping data in memory (e.g. Tarantool, Redis), etc.

  2. The performance gap is too large to compare in any meaningful way.

Comparison #1: Integral Performance


Read Scalability

Aggregate performance with concurrent read/search queries in 1-2-4-8 threads on a machine with 4 CPU cores.

Comparison #2: Read Scalability


Sync-write mode

  • The linear scale on the left and the dark rectangles show the arithmetic mean of transactions per second;

  • The logarithmic scale on the right is in seconds; the yellow intervals show transaction execution times. Each interval spans the minimum and maximum execution time, and the cross marks the standard deviation.

10,000 transactions in sync-write mode. In case of a crash all data remains consistent and the state corresponds to the last successful transaction. The fdatasync syscall is used after each write transaction in this mode.

In the benchmark each transaction contains combined CRUD operations (2 inserts, 1 read, 1 update, 1 delete). The benchmark starts on an empty database, and after a full run the database contains 10,000 small key-value records.

Comparison #3: Sync-write mode


Lazy-write mode

  • The linear scale on the left and the dark rectangles show the arithmetic mean in thousands of transactions per second;

  • The logarithmic scale on the right is in seconds; the yellow intervals show transaction execution times. Each interval spans the minimum and maximum execution time, and the cross marks the standard deviation.

100,000 transactions in lazy-write mode. In case of a crash all data remains consistent and the state corresponds to one of the last transactions, but the transactions after it are lost. Other DB engines use a WAL or transaction journal for this, which in turn depends on the ordering of operations in the journaled filesystem. libmdbx doesn't use a WAL and hands I/O operations over to the filesystem and OS kernel (mmap).

In the benchmark each transaction contains combined CRUD operations (2 inserts, 1 read, 1 update, 1 delete). The benchmark starts on an empty database, and after a full run the database contains 100,000 small key-value records.

Comparison #4: Lazy-write mode


Async-write mode

  • The linear scale on the left and the dark rectangles show the arithmetic mean in thousands of transactions per second;

  • The logarithmic scale on the right is in seconds; the yellow intervals show transaction execution times. Each interval spans the minimum and maximum execution time, and the cross marks the standard deviation.

1,000,000 transactions in async-write mode. In case of a crash all data remains consistent and the state corresponds to one of the last transactions, but the number of lost transactions is much higher than in lazy-write mode. All DB engines in this mode perform as few writes to persistent storage as possible. libmdbx uses msync(MS_ASYNC) in this mode.

In the benchmark each transaction contains combined CRUD operations (2 inserts, 1 read, 1 update, 1 delete). The benchmark starts on an empty database, and after a full run the database contains 10,000 small key-value records.

Comparison #5: Async-write mode


Cost comparison

Summary of used resources during lazy-write mode benchmarks:

  • Read and write IOPS;

  • Sum of user CPU time and sys CPU time;

  • Space used on persistent storage after the test, measured with the DB closed but without waiting for all internal housekeeping operations to finish (LSM compactification, etc).

ForestDB is excluded because the benchmark showed that its consumption of every resource (CPU, IOPS) was much higher than that of the other engines, which prevents a meaningful comparison.

All benchmark data was gathered via the getrusage() syscall and by scanning the data directory.

Comparison #6: Cost comparison


For reference, the entire .text (code) section of libmdbx.so is about 83 KiB (0x14d84 bytes), as shown by objdump:

$ objdump -f -h -j .text libmdbx.so

libmdbx.so:     file format elf64-x86-64
architecture: i386:x86-64, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x000030e0

Sections:
Idx Name          Size      VMA               LMA               File off  Algn
 11 .text         00014d84  00000000000030e0  00000000000030e0  000030e0  2**4
                  CONTENTS, ALLOC, LOAD, READONLY, CODE
