LNP-BP / client_side_validation

Standard implementation of client-side-validation APIs

Home Page: https://docs.rs/client_side_validation

Increase multi-protocol commitment tree size limit

dr-orlovsky opened this issue

Currently, MPC trees (LNPBP-4) are limited to a maximum depth of 2^4 = 16, i.e. they may contain up to 2^16 = 65536 elements. The original design assumed that this is enough to host all assets which may be allocated to a single UTXO.

However, tests show that even 127 different assets may not fit UNIQUELY into a tree of width 65536 (the position of an asset is its 256-bit asset id modulo the size of the tree), so in practice we can assume only around 64 assets to be assignable to the same UTXO.
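
For illustration, the placement rule can be sketched as follows (a minimal sketch of my reading of the description above; the function names are made up and this is not the actual commit_verify API):

```rust
use std::collections::HashSet;

// Minimal sketch of the LNPBP-4 placement rule described above: the
// slot of an asset is its 256-bit id taken modulo the tree width.
// Names are illustrative, not the actual commit_verify API.

/// Width of a tree of the given depth: 2^depth leaf slots.
fn tree_width(depth: u8) -> u64 {
    1u64 << depth
}

/// Slot of a 256-bit id. For a power-of-two width not exceeding 2^64,
/// `id mod width` depends only on the lowest 64 bits of the id
/// (interpreted here as big-endian, which is an assumption).
fn protocol_id_pos(id: &[u8; 32], depth: u8) -> u64 {
    let low = u64::from_be_bytes(id[24..].try_into().unwrap());
    low % tree_width(depth)
}

/// A set of assets fits the tree only if all slots are pairwise distinct.
fn fits(ids: &[[u8; 32]], depth: u8) -> bool {
    let mut seen = HashSet::new();
    ids.iter().all(|id| seen.insert(protocol_id_pos(id, depth)))
}
```

Since the width is a power of two, the position depends only on the lowest `depth` bits of the id, so collisions follow plain birthday statistics.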

Here I propose extending the MPC tree depth to 32, and thus the tree width to 2^32, such that the tree may host up to 16000 separate assets (number of assets < 2^(depth / 2 - 2)).
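
This figure is consistent with the standard birthday bound (a derivation I am adding for context; it is not spelled out above): for $n$ ids placed uniformly into $w = 2^d$ slots,

$$P_\text{collision} \approx 1 - \exp\!\left(-\frac{n(n-1)}{2 \cdot 2^d}\right) \approx \frac{n^2}{2^{d+1}},$$

so keeping $n < 2^{d/2 - 2}$ bounds the collision probability by $2^{d-4} / 2^{d+1} = 2^{-5} \approx 3\%$. For $d = 32$ that gives $n < 2^{14} \approx 16000$; for $d = 16$ it gives the ~64 assets mentioned above.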

With a depth of 64 we will have a 2^64 tree width and can safely have up to 1 billion assets on the same UTXO (however, the inclusion proof for an asset may grow up to 64 * 256 = 16384 bits, i.e. ~2 kB).

An alternative approach (or one which may be combined with the proposed change) to achieve better packing efficiency is to allow non-power-of-2 tree widths.

Is this a consensus-breaking change?

If we're going to extend it, would it be okay to make it 64? The reason I ask is for futureproofing purposes; one of the advantages of RGB is its capability to scale. The alternative of course is to have multiple UTXOs, but ideally this complexity isn't necessary to scale.

I can see assets getting into the billions one day if assets from other chains are bridge-burnt into RGB contracts often. There are likely applications we can't imagine yet, but a limit in the billions offers a great deal of flexibility.

It is a consensus-breaking change, but we already had one with the required bugfix in #129, so I'd like to use the opportunity to make other improvements (like this one) as well.

I was also thinking about 64 bits; the only drawback is that it may worsen performance on hardware and low-end devices. A tree of depth 64 will have 2^64 + 2^63 + ... + 1 nodes, with ~18 quintillion at the leaf level alone, all of which must be computed at state transition time. Sounds like a performance & scalability killer.

So I am also thinking about improving the "fitness" of the protocols into smaller trees by introducing a cofactor. I expect this approach will allow us to fit up to 10000 assets into a depth-16 tree, and up to a million into a depth-24 tree. I am writing a PR demonstrating that.
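
A sketch of what such a cofactor search could look like (purely illustrative: the modulus formula `2^depth - cofactor` and the 8-bit cofactor range are my assumptions, not necessarily what the PR implements):

```rust
use std::collections::HashSet;

// Illustrative cofactor search: instead of placing each 256-bit id at
// `id mod 2^depth`, place it at `id mod (2^depth - cofactor)` and scan
// small cofactors until every id lands on a distinct leaf.

const MAX_COFACTOR: u64 = 255; // assumed range, for illustration only

/// 256-bit big-endian id reduced modulo a (small) u64, via Horner's rule.
/// Unlike the power-of-two case, the full id matters here.
fn id_mod(id: &[u8; 32], m: u64) -> u64 {
    id.iter()
        .fold(0u128, |acc, &b| ((acc << 8) | b as u128) % m as u128) as u64
}

/// Find a cofactor under which all ids map to distinct slots at `depth`.
fn find_cofactor(ids: &[[u8; 32]], depth: u8) -> Option<u64> {
    let width = 1u64 << depth;
    (0..MAX_COFACTOR.min(width)).find(|&cf| {
        let modulus = width - cf;
        let mut seen = HashSet::new();
        ids.iter().all(|id| seen.insert(id_mod(id, modulus)))
    })
}

/// Smallest depth (with its cofactor) fitting all ids, up to `max_depth`.
fn fit(ids: &[[u8; 32]], max_depth: u8) -> Option<(u8, u64)> {
    (1..=max_depth).find_map(|d| find_cofactor(ids, d).map(|cf| (d, cf)))
}
```

Scanning cofactors turns each depth from a single candidate modulus into a few hundred of them, which is where the extra packing density comes from.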

So, testing results.

With the current approach (no cofactor, tree depth limit is 16) we can fit up to 48 contracts/assets:

Depth: avg=11.20 [12, 10, 12, 11, 11, 9, 13, 12, 10, 12]

Once we go to 50, in at least 10% of cases we can't fit them into a 16-level-deep tree (i.e. with at least 10% probability, 50 random 256-bit numbers will collide under every modulus from the set {2, 4, 8, 16, 32, 64, ..., 2^16}).
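
Failure rates of this kind are cheap to estimate with a throwaway Monte-Carlo harness; a sketch assuming uniformly random big-endian ids and the rand crate (0.8 API), with numbers naturally varying between runs and sample sizes:

```rust
use std::collections::HashSet;

use rand::RngCore;

// Monte-Carlo estimate of how often `n` random 256-bit ids fail to fit
// a 16-level tree without a cofactor. If two ids collide modulo 2^16
// they also collide modulo every smaller power of two, so testing the
// largest width suffices: failure <=> collision mod 2^16.
fn failure_rate(n: usize, trials: usize) -> f64 {
    let mut rng = rand::thread_rng();
    let mut failures = 0usize;
    for _ in 0..trials {
        let mut seen = HashSet::with_capacity(n);
        let collides = (0..n).any(|_| {
            let mut id = [0u8; 32];
            rng.fill_bytes(&mut id);
            // id mod 2^16 = the last two bytes of a big-endian id
            !seen.insert(u16::from_be_bytes([id[30], id[31]]))
        });
        if collides {
            failures += 1;
        }
    }
    failures as f64 / trials as f64
}

fn main() {
    println!("P(50 ids miss depth 16) ≈ {:.4}", failure_rate(50, 10_000));
}
```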

Once I introduce a cofactor in #131, I am able to fit 500 contracts/protocols into the same 16-level-deep tree:

Depth: avg=14.90 [15, 15, 14, 15, 15, 15, 15, 15, 15, 15]
Cofactors: [2, 1, 143, 30, 56, 5, 66, 62, 49, 15]
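
For reference, pairs like these are what a search in the style of the `fit` sketch above would report; a hypothetical reproduction of one run of the experiment (reusing `fit` and assuming the rand crate, 0.8 API):

```rust
use rand::RngCore;

/// 500 random 256-bit protocol ids standing in for real contract ids.
fn random_ids(n: usize) -> Vec<[u8; 32]> {
    let mut rng = rand::thread_rng();
    (0..n)
        .map(|_| {
            let mut id = [0u8; 32];
            rng.fill_bytes(&mut id);
            id
        })
        .collect()
}

fn main() {
    // Search depths 1..=16 for the first (depth, cofactor) pair that
    // fits all 500 ids, using `fit` from the cofactor sketch above.
    match fit(&random_ids(500), 16) {
        Some((depth, cofactor)) => println!("depth={depth} cofactor={cofactor}"),
        None => println!("no fit within a 16-level tree"),
    }
}
```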

I think with #132 we now have what we need and 32 bits "should be enough for everyone" (C)