AleoNet / snarkVM

A Virtual Machine for Zero-Knowledge Executions

Home Page: https://snarkvm.org

Price in time-intensive finalize scopes

howardwu opened this issue

A hash-heavy program costing 1_330 credits and containing 2_000 BHP hashes over triple-nested arrays takes 70 seconds.
A large program costing 30_000 credits and containing only simple opcodes takes 150 seconds.
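
For context, the effective price per second implied by the two examples above (this is just arithmetic over the numbers already quoted, not snarkVM code):

```rust
// Back-of-the-envelope comparison of the two examples in this issue.
fn main() {
    let hash_heavy = 1_330.0 / 70.0;      // ~19 credits per second of finalize time
    let opcode_heavy = 30_000.0 / 150.0;  // 200 credits per second of finalize time
    println!("hash-heavy program:   {hash_heavy:.0} credits/sec");
    println!("opcode-heavy program: {opcode_heavy:.0} credits/sec");
    // The hash-heavy program buys roughly 10x more runtime per credit,
    // which is the underpricing this issue is about.
}
```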

@vicsn We need to update snarkVM pricing for these cases. Any execution that takes more than 1 second needs to be priced accordingly to prevent exactly this case.

Originally posted by @howardwu in https://github.com/AleoHQ/snarkVM/pull/2368#discussion_r1510085957

A runtime benchmark table for opcodes + pricing table is in the works.

EDIT: still syncing with Mike on this.

Hi @pranav and @howard, after syncing with Mike today, we believe we need the following adjustments; let us know if you agree with the direction (a rough constants sketch follows the list):
1. Set BLOCK_SPEND_LIMIT to a number of credits equal to the price of a finalize block of 7.5 seconds.
- Suggestion: Mike will still continue analysis on this, but based on our current model it is likely above 100 credits.
2. Limit number of bytes hashed in finalize blocks as defense in depth.
- Suggestion: 4 million bytes, as BHP hashing 4 million bytes takes 2 seconds in Mike's benchmarks.
3. Set a TRANSACTION_SPEND_LIMIT, for UX reasons, so a single transaction can't eat up all the finalize space.
- Suggestion: BLOCK_SPEND_LIMIT / 100.
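
A minimal constants sketch of these three adjustments. The names BLOCK_SPEND_LIMIT and TRANSACTION_SPEND_LIMIT come from this thread; the concrete values are placeholders pending Mike's analysis and are not the actual snarkVM constants:

```rust
/// Placeholder for (1): total finalize cost allowed per block, targeting
/// roughly 7.5 seconds of finalize runtime ("likely above 100 credits").
/// Expressed in microcredits (1 credit = 1_000_000 microcredits).
const BLOCK_SPEND_LIMIT: u64 = 100 * 1_000_000;

/// Placeholder for (2): defense-in-depth cap on bytes hashed in finalize,
/// since BHP hashing 4 million bytes takes ~2 seconds in the benchmarks.
const MAX_BYTES_HASHED: usize = 4_000_000;

/// Placeholder for (3): per-transaction cap so a single transaction
/// cannot consume the whole block's finalize budget.
const TRANSACTION_SPEND_LIMIT: u64 = BLOCK_SPEND_LIMIT / 100;
```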

Basically as priced currently in the snarkVM code: opcodes are priced such that 1 second of runtime costs 100 credits. This was future-proofing, to ensure that people can reasonably afford to use hash functions down the line. However, governance will be able to adjust prices later to keep the network usable, so at the outset hash function prices can be higher, since they are the most abusable opcodes.
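
As a concrete reading of that rule, here is a minimal sketch of the runtime-to-price mapping; the 100-credits-per-second figure is from this comment, while the function name and microcredit conversion are illustrative rather than the actual snarkVM pricing code:

```rust
/// Illustrative mapping from benchmarked runtime to a finalize price under the
/// "1 second of runtime costs 100 credits" rule. Returns microcredits
/// (1 credit = 1_000_000 microcredits).
fn price_in_microcredits(runtime_micros: u64) -> u64 {
    // 100 credits/second = 100_000_000 microcredits per 1_000_000 µs
    //                    = 100 microcredits per microsecond of runtime.
    runtime_micros * 100
}

// Example: an opcode benchmarked at 500 µs would be priced at
// price_in_microcredits(500) = 50_000 microcredits = 0.05 credits.
```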

Raising the per-byte cost reasonably prices out high-byte inputs like nested arrays, but it still leaves open the ability for people to spend much less money to extend the block time by using a LOT of hashes on small-byte inputs like field elements. Limiting the number of hashes per execution doesn't solve this, as people could send many executions at the maximum limit and achieve the same result.
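
To make that concrete, a toy cost model (all constants hypothetical) showing why a higher per-byte price alone does not deter a flood of small-input hashes:

```rust
// Toy cost model with hypothetical prices: one hash over a 4 MB nested array
// versus 2_000 hashes over 32-byte field elements. The flood stays far
// cheaper even with a high per-byte price, yet (per the premise of this
// comment) can still meaningfully extend block time.
fn hash_cost(base: u64, per_byte: u64, bytes: u64) -> u64 {
    base + per_byte * bytes
}

fn main() {
    let (base, per_byte) = (10_000u64, 100u64); // hypothetical microcredit prices

    let one_big_hash = hash_cost(base, per_byte, 4_000_000);
    let small_hash_flood: u64 = (0..2_000).map(|_| hash_cost(base, per_byte, 32)).sum();

    println!("one 4 MB hash:      {one_big_hash} microcredits");
    println!("2_000 small hashes: {small_hash_flood} microcredits");
}
```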

In my opinion, the sensible options are:

  1. Increase the base price of all the hash opcodes 10x (at least) at network outset, so users trying to flood the network with hash-heavy executions hit the BLOCK_SPEND_LIMIT much faster, before they can slow down block times. It would also discourage people from attempting the attack (as well as plain excessive hash usage from poor finalize scope design) because of how expensive it is. I'm convinced this is a measure that should be taken.

  2. A TRANSACTION_SPEND_LIMIT or MAX_BYTES_HASHED per execution would implicitly keep single programs from using a lot of hashes, which would be helpful, but it wouldn't limit the flood case.

  3. Potentially putting a hash limit per block, or limiting the total number of bytes hashed per block. This is the weirder option, but it would definitively stop validators from trying to process an amount of hash function usage that slows the network (a rough sketch of options 1 and 3 follows this list).
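
A rough sketch of what options 1 and 3 could look like; every name and constant here is hypothetical, not an existing snarkVM API:

```rust
/// Option 1 (hypothetical): multiplier applied to the base price of every hash
/// opcode at network outset; governance could lower it later.
const HASH_OPCODE_PRICE_MULTIPLIER: u64 = 10;

/// Option 3 (hypothetical): per-block ceiling on the total bytes fed to hash opcodes.
const MAX_HASH_BYTES_PER_BLOCK: u64 = 4_000_000;

/// Returns whether a candidate execution still fits within the per-block hash budget.
fn fits_hash_budget(bytes_hashed_so_far: u64, execution_hash_bytes: u64) -> bool {
    bytes_hashed_so_far + execution_hash_bytes <= MAX_HASH_BYTES_PER_BLOCK
}
```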

commented

On #2, does the BLOCK_SPEND_LIMIT address the flood case?
The total spent in a block is the sum of the cost of the accepted and rejected executions.
If an execution's cost causes the running total to exceed BLOCK_SPEND_LIMIT, then it's aborted.
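
A minimal sketch of that check (the Execution type and its cost field are placeholders; the real enforcement lives in the ledger/consensus code and will differ):

```rust
/// Placeholder execution with a precomputed finalize cost in microcredits.
struct Execution {
    cost: u64,
}

/// Walks candidate executions in order, keeping a running total of cost
/// (accepted and rejected executions both count), and aborts any execution
/// whose cost would push the total past the block spend limit.
fn select_executions(candidates: Vec<Execution>, block_spend_limit: u64) -> (Vec<Execution>, Vec<Execution>) {
    let mut total = 0u64;
    let mut included = Vec::new();
    let mut aborted = Vec::new();
    for execution in candidates {
        if total + execution.cost > block_spend_limit {
            aborted.push(execution); // over the block budget: aborted
        } else {
            total += execution.cost;
            included.push(execution);
        }
    }
    (included, aborted)
}
```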

@d0cd the conclusion I've come to is that a sufficiently low BLOCK_SPEND_LIMIT, somewhere around 750 as you've suggested, plus a 20x increase to the current hash opcode prices, should be sufficient to prevent DoS attacks via hash opcodes (and this also has the benefit of being the simplest option).

I'd recommend we go with this route.