Graypaper: The JAM Specification

The description and formal specification of the Jam protocol, a potential successor to the Polkadot Relay chain.

Build with xelatex.

https://graypaper.com/

Remaining near-term

Finesse

  • Make all subscript names capitalized
  • Ensure all definitions are referenced
  • Link and integrate to Bandersnatch RingVRF references (Davide/Syed) IN-PROGRESS
  • Remove any "TODOs" in text
  • Macrofy everything
  • Limit number of extrinsics in a WP.

Final PVM

  • 64-bit PVM
  • Gas pricing (see the sketch after this list)
    • Merkle reads in terms of nodes traversed.
    • Non-linear gas for export/import host calls
  • No pages mappable in first 64 KB
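
A minimal Rust sketch of how the gas-pricing items above could be shaped, assuming purely illustrative constants and a quadratic term for the non-linear component; none of these values or names come from the specification:

    // Rust sketch; constants and the quadratic export term are assumptions, not spec values.
    const MERKLE_READ_BASE: u64 = 100;
    const MERKLE_READ_PER_NODE: u64 = 20;
    const EXPORT_BASE: u64 = 500;
    const EXPORT_PER_SEGMENT: u64 = 50;

    /// Gas for a Merkle read, priced in terms of nodes traversed.
    fn merkle_read_gas(nodes_traversed: u64) -> u64 {
        MERKLE_READ_BASE + MERKLE_READ_PER_NODE * nodes_traversed
    }

    /// Non-linear gas for an export/import host call: the superlinear term
    /// penalises very large single calls.
    fn export_gas(segments: u64) -> u64 {
        EXPORT_BASE + EXPORT_PER_SEGMENT * segments + segments * segments
    }

    fn main() {
        assert_eq!(merkle_read_gas(3), 160);
        assert_eq!(export_gas(4), 716);
    }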

Final DA

  • Migrate formalization & explanation:
    • guaranteeing-specific stuff into relevant section
    • assurance-specific stuff into relevant section
    • auditing-specific stuff into relevant section

Guaranteeing & Auditing

  • Specify announcement signatures
  • Specify how to build a perspective on other validators from announcements

Discussion and Conclusions/Further Work

  • Security assumptions: redirect to ELVES paper
  • Creating a parachains service: further work (RFC for upgrade perhaps)
    • Key differences
      • limited size of Work Output vs unlimited candidate receipt
      • Laissez-faire on Work Items vs requirement for valid transition
      • Hermit relay (staking &c is on system chains)
    • Supporting liveness
    • Supporting *MP
    • No need for UMP/DMP
  • Compare with danksharding v1
  • Deeper discussion: cost & latency comparison with RISC0-VM and the latest ZK work.
  • Include full calculations for bandwidth requirements.

Stuff before 1.0

Final networking protocol

  • Consider a simple network protocol needed for M1/M2 and a production protocol for M3+
  • Block distribution via EC and proactive-chunk-redistribution
  • Guarantor-guarantor handover
  • Star-shaped point-to-point extrinsic distribution
  • Mixnet for ticket submission

Bring together sub-protocols

  • Better integration to Grandpa paper
  • Better description of Beefy
  • Better integration to Bandersnatch RingVRF.

Ideas to consider

Work Packages

At present all WorkItems can succeed or fail independently. Instead we should be able to specify co-dependency criteria, so that if one fails, both fail. This should be respected through to accumulation, whereby an accumulator can commit to accumulating the WorkResult iff there is a signal from the other accumulator that the result has been accumulated there.
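
One way the co-dependency idea could be expressed, as a minimal Rust sketch in which every type and name is an assumption rather than part of the specification: a result carries an optional co-dependency group, and accumulation of a grouped result is gated on its counterparts having succeeded and, where a counterpart belongs to another service, on a signal that it has been accumulated there.

    // Illustrative sketch only; names and types are assumptions, not spec.

    /// A work-item's co-dependency: results in the same group succeed or fail together.
    #[derive(Clone, Copy, PartialEq, Eq)]
    struct CoDependencyGroup(u32);

    struct WorkResult {
        service: u32,
        group: Option<CoDependencyGroup>,
        ok: bool,
    }

    /// Accumulate a result only if every other result in its group succeeded and,
    /// for members belonging to another service, the counterpart accumulator has
    /// signalled that it has accumulated its member of the group.
    fn may_accumulate(
        result: &WorkResult,
        group_members: &[WorkResult],
        counterpart_signalled: impl Fn(CoDependencyGroup, u32) -> bool,
    ) -> bool {
        match result.group {
            None => result.ok,
            Some(g) => {
                result.ok
                    && group_members
                        .iter()
                        .filter(|r| r.group == Some(g))
                        .all(|r| {
                            r.ok && (r.service == result.service || counterpart_signalled(g, r.service))
                        })
            }
        }
    }

    fn main() {
        let g = CoDependencyGroup(1);
        let members = [
            WorkResult { service: 1, group: Some(g), ok: true },
            WorkResult { service: 2, group: Some(g), ok: true },
        ];
        // Without the counterpart's signal, accumulation must wait.
        assert!(!may_accumulate(&members[0], &members, |_, _| false));
        // With the signal, accumulation may proceed.
        assert!(may_accumulate(&members[0], &members, |_, _| true));
    }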

Statistics/Bookkeeping

  • Consider integrating the subjective extrinsic and state:
    • If so, have three items to allow for a whole epoch of opinion submission
    • In which case allow for guaranteeing validator keys from last epoch to gain points

General

  • Think about time and relationship between lookup-anchor block and import/export period.
    • Lookup anchor: maybe it should be 48 hours, since the lookup anchor can already be up to 24 hours after reporting and we want something available for up to 24 hours after that?
  • Refine arguments:
    • Currently passing in the WP hash, some WP fields and all manifest preimages; consider passing in the whole work-package and a work-item index (see the sketch after this list).
  • Consider removal of the arrow-above notation in favour of subscript and ellipsis (this only works for the right-arrow).
  • Optional on_report entry point
  • Make memo bounded, rather than fixed.
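
For the "Refine arguments" item above, here is a minimal Rust sketch contrasting the two argument shapes; every name is hypothetical and the refinement logic is elided:

    // Illustrative sketch only; every name here is an assumption, not spec.

    struct Hash([u8; 32]);

    struct WorkItem {
        payload: Vec<u8>,
    }

    struct WorkPackage {
        authorization: Vec<u8>,
        items: Vec<WorkItem>,
    }

    /// Roughly the current shape: refine receives the WP hash, selected WP fields
    /// and all manifest preimages.
    fn refine_current(
        _wp_hash: &Hash,
        _authorization: &[u8],
        _payload: &[u8],
        _manifest_preimages: &[Vec<u8>],
    ) -> Vec<u8> {
        Vec::new() // refinement logic elided
    }

    /// The alternative under consideration: pass the whole work-package plus the
    /// index of the work-item being refined; everything else is read from it.
    fn refine_alternative(work_package: &WorkPackage, item_index: usize) -> Vec<u8> {
        let item = &work_package.items[item_index];
        refine_current(
            &Hash([0; 32]), // the hash would be computed from the package itself
            &work_package.authorization,
            &item.payload,
            &[],
        )
    }

    fn main() {
        let wp = WorkPackage {
            authorization: vec![],
            items: vec![WorkItem { payload: vec![1, 2, 3] }],
        };
        let _ = refine_alternative(&wp, 0);
    }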

Done

  • Statistics/Bookkeeping
    • Integrate into intro and definitions.
  • All "where" and "let" lines are unnumbered/integrated
  • DA2
    • Update chunks/segments to new size of 12 bytes / 4KB in the availability sections, especially the work-packages and work-reports section and appendix H.
    • export is in multiples of 4096 bytes.
    • Manifest specifies WI (maximum) export count.
    • import is provided as concatenated segments of 4096 bytes, as per manifest.
    • Constant-depth merkle root
    • (Partial) Merkle proof generation function
    • New erasure root (4 items per validator; 2 hashes + 2 roots).
    • Specification of import hash (to include concatenated import data and proof).
      • Proof spec.
      • Specification of segment root.
    • Additional two segment-roots in WR.
      • Specification of segment tree.
    • Specification of segment proofs.
    • Specification of final segments for DA and ER.
    • Re-erasure-code imports.
    • Fetching imports and verification.
  • Independent definition of PVM.
  • Edit Previous Work.
  • Edit Discussion.
  • Document guide at beginning.
  • Move constants to appendix and define at first use.
  • Context strings for all signatures.
    • List of all context strings in definitions.
  • Remove header items from ST dependency graph where possible.
  • Update serialization
    • For $\beta$ component $b$ - implement MMR encode.
    • Additional field: $\rho_g$
  • Link and integrate to RISCV references (Jan) HAVE SPEC
  • Link and integrate to Beefy signing spec (Syed)
  • Link and integrate to Erasure-Coding references (work with Al)
  • Grandpa/best-block: Disregard blocks which we believe are equivocated unless finalized.
  • Other PVM work
    • Define sbrk properly.
    • Update host functions to agreed API.
    • Figure out what to do with the jump table.
  • Define inner PVM host-calls
    • Spec below
    • Figure out what the $c_i$/$c_b$ are
    • Avoid entry point
    • Ensure code and jump-table is amalgamated down to VM-spec
    • Move host calls to use register index
  • Update serialization for Judgement extrinsic and judgments state.
  • Define Beefy process
    • Accumulate: should return Beefy service hash
    • Define Keccak hash $\mathbb{H}_K$
    • Remove Beefy root from header
    • Put the Beefy root into recent blocks after computation
    • Recent blocks should store MMR of roots of tree of accumulated service hashes
    • Define an MMR
    • Add \textsc{bls} public key to keyset (48 octets).
    • Specify requirement of validators to sign.
  • Define audit process.
    • Erasure coding root must be correct
    • This means we cannot assume that the WP hash can be inverted.
    • Instead, we assume that we can collect 1/3 of the chunks and combine them to produce some data
    • Then we check (see the sketch after this list):
      • whether that data hashes to the WP hash.
      • whether the erasure-coded chunks merklise into a tree with the given root.
    • If so we continue.
    • NOTE: The above should be done in guarantor stage also.
  • Auditing: Always finish once announced.
  • Judgments: Should cancel work-report from Rho prior to accumulation.
  • Signed judgments should not include guarantor keys;
    • Judgement extrinsic should use keys from rho.
  • Check "Which History" section and ensure it mentions possibility for reversion via judgment.
    • No reversion beyond finalized
    • Of the unfinalized extension, exclude blocks containing work-reports which appear in the banned-set of any other (valid) block.
  • Prior work and refine/remove the zk argumentation (work with Al)
  • Disputes state transitioning and extrinsic (work with Al)
  • Finish Merklization description
  • Bibliography
  • Updated PVM
  • Remove extrinsic segment root. Rename "* segment-root" to just "segment-root".
  • Combine chunk-root for WP, concatenated extrinsics and concatenated imports.
  • Imports are host-call
  • Make work-report field r bold.
  • Segmented DA v2
    • Underlying EC doesn't change, need to make clear segments are just a double-EC
  • Need to translate the basic work result into an "L"; do it in the appendix to ease layout
    • service - easy
    • service code hash - easy
    • payload hash - easy
    • gas prioritization - just from WP?
    • Consider introducing a host-call for reading manifest data rather than always passing it in.
  • Guarantees by validator indices
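
The reconstruct-then-verify check described under "Define audit process" above might look roughly as follows in Rust; the hash and erasure-coding functions are stand-ins and all names are assumptions:

    // Rust sketch; the hash and erasure-coding functions are stand-ins, not spec APIs.

    type Hash = [u8; 32];

    /// Stand-in for erasure-coding reconstruction: from any 1/3 of the chunks we
    /// can recover *some* data, but not necessarily the original work-package
    /// unless the checks below pass.
    fn reconstruct(chunks: &[Vec<u8>]) -> Vec<u8> {
        chunks.concat()
    }

    /// Stand-in for the protocol hash function.
    fn hash(data: &[u8]) -> Hash {
        let mut h = [0u8; 32];
        for (i, b) in data.iter().enumerate() {
            h[i % 32] ^= *b;
        }
        h
    }

    /// Stand-in for re-erasure-coding the candidate data and Merklising the chunks.
    fn erasure_root_of(data: &[u8]) -> Hash {
        hash(data)
    }

    /// Audit-time (and guarantee-time) check: reconstruct from >= 1/3 of the
    /// chunks, then verify that (a) the data hashes to the claimed WP hash and
    /// (b) the re-erasure-coded chunks Merklise to the claimed erasure root.
    fn check_reconstruction(chunks: &[Vec<u8>], wp_hash: &Hash, erasure_root: &Hash) -> bool {
        let data = reconstruct(chunks);
        hash(&data) == *wp_hash && erasure_root_of(&data) == *erasure_root
    }

    fn main() {
        let chunks = vec![vec![1u8, 2, 3], vec![4, 5, 6]];
        let data = reconstruct(&chunks);
        assert!(check_reconstruction(&chunks, &hash(&data), &erasure_root_of(&data)));
    }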

ELVES

  • Don't immediately alter kappa mid-epoch as it affects off-chain judgments.
  • Instead, apply the blacklist to guarantor sig verification directly.
  • Include the core in the WR.
  • Deposit the WR's signatures in with the judgment.
  • Require at least one negative judgment to be included with a positive verdict, and place the signer in the offender keys.
  • Serialization of judgment stuff
  • It should always be possible to submit further guarantee signatures on a known-bad report in order to place the signers in the offender set.
    • Only from lambda and kappa?
  • Use posterior kappa for everything except judgments.

% A set of independent, sequential, asynchronously interacting 32-octet state machines, each of whose transitions lasts around 2 seconds of WebAssembly computation of a predetermined and fixed program and whose transition arguments are 5 MB. While well-suited to the verification of Substrate blockchains, it is otherwise quite limiting.

Final DA

  • Include an epochal on-chain lookup from Work Package hash to segments root.
    • Allow import spec to be given as either WP hash (in case of being recent) or segments root.
    • How can this work given that WP hash to segments-root needs to happen in-core?
    • Include a bond to any SR->WPh lookups in the WR.
    • Check these bonds only just prior to accumulation; if they fail, drop the report and reduce the guarantors' points.
    • Update on-chain SR->WPh immediately at the time of being reported (don't wait for accumulation).
  • Define Erasure Coding proof means
    • Define a binary Merkle proof-generation function which compiles neighbours down to the leaf.
    • Define a binary Merkle proof-verification function: there exists a sequence of values which contains our value and Merklises to some root.
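
As a hedged illustration of the last two items (the hash functions and pairing rule below are stand-ins for whatever the graypaper specifies), a binary Merkle proof can be generated by collecting the sibling hash at each level on the path from the leaf to the root, and verified by folding those siblings back up and comparing with the root:

    // Rust sketch; hash functions and padding rule are stand-ins, not spec.

    type Hash = [u8; 32];

    fn hash_leaf(data: &[u8]) -> Hash {
        let mut h = [0u8; 32];
        for (i, b) in data.iter().enumerate() {
            h[i % 32] ^= *b;
        }
        h
    }

    fn hash_node(left: &Hash, right: &Hash) -> Hash {
        let mut h = [0u8; 32];
        for i in 0..32 {
            h[i] = left[i] ^ right[i].rotate_left(1);
        }
        h
    }

    /// Generate a proof for the leaf at `index`: the sibling hash at each level,
    /// from the leaves upwards. Odd-length levels are padded by duplicating
    /// their last hash.
    fn prove(leaves: &[Vec<u8>], mut index: usize) -> (Hash, Vec<Hash>) {
        let mut level: Vec<Hash> = leaves.iter().map(|l| hash_leaf(l)).collect();
        let mut proof = Vec::new();
        while level.len() > 1 {
            if level.len() % 2 == 1 {
                level.push(*level.last().unwrap());
            }
            let sibling = if index % 2 == 0 { index + 1 } else { index - 1 };
            proof.push(level[sibling]);
            level = level.chunks(2).map(|p| hash_node(&p[0], &p[1])).collect();
            index /= 2;
        }
        (level[0], proof)
    }

    /// Verify that `leaf` sits at `index` in a tree with the given `root`:
    /// fold the sibling hashes back up and compare.
    fn verify(root: &Hash, leaf: &[u8], mut index: usize, proof: &[Hash]) -> bool {
        let mut acc = hash_leaf(leaf);
        for sibling in proof {
            acc = if index % 2 == 0 {
                hash_node(&acc, sibling)
            } else {
                hash_node(sibling, &acc)
            };
            index /= 2;
        }
        acc == *root
    }

    fn main() {
        let leaves: Vec<Vec<u8>> = (0u8..5).map(|i| vec![i; 8]).collect();
        let (root, proof) = prove(&leaves, 3);
        assert!(verify(&root, &leaves[3], 3, &proof));
        assert!(!verify(&root, &leaves[0], 3, &proof));
    }

Successful verification demonstrates exactly the property in the last item: there exists a sequence of values, containing ours, which Merklises to the given root.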
