First, understand that all SNARKs are arguments for NP statements: they let a prover convince a verifier that it knows a witness for an NP statement.

Spartan is a SNARK for R1CS satisfiability, an NP-complete language that generalises arithmetic circuit satisfiability.

Properties:

  • transparent
  • can operate on any field
  • time-optimal prover?
  • sub-linear verification cost

Precursors to Spartan:

  • Bulletproofs
  • Hyrax
  • Fractal
  • Supersonic
  • Aurora
  • STARK

What Spartan introduces:

  • computational commitments: succinct commitments to the description of a computation.
  • SPARK: a compiler that converts any extractable PCS for (dense) multilinear polynomials into one for sparse multilinear polynomials.
  • an encoding of R1CS instances as low-degree polynomials.

How does Spartan use these three pieces to build a SNARK with a sub-linear-cost verifier?

Spartan uses R1CS as its frontend for arithmetic circuits, i.e. it proves satisfiability of arbitrary R1CS instances. Instead of taking the approach of Groth16, which is pairing-based, it takes the route of polynomial commitment schemes (PCS). It builds on the sumcheck protocol, which forms the first layer of its proof system.

“Our argument makes a black box use of an extractable polynomial commitment scheme in conjunction with an information-theoretic protocol, so its soundness holds under the assumptions needed by the polynomial commitment scheme (there exist many polynomial commitment schemes that can be instantiated under standard cryptographic assumptions).” - Spartan paper: page 3

Computational commitment

Creating a succinct commitment to the description of the NP statement is needed because otherwise the verifier would have to process the full NP statement being proven before it could verify the proof.

Section 4: Encoding R1CS as LDP

Section 4 describes a compact encoding of an R1CS instance as a degree-3 multivariate polynomial.

For an R1CS instance with matrices $A, B, C$ and $Z = (io, 1, w)$, define

$$\tilde F_{io}(x) = \Big(\sum_{y \in \{0,1\}^s} \tilde A(x, y) \cdot \tilde Z(y)\Big) \cdot \Big(\sum_{y \in \{0,1\}^s} \tilde B(x, y) \cdot \tilde Z(y)\Big) - \sum_{y \in \{0,1\}^s} \tilde C(x, y) \cdot \tilde Z(y),$$

where $\tilde A, \tilde B, \tilde C, \tilde Z$ are multilinear extensions and $s = \log m$. This function evaluates to 0 at every $x \in \{0,1\}^s$ iff the instance is satisfiable. To prove this, note that the prover could show $\tilde F_{io}(x) = 0$ for each $x \in \{0,1\}^s$, but this is not enough, as the verifier would have to verify each term individually. For this, we can create another polynomial

$$Q_{io}(t) = \sum_{x \in \{0,1\}^s} \tilde F_{io}(x) \cdot \widetilde{\mathrm{eq}}(t, x),$$

where $\widetilde{\mathrm{eq}}(t, x) = \prod_{i=1}^{s} \big(t_i x_i + (1 - t_i)(1 - x_i)\big)$ is the multilinear eq-polynomial. Note that $Q_{io}$ is multilinear in $t$ and agrees with $\tilde F_{io}$ on the hypercube, so $Q_{io}$ is a zero polynomial exactly when it evaluates to 0 at all points in the domain, i.e. because $\tilde F_{io}$ evaluates to zero on all of $\{0,1\}^s$.
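To make the next step explicit (my own unpacking of the standard argument, not a quote from the paper): the verifier picks a random $\tau \in \mathbb{F}^s$, and since a nonzero $Q_{io}$ (total degree at most $s = \log m$) vanishes at a random $\tau$ with probability at most $\log m / |\mathbb{F}|$ by Schwartz–Zippel, the claim "$Q_{io}$ is the zero polynomial" reduces to the single claim

$$0 = Q_{io}(\tau) = \sum_{x \in \{0,1\}^s} \tilde F_{io}(x) \cdot \widetilde{\mathrm{eq}}(\tau, x),$$

which is exactly a sumcheck instance for the degree-3 polynomial $G_{io,\tau}(x) := \tilde F_{io}(x) \cdot \widetilde{\mathrm{eq}}(\tau, x)$.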

Section 5:

Creates a NIZK for R1CS:

At the end of the first sumcheck the verifier needs $\tilde F_{io}(r_x)$ for a random $r_x$. P makes three claims $v_A, v_B, v_C$ about evaluations

$$v_A = \sum_{y \in \{0,1\}^s} \tilde A(r_x, y) \cdot \tilde Z(y), \quad v_B = \sum_{y \in \{0,1\}^s} \tilde B(r_x, y) \cdot \tilde Z(y), \quad v_C = \sum_{y \in \{0,1\}^s} \tilde C(r_x, y) \cdot \tilde Z(y),$$

such that $\tilde F_{io}(r_x) = v_A \cdot v_B - v_C$. P can summarise all three separate checks about the values of A, B, C in one check: V samples random $r_A, r_B, r_C \in \mathbb{F}$ and the combined claim becomes

$$r_A v_A + r_B v_B + r_C v_C = \sum_{y \in \{0,1\}^s} \big(r_A \tilde A(r_x, y) + r_B \tilde B(r_x, y) + r_C \tilde C(r_x, y)\big) \cdot \tilde Z(y).$$

Now, the prover just has to invoke a second sumcheck on this claim; at its end V evaluates $\tilde A(r_x, r_y)$, $\tilde B(r_x, r_y)$, $\tilde C(r_x, r_y)$ and $\tilde Z(r_y)$.

Note that the prover only needs to send $\tilde w(r_y)$, since the rest of $\tilde Z(r_y)$ can be calculated by the verifier from $(io, 1)$ in $O(|io|)$ time. But simply sending the evaluations of $\tilde w$ so that V can evaluate it at $r_y$ requires $O(|w|)$ time and communication. To prevent this, the prover uses an extractable PCS to commit to $\tilde w$ and then opens the single evaluation $\tilde w(r_y)$.
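Since "invoke sumcheck" is doing most of the work above, here is a minimal sketch of the protocol itself, for the multilinear case only and over a toy prime field. Spartan's first sumcheck actually runs on a degree-3 polynomial (the prover sends a degree-3 univariate per round instead of the linear interpolation below), and all names here are my own, not the paper's.

```python
# Minimal sumcheck sketch for a *multilinear* polynomial given by its table of
# evaluations over {0,1}^s. Illustrative only; not Spartan's optimized prover.
import random

P = 2**61 - 1  # toy prime field modulus (an assumption for this sketch)

def fix_first_var(table, r):
    """Restrict f(x1, x2, ..., xs) to f(r, x2, ..., xs).
    The table lists evaluations over {0,1}^s with x1 as the most significant bit."""
    half = len(table) // 2
    return [((1 - r) * table[i] + r * table[half + i]) % P for i in range(half)]

def sumcheck(table):
    """Prove/verify the claim sum_{x in {0,1}^s} f(x) = claim. Returns (ok, r, f(r))."""
    claim = sum(table) % P
    point = []
    while len(table) > 1:
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % P, sum(table[half:]) % P  # g_j(0) and g_j(1)
        if (g0 + g1) % P != claim:                             # verifier's round check
            return False, point, None
        r = random.randrange(P)                                # verifier's challenge
        claim = ((1 - r) * g0 + r * g1) % P                    # new claim: g_j(r)
        table = fix_first_var(table, r)
        point.append(r)
    # Final check: the remaining claim must equal f(r_1, ..., r_s). In Spartan, this
    # single evaluation is exactly what the polynomial commitment (PC.Eval) supplies.
    return table[0] == claim, point, table[0]

ok, point, eval_at_point = sumcheck([3, 1, 4, 1, 5, 9, 2, 6])  # f over {0,1}^3
assert ok
```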

Putting it together yields the following NIZK, with costs:

  • P costs:
    • 2 sumcheck instances: $O(n)$ field operations
    • 1 PC.Commit, PC.Eval for a $\log m$-variate multilinear polynomial ($\tilde w$)
  • V costs:
    • 2 sumcheck instance verifications: $O(\log m)$
    • evaluate $\tilde A(r_x, r_y), \tilde B(r_x, r_y), \tilde C(r_x, r_y)$: $O(n)$
  • communication:
    • sumcheck messages: $O(\log m)$
    • size of the commitment to $\tilde w$
    • evaluation proof for $\tilde w(r_y)$

This is because the verifier incurs costs linear in the size of the R1CS instance to evaluate $\tilde A$, $\tilde B$, $\tilde C$ at $(r_x, r_y)$.

I don’t think I understand this completely. How is this cost linear in circuit size?
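One way to unpack this (my own explanation using the standard sparse multilinear-extension formula, not a quote from the paper): each of $A, B, C$ has at most $n$ non-zero entries, and naively evaluating the multilinear extension of such a matrix at a random point touches each of them once:

$$\tilde A(r_x, r_y) = \sum_{(i, j)\,:\,A_{i,j} \neq 0} A_{i,j} \cdot \widetilde{\mathrm{eq}}(i, r_x) \cdot \widetilde{\mathrm{eq}}(j, r_y),$$

which is $\Theta(n)$ terms, i.e. linear in the size of the R1CS instance (the "circuit size").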

This is not a SNARK because verifier costs are not sublinear in $n$. Spartan converts it into one by making the prover commit (in a one-time preprocessing step) to the multilinear polynomials $\tilde A, \tilde B, \tilde C$ and later prove their evaluations at the random point $(r_x, r_y)$; this is exactly the computational commitment.

But the caveat is that existing PC schemes make this even worse: $\tilde A, \tilde B, \tilde C$ are $2\log m$-variate multilinear polynomials, so PC.Eval for all three PCS described in the paper costs grow with the dense size $m^2$ of the polynomial, even though only $n \ll m^2$ of its evaluations are non-zero.

Section 7: SPARK

Read from here. OR here

Offline Memory Checking

Offline memory checking is a method that enables a prover to demonstrate to a verifier that a read/write memory was used correctly. In such a memory system, values can be written to addresses and subsequently retrieved. This technique allows the verifier to efficiently confirm that the prover adhered to the memory’s rules (i.e., that the value returned by any read operation is indeed the most recent value that was written to that memory cell).

Jolt utilizes offline memory checking in the Bytecode prover, Lookup prover (for VM instruction execution), and RAM prover. The RAM prover must support a read-write memory. The Bytecode prover and Lookup prover need only support read-only memories. In the case of the Bytecode prover, the memory is initialized to contain the Bytecode of the RISC-V program, and the memory is never modified (it’s read-only). And the lookup tables used for VM instruction execution are determined entirely by the RISC-V instruction set.

(The term “offline memory checking” refers to techniques that check the correctness of all read operations “all at once”, after the reads have all occurred—or in SNARK settings, after the purported values returned by the reads have been committed. Off-line checking techniques do not determine as a read happens whether or not it was correct. They only ascertain, when all the reads are checked at once, whether or not all of the reads were correct.

This is in contrast to “online memory checking” techniques like Merkle hashing that immediately confirm that a memory read was done correctly by insisting that each read includes an authentication path. Merkle hashing is much more expensive on a per-read basis for SNARK provers, and offline memory checking suffices for SNARK design. This is why Lasso and Jolt use offline memory checking techniques rather than online).
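As a concrete illustration, here is a tiny sketch of the bookkeeping for a read-only memory, in the counter-based style used by Lasso/Jolt as I understand it; the function and variable names are mine, not Jolt's actual code.

```python
# Offline memory checking bookkeeping for a READ-ONLY memory (counter variant).
# Every read is recorded twice: once in the read multiset RS with the old counter,
# and once in the write multiset WS with the incremented counter. The checker later
# verifies Init ∪ WS == RS ∪ Final as multisets, all at once, after the fact.
from collections import Counter

def run_reads(memory, addresses):
    """memory: list of cell values; addresses: sequence of read addresses."""
    counters = [0] * len(memory)             # per-address access counter ("timestamp")
    RS, WS = [], []
    for a in addresses:
        v, t = memory[a], counters[a]
        RS.append((a, v, t))                 # what the untrusted memory claims it returned
        WS.append((a, v, t + 1))             # write-back with incremented timestamp
        counters[a] = t + 1
    init  = [(a, memory[a], 0) for a in range(len(memory))]
    final = [(a, memory[a], counters[a]) for a in range(len(memory))]
    return init, RS, WS, final

def check(init, RS, WS, final):
    """The offline check: all reads were correct iff these multisets match."""
    return Counter(init) + Counter(WS) == Counter(RS) + Counter(final)

init, RS, WS, final = run_reads(memory=[7, 11, 13], addresses=[0, 0, 2, 1, 0])
assert check(init, RS, WS, final)
```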

Initialization Algorithm

TODO:

  • Initialize four timestamp counters
  • Implicitly assume a read operation before each write

Multiset Check

Define $RS$ and $WS$ as multisets of $(a, v, t)$ tuples recording the read and write operations, and $Init$ and $Final$ as multisets recording the initial and final memory state:

$$RS = \{(a_j, v_j, t_j)\}_{j \in [m]}, \quad WS = \{(a_j, v_j, t_j')\}_{j \in [m]}, \quad Init = \{(i, v_i, 0)\}_{i \in [M]}, \quad Final = \{(i, v_i^{final}, t_i^{final})\}_{i \in [M]}.$$

Here, $a$, $v$, and $t$ represent the address, value, and timestamp respectively, with $m$ being the total number of memory operations and $M$ the size of the RAM.

The verifier checks that the combination of $Init$ and $WS$ matches the combination of $RS$ and $Final$, disregarding the sequence of elements; this is known as a permutation (multiset equality) check:

$$Init \cup WS = RS \cup Final \quad \text{(as multisets)}.$$

Jolt conducts this check using a homomorphic hash function applied to each multiset: the check passes iff

$$\mathcal{H}_{\gamma,\tau}(Init) \cdot \mathcal{H}_{\gamma,\tau}(WS) = \mathcal{H}_{\gamma,\tau}(RS) \cdot \mathcal{H}_{\gamma,\tau}(Final).$$

The hash function is defined as:

$$\mathcal{H}_{\gamma,\tau}(S) = \prod_{(a, v, t) \in S} \big(a \cdot \gamma^2 + v \cdot \gamma + t - \tau\big),$$

for random $\gamma, \tau \in \mathbb{F}$ sampled by the verifier.

This multiset hashing process is represented by a binary tree of multiplication gates and is computed using an optimized GKR protocol.
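And a toy version of the fingerprint check itself (toy prime field, names are mine; in Jolt the two grand products are not computed in the clear but proven with the GKR-based grand product argument mentioned above):

```python
# Multiset-equality check via fingerprinting over a toy prime field. Each
# (address, value, timestamp) tuple hashes to a*gamma^2 + v*gamma + t - tau, and a
# multiset hashes to the product of its tuple hashes. Equal multisets always give
# equal products; unequal ones collide only with tiny probability over gamma, tau.
import random

P = 2**61 - 1  # toy prime modulus (an assumption for the example)

def multiset_hash(tuples, gamma, tau):
    acc = 1
    for a, v, t in tuples:
        acc = acc * ((a * gamma * gamma + v * gamma + t - tau) % P) % P
    return acc

def permutation_check(init, RS, WS, final):
    gamma, tau = random.randrange(P), random.randrange(P)  # verifier's random challenges
    lhs = multiset_hash(init, gamma, tau) * multiset_hash(WS, gamma, tau) % P
    rhs = multiset_hash(RS, gamma, tau) * multiset_hash(final, gamma, tau) % P
    return lhs == rhs

# two correct reads of address 0 in a 2-cell read-only memory holding [7, 11]
init  = [(0, 7, 0), (1, 11, 0)]
RS    = [(0, 7, 0), (0, 7, 1)]
WS    = [(0, 7, 1), (0, 7, 2)]
final = [(0, 7, 2), (1, 11, 0)]
assert permutation_check(init, RS, WS, final)
```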

Lasso

References