Most of these notes are taken from incredibly accessible resources written by actual mathematicians and cryptographers, so please check those out.

FRI is a polynomial commitment scheme based on hashing, and can be combined with any PIOP to create a SNARK. The basic idea is to prove that a vector commitment is close to a Reed-Solomon codeword using low-degree testing.

Let’s first explain what the full form of FRI is:

  • Fast: refers to the Fast Fourier Transform-style split used in the folding step of FRI.
  • Reed-Solomon: the prover commits to a Reed-Solomon codeword, i.e. the polynomial’s evaluations at roots of unity.
  • IOPP: Interactive Oracle Proof of Proximity
    • Interactive: soundness is proved via interaction between Prover and Verifier. TODO
    • Oracle Proofs: the Prover behaves like an oracle, answering the verifier’s queries at random points.
    • Proximity: the verifier rejects any word that is far from every low-degree codeword.

Pros: post-quantum secure, transparent (based on hashing), doesn’t require cryptographic groups and can thus work with smaller fields (Mersenne, BabyBear or Goldilocks). Later, you’ll see that FRI does work with multiplicative groups, due to polynomials being evaluated at roots of unity, but that’s just for performance.

Cons: proof size is large (on the order of tens of KBs)

so, let’s start with a very naïve polynomial commitment scheme:

  • you have a polynomial of degree d over a field of size q
  • evaluate the polynomial at every field element
  • encode the evaluations in a merkle tree, prove a merkle path for every leaf

cons: the field size is too large to encode in a tree; prover time is linear in the field size, and we want to reduce that to time linear in the polynomial degree

improvement: take a specific multiplicative subgroup of size n = ρ⁻¹·d for some constant ρ⁻¹, which is the group generated by an n-th root of unity ω; evaluate on that, and encode in a tree. ρ⁻¹ is termed the FRI blowup factor.

Now, one other transformation: we do not encode direct polynomial evaluations but rather its reed solomon encoding with rate ρ = d/n.
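The improved commitment can be sketched in a few lines: evaluate the polynomial over a small multiplicative subgroup (the Reed-Solomon codeword) and Merkle-commit to the evaluations. The field, generator, and helper names below are illustrative assumptions, not from any particular library.

```python
import hashlib

MOD = 97                             # toy prime field F_97 (96 = 2^5 * 3)
OMEGA = pow(5, (MOD - 1) // 8, MOD)  # generator of an order-8 subgroup

def H(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Merkle-hash a power-of-two list of field elements."""
    layer = [H(str(x).encode()) for x in leaves]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# f(X) = 3 + 2X + X^2: degree < 4, blowup 2 -> domain of size n = 8
codeword = [(3 + 2 * x + x * x) % MOD
            for x in (pow(OMEGA, i, MOD) for i in range(8))]
commitment = merkle_root(codeword)
```

The codeword here is exactly the rate-1/2 Reed-Solomon encoding of f; the prover later opens individual leaves with Merkle paths against `commitment`.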

ρ can be tweaked to trade off between prover time and verifier time: the larger the blowup ρ⁻¹, the higher the prover cost, as the number of evaluations increases (but the fewer queries are needed). Proof length of FRI-PCS is t·l, where l is the proof length of one evaluation query from the verifier and the number of queries t = λ/log₂(ρ⁻¹).

  • λ is the security parameter of the protocol, i.e. bits of security.

Now, the second problem is that the verifier doesn’t know if the prover really committed to a degree < d polynomial. This is what the FRI low degree test addresses. But the naïve way isn’t really ideal, as the proof for each query will be log₂ n hash values of a merkle tree path. Instead, the low degree test will be interactive:

  • folding phase: log₂ d rounds, to reduce the vector of evaluations of length n to length ρ⁻¹, thus making the reduced vector the evaluations of a degree 0 polynomial.
  • query phase: 1 round, needed because the prover might not honestly fold a length-k vector of evaluations and could send an artificially folded vector.

Folding phase

let’s take all n = ρ⁻¹·d evaluations of the polynomial; we’ll use the folding phase to reduce the evaluation vector to that of a degree 0 polynomial.

  • take a vector of length n and randomly combine its entries pairwise such that the resultant vector has half the length.
  • create a merkle tree where the evaluations are leaves of the tree. send the tree root as commitment.
  • get a random value α⁽ᵏ⁾ from the verifier V
  • get a random index i, where k is the current round. do the colinearity test and send the values of f⁽ᵏ⁾(ω^i), f⁽ᵏ⁾(−ω^i) and f⁽ᵏ⁺¹⁾(ω^2i), where f⁽ᵏ⁾ is the k-th folded polynomial.
  • repeat these steps until the vector length reaches ρ⁻¹, i.e. log₂ d rounds. this means that the degree of the polynomial behind this vector is 0

make use of the FFT-style split to divide the polynomial into two polynomials of half the degree:

f(X) = f_E(X²) + X·f_O(X²)

Now, let’s combine f_E and f_O in a random linear combination, where α is sent by the verifier, to get the half-degree poly: f⁽¹⁾(Y) = f_E(Y) + α·f_O(Y). Evaluating f at ω^i and ω^(i+n/2), both being n-th roots of unity with ω^(i+n/2) = −ω^i, and expanding the above expression with the definitions of f_E and f_O, we get:

f⁽¹⁾(ω^2i) = ((1 + α·ω^(−i))/2)·f(ω^i) + ((1 − α·ω^(−i))/2)·f(−ω^i)

Note: the evaluation point takes values in {ω^0, …, ω^(n−1)}, where ω is an n-th root of unity. Thus ω^(n/2) = −1, so ω^(i+n/2) = −ω^i. The above expression takes the same inputs f(ω^i), f(−ω^i) for indices i and i + n/2, and thus can be used to combine the two evaluations into one, halving the vector.
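One folding round can be sketched in code: recover the even and odd parts from f(ω^i) and f(−ω^i), then combine them with the verifier's α. The field size, subgroup generator, and function names are illustrative assumptions for a toy example.

```python
MOD = 97                             # toy prime field; 96 = 2^5 * 3
OMEGA = pow(5, (MOD - 1) // 8, MOD)  # order-8 subgroup generator (= 64)

def inv(x):
    return pow(x, MOD - 2, MOD)      # Fermat inverse

def fold(evals, omega, alpha):
    """Fold evaluations of f over <omega> into evaluations of
    f_E + alpha * f_O over <omega^2> (half the length)."""
    n = len(evals)
    half = n // 2
    out = []
    for i in range(half):
        a, b = evals[i], evals[i + half]   # f(w^i), f(-w^i) = f(w^{i+n/2})
        f_e = (a + b) * inv(2) % MOD       # even part at w^{2i}
        f_o = (a - b) * inv(2 * pow(omega, i, MOD)) % MOD  # odd part
        out.append((f_e + alpha * f_o) % MOD)
    return out

# f(X) = 3 + 2X + X^2 (degree < 4), evaluated on the order-8 domain
f = lambda x: (3 + 2 * x + x * x) % MOD
domain = [pow(OMEGA, i, MOD) for i in range(8)]
folded = fold([f(x) for x in domain], OMEGA, alpha=7)

# sanity check: f_E(Y) = 3 + Y, f_O(Y) = 2, so the fold with alpha = 7
# equals 17 + Y on the squared (order-4) domain
sq_domain = [pow(OMEGA, 2 * i, MOD) for i in range(4)]
assert folded == [(17 + y) % MOD for y in sq_domain]
```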


Query Phase

  • take each folded vector from each step of the folding procedure and demand t entries (plus the sibling entries needed for the colinearity checks).
  • this happens for all log₂ d rounds
  • and each entry is accompanied by its merkle verification path, i.e. log₂ n hash evaluations.

Total proof length and time: roughly t·log₂²(d) hash values of proof, and O(n·log n) prover field operations, dominated by the FFT and folding work.

  • Difference between PCs based on linear error-correcting codes and FRI?
    • both are based on error correcting codes
    • the cryptographic procedure used is merkle hashing + fiat-shamir (for non-interactivity)
    • in PCs from linear codes, one big folding for a poly of degree d is done, reducing the proof size to O(√d), and such codes are field agnostic; in FRI, proofs are recursively reduced in half over logarithmically many rounds.


The above image, from Alan Szepieniec’s series Anatomy of a STARK, part 3, very clearly explains the different rounds of interaction between P and V.

There is another very accessible diagram summarising FRI protocol by Hector.


 1. take out all merkle roots and generate the folding randomness alpha for all codewords
 2. take out the last codeword; check that the last root equals the last codeword’s merkle commitment
 3. evaluate on last_omega to get the evaluations
 4. interpolate on the domain to get the last poly
 5. check that the last poly matches the last codeword
 6. sample indices for the colinearity tests
 7. for each codeword, do:
 8. start each colinearity test:
 9. take out ay, by, cy
 10. create ax, bx, cx, and check that (ax, ay), (bx, by), (cx, cy) lie on the same line
 11. verify the merkle authentication paths
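The colinearity test in steps 8–10 can be sketched as follows: the three points must lie on one line, where ax = ω^i, bx = −ω^i, and cx is the folding randomness α. The toy field modulus and function name are illustrative assumptions.

```python
MOD = 97  # toy prime field, for illustration only

def colinear(ax, ay, bx, by, cx, cy):
    # slope equality (by - ay)/(bx - ax) == (cy - ay)/(cx - ax),
    # cross-multiplied to avoid field division
    return (by - ay) * (cx - ax) % MOD == (cy - ay) * (bx - ax) % MOD

# points on the line y = 2x + 1 mod 97 pass; a perturbed point fails
assert colinear(1, 3, 2, 5, 10, 21)
assert not colinear(1, 3, 2, 5, 10, 22)
```

Note that this is the same relation as the folding formula: the line through (ω^i, f(ω^i)) and (−ω^i, f(−ω^i)) evaluated at α is exactly f_E(ω^2i) + α·f_O(ω^2i), so the folded value must be colinear with the two parent values.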

Security Analysis

- f: cheating polynomial; f̂: honest polynomial

- δ: relative hamming distance b/w f and f̂; (1−ρ)/2: unique decoding radius of the reed-solomon code

f passes all t verifier queries with probability roughly:

ε_fold + (1−δ)^t

  • interpretation of the probability
    • ε_fold denotes an adversarial prover getting lucky with the folding randomness: creating a poly which has degree 0 in the last folding round despite starting with a poly that is δ-far from every poly of the honest degree < d. If the prover has an incorrect poly which is δ-far from degree < d, then a random linear combination of two functions, one of which is far from low degree, must with high probability also be far; the failure chance ε_fold shrinks with the field size.
    • (1−δ)^t denotes the prover passing all t queries when no queried point equals one of the δ·n distant points.

Each verifier query is conjectured to provide log₂(ρ⁻¹) bits of security but, in provable terms, only provides about half of that.
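The query count follows directly from the bits-per-query estimate: t = ⌈λ / bits-per-query⌉. A back-of-envelope sketch, with illustrative names and parameter choices:

```python
import math

def num_queries(lam_bits, blowup, conjectured=True):
    """Queries needed for lam_bits of security with the given blowup,
    under the conjectured log2(blowup) bits per query, or the more
    conservative (provable) half of that."""
    bits_per_query = math.log2(blowup)
    if not conjectured:
        bits_per_query /= 2
    return math.ceil(lam_bits / bits_per_query)

print(num_queries(100, 16))                     # 100 / 4 bits -> 25 queries
print(num_queries(100, 16, conjectured=False))  # 100 / 2 bits -> 50 queries
```

This is why a larger blowup ρ⁻¹ shrinks the proof (fewer queries) at the cost of a bigger codeword for the prover.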

Known attacks on FRI

let the honest polynomial be f̂ with degree < d.

The prover picks a set T ⊆ D with size (1−δ)·|D| and commits to a word that agrees on T with some polynomial g of degree < d, i.e. the same degree bound as the original polynomial. It then folds g rather than the committed word during the FRI folding phase.

the verifier will accept the proof if all t query indices lie within T, which happens with probability (1−δ)^t
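The attack's success probability is easy to estimate numerically; the parameter values below are illustrative, not from any particular deployment:

```python
# all t queries must land inside the agreement set T,
# which happens with probability (1 - delta)^t
delta = 0.1   # distance of the committed word from the nearest codeword
t = 25        # number of verifier queries

p = (1 - delta) ** t
print(round(p, 3))  # ~0.072, i.e. the attack succeeds ~7% of the time
```

Doubling t squares this probability, which is exactly the "bits of security per query" accounting used above.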

PCS from FRI

The problem with using FRI directly as a PCS is the following two things:

  1. P merkle-commits to q’s evaluations at the n-th roots of unity, not over the complete field. but the verifier can send a random point z from the complete field, and thus the prover won’t be able to open the polynomial at that point. even if it sends a value, the commitment gives no way to prove an opening at that random point.
  2. V, after the low degree test, only knows that q is not too far from a degree < d honest polynomial, but doesn’t know if it’s exactly low degree.

How is a PCS obtained? create a quotient polynomial, similar to what is done in KZG, and apply the fold+query procedure on that quotient poly of degree d−1. So, to illustrate:

  1. take a degree-d poly f(X)
  2. get a random evaluation point from V: z
  3. compute the degree-(d−1) quotient poly: q(X) = (f(X) − f(z)) / (X − z)
  4. to pass V’s checks in the FRI procedure for q, the claimed value f(z) has to equal f̂(z) with high probability, where f̂ is the degree-d poly that is closest to the committed word.
  5. Also, reed-solomon decoding has unique decoding radius (1−ρ)/2, and thus each verifier query brings less than 1 bit of security (within the unique decoding radius, 1−δ > 1/2).
    1. This makes FRI PCS bound to a small set of low degree polynomials close to the committed word, rather than to a single one. This might violate the binding property of a PCS, but since the probability is so low, and not proven formally, it suffices for SNARK security.
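Step 3's quotient can be computed by synthetic (Horner) division, since f(X) − f(z) always divides cleanly by (X − z). A sketch over a toy prime field; the modulus and function names are illustrative assumptions.

```python
MOD = 97  # toy prime field

def evaluate(coeffs, x):
    """Horner evaluation; coeffs[i] is the coefficient of X^i."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % MOD
    return acc

def quotient(coeffs, z):
    """Coefficients (low to high) of q(X) = (f(X) - f(z)) / (X - z)."""
    d = len(coeffs) - 1
    q = [0] * d                      # q has degree d - 1
    acc = coeffs[d]                  # leading coefficient
    for i in range(d - 1, -1, -1):
        q[i] = acc
        acc = (acc * z + coeffs[i]) % MOD
    return q                         # at this point acc == f(z)

f = [3, 2, 1]          # f(X) = 3 + 2X + X^2
z = 5
q = quotient(f, z)     # q(X) = 7 + X, since f(X) - 38 = (X - 5)(X + 7)

# spot-check the division identity q(x)*(x - z) + f(z) == f(x)
x = 11
assert (evaluate(q, x) * (x - z) + evaluate(f, z)) % MOD == evaluate(f, x)
```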

The above process can be repeated any number of times for opening a single polynomial at multiple points z_1, …, z_m and proving validity of the evaluations v_1, …, v_m:

  • Prover computes q(X) = (f(X) − I(X)) / ((X − z_1)·…·(X − z_m)), where I(X) is the minimal polynomial that interpolates the m points.
  • Prove using FRI that q is a polynomial of degree ≤ d − m

Polynomial IOP from FRI

If the prover wants to prove that polynomials f_1, …, f_m represent polynomials of degrees at most d_1, …, d_m, then it can do one FRI claim for a polynomial g which is a weighted non-linear sum of all the polynomials:

g(X) = Σ_i (a_i + b_i·X^(2^k − d_i − 1))·f_i(X)

This combination polynomial proves that g is a polynomial of degree < 2^k, where 2^k > d_i for every i. A weighted sum is required so that polynomial evaluations don’t cancel each other in the field when the codeword is created.
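The degree-adjustment trick can be sketched with coefficient-list arithmetic: each f_i is shifted by X^(2^k − d_i − 1) so every term tops out just below 2^k. The field, the degree bound K, and all helper names are illustrative assumptions.

```python
import random

MOD = 97  # toy prime field
K = 3     # combined degree bound 2^K = 8

def add(p, q):
    """Add two coefficient lists (low-to-high order)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [(x + y) % MOD for x, y in zip(p, q)]

def scale_shift(p, c, shift):
    """c * X^shift * p(X)"""
    return [0] * shift + [(c * x) % MOD for x in p]

def combine(polys, degrees):
    """g = sum_i (a_i + b_i * X^(2^K - d_i - 1)) * f_i, random a_i, b_i."""
    g = []
    for p, d in zip(polys, degrees):
        a, b = random.randrange(MOD), random.randrange(MOD)
        g = add(g, scale_shift(p, a, 0))
        g = add(g, scale_shift(p, b, 2**K - d - 1))
    return g

f1, d1 = [1, 2], 1          # degree-1 poly
f2, d2 = [5, 0, 0, 1], 3    # degree-3 poly
g = combine([f1, f2], [d1, d2])
assert len(g) <= 2**K       # g has degree < 2^K = 8
```

If any f_i exceeded its bound d_i, its shifted term would push g's degree to 2^K or beyond, which the single FRI run on g would then catch.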

Quoting Alan’s FRI tutorial:

The intuition why this random nonlinear combination trick is secure is as follows. If all the polynomials f_i(X) satisfy their proper degree bounds, then clearly g(X) has degree less than 2^k and the FRI protocol succeeds. However, if any one f_i(X) has degree larger than d_i, then with overwhelming probability over the randomly chosen a_i and b_i, the degree of g(X) will be larger than or equal to 2^k. As a result, FRI will fail.

Open questions

  • what is the need of the offset for coset-FRI?
  • what is the meaning of index folding? why is the folding phase more secure when codewords are folded at a particular index rather than a random index at each step?
  • when simulating a PCS from FRI: the anatomy tutorial creates a new polynomial g(X), and then proves a low-degree claim about it. why is another polynomial g(X) required? why that specific construction?
    • maybe because proving low degree of a polynomial whose degree bound is a power of 2 is easier than for an arbitrary number.