Simplifying constraint composition polynomial
Al-Kindi-0 opened this issue
In Winterfell, the constraint composition polynomial is constructed using the following formula

$$C(x) = \sum_{i} (\alpha_i + \beta_i \cdot x^{d_i}) \cdot \frac{C_i(x)}{Z_{H_i}(x)}$$

where:

- $(\alpha_i, \beta_i)$ is a per-constraint random tuple.
- $d_i$ is what is called the degree adjustment factor. This is needed in order to make each of the terms in the above expression have the same degree.
- $C_i$ is the $i$-th constraint.
- $Z_{H_i}$ is the vanishing polynomial of the subgroup $H_i$ on which the constraint $C_i$ should hold.
In fact, in Winterfell the constraints are grouped into groups that share the same degree adjustment factor.
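For reference, a minimal sketch of how such adjustment factors can be derived (the function name and argument layout are illustrative, not Winterfell's actual API):

```rust
// Illustrative sketch: compute the degree adjustment factor d_i for each
// constraint so that every term (alpha_i + beta_i * x^{d_i}) * C_i(x) / Z_{H_i}(x)
// in the composition sum reaches the same target degree.
fn degree_adjustments(constraint_degrees: &[usize], vanishing_degrees: &[usize]) -> Vec<usize> {
    // degree of each quotient C_i / Z_{H_i}
    let quotient_degrees: Vec<usize> = constraint_degrees
        .iter()
        .zip(vanishing_degrees)
        .map(|(&c, &z)| c - z)
        .collect();
    // every term is raised to the maximal quotient degree
    let target = *quotient_degrees.iter().max().expect("at least one constraint");
    quotient_degrees.iter().map(|&q| target - q).collect()
}
```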
The use of degree adjustments made sense in earlier versions of the protocol, i.e., the original ALI protocol. Moreover, and as noticed here on page 16, we can compute the constraint composition polynomial as

$$C(x) = \sum_{i} \alpha_i \cdot \frac{C_i(x)}{Z_{H_i}(x)}$$

For all sensible choices of security parameters, choosing the $\alpha_i$ uniformly at random from a sufficiently large extension field incurs only a negligible loss in soundness compared to the degree-adjusted version.
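Under the simplified formula, batching becomes a plain random linear combination over the evaluation domain. A minimal sketch of what this could look like (illustrative, not Winterfell's actual API; in practice `E` would be a type implementing Winterfell's `FieldElement` trait):

```rust
// Illustrative sketch of the simplified composition: sum alpha_i * C_i(x) / Z_{H_i}(x)
// pointwise over the LDE domain, with no degree adjustment terms.
fn compose<E: Copy + std::ops::Add<Output = E> + std::ops::Mul<Output = E>>(
    quotient_evals: &[Vec<E>], // evaluations of each C_i / Z_{H_i} over the LDE domain
    alphas: &[E],              // one random extension-field element per constraint
    zero: E,                   // additive identity (stand-in for E::ZERO)
) -> Vec<E> {
    let domain_size = quotient_evals[0].len();
    let mut acc = vec![zero; domain_size];
    for (evals, &alpha) in quotient_evals.iter().zip(alphas) {
        for (a, &e) in acc.iter_mut().zip(evals) {
            *a = *a + alpha * e;
        }
    }
    acc
}
```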
Related to this is how the constraint composition polynomial is decomposed into $d := \texttt{max\_degree} - 1$ many polynomials. On page 15, the following decomposition is proposed

$$C(x) = \sum_{j=0}^{d-1} x^j \cdot h_j\left(x^d\right)$$

where $h_j$ is the polynomial whose coefficients are those of $C$ at the positions congruent to $j$ modulo $d$; in particular, each $h_j$ has degree less than the length of the execution trace.
This decomposition has the advantage of giving a slightly better soundness error, as well as avoiding having quotients with denominators of large degree.
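A sketch of this split at the coefficient level (illustrative; `decompose` is a made-up helper): each $h_j$ simply collects every $d$-th coefficient of $C$, so no polynomial arithmetic is needed.

```rust
// Split the coefficient vector of C(x) into d polynomials h_0, ..., h_{d-1}
// such that C(x) = sum_j x^j * h_j(x^d). Assumes d >= 1 (step_by panics on 0).
fn decompose<E: Copy>(coefficients: &[E], d: usize) -> Vec<Vec<E>> {
    (0..d)
        .map(|j| coefficients.iter().skip(j).step_by(d).copied().collect())
        .collect()
}
```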
This issue aims at discussing the merits of introducing such changes in the Winterfell setting, as well as the best approach to implementing them if we decide to go this route.
This is very interesting! If we implement this, it would simplify the code base quite a bit.
I do have a question though: let's say we have a set of constraints where some constraints are of degree 2 and others are of degree 8. Currently, when building the composition polynomial, the degree-2 constraints are adjusted so that all terms have the same degree, and the composition polynomial always ends up with the maximal degree. Under the proposed approach, there would be no normalization, and thus the degree of the composition polynomial would be driven by the degree-8 constraints.

Now, let's say we take the above situation, and remove all constraints of degree 8. So, we have only constraints of degree 2, and the composition polynomial would be of correspondingly smaller degree. In general, the protocol will show that the degree of the composition polynomial is smaller than some bound, and this bound would now vary with the set of constraints. Would this have any effect on the soundness of the protocol?
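To put rough numbers on the two situations (my own bookkeeping, ignoring exemption points): with trace length $k$, a degree-8 transition constraint yields a quotient of degree about

$$8(k-1) - (k-1) = 7(k-1),$$

so with the degree-8 constraints present the composition polynomial has degree roughly $7k$ and gets split into 7 trace-degree segments, whereas with only degree-2 constraints its degree is roughly $k$ and a single segment suffices.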
The answer is that the effect is going to be at the level of the IOP protocol, in a manner similar to how the degree of a polynomial affects the soundness error in the Schwartz-Zippel lemma. More precisely, the soundness error of the protocol is less than

$$\frac{C \cdot (L^+)^2 \cdot \max_i \deg(C_i)}{|\mathbb{F}|} + \epsilon_{\mathrm{FRI}}$$
where:

- $H$ is the trace domain of size $k$.
- $D$ is the LDE domain of size $n$.
- $k^+ := k + 2$.
- $L^+ := \frac{m+0.5}{\sqrt{\rho^+}}$, where $m \geq 3$ is the Johnson proximity parameter, which for simplicity can be taken to be equal to $3$.
- $\rho^+ := \frac{k^+}{n}$.
- $\mathbb{F}$ is the extension field.
- $\epsilon_{\mathrm{FRI}}$ is the soundness error bound for FRI run with proximity parameter $\theta^+ := 1 - \alpha^+ := 1 - \left(1 + \frac{1}{2m}\right) \cdot \sqrt{\rho^+}$.
- $C$ is set to $1$ in the case where one uses independent randomness for batching the different constraints; otherwise, it is set to the number of constraints.
So, as can be seen, the soundness error bound is a quotient of the maximal degree (i.e., complexity) of the constraints and the size of the field, with the adjustment through the factors $(L^+)^2$ and $C$.
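For a sense of scale, here is a quick back-of-the-envelope computation of these quantities (the parameters are assumed for illustration, not taken from this thread):

```rust
// Back-of-the-envelope evaluation of the soundness-related quantities above.
// All parameters are illustrative assumptions: trace length k = 2^20, a
// blowup factor of 8, Johnson parameter m = 3, and a 128-bit extension field.
fn main() {
    let k = (1u64 << 20) as f64; // trace domain size |H|
    let n = 8.0 * k;             // LDE domain size |D|
    let k_plus = k + 2.0;        // k^+ := k + 2
    let rho_plus = k_plus / n;   // rho^+ := k^+ / n
    let m = 3.0;                 // Johnson proximity parameter
    let l_plus = (m + 0.5) / rho_plus.sqrt(); // L^+
    let field_bits = 128.0;      // log2 |F|

    // log2 of the quotient term C * (L^+)^2 * max_deg / |F|, with C = 1 and a
    // degree-8 constraint (max_deg ~ 8k); eps_FRI is ignored here.
    let max_deg = 8.0 * k;
    let log2_eps = (l_plus * l_plus * max_deg).log2() - field_bits;
    println!("rho+ = {rho_plus:.6}, L+ = {l_plus:.2}, log2(eps) ~ {log2_eps:.1}");
}
```

With these numbers, $L^+ \approx 9.9$ and the quotient term contributes roughly $2^{-98}$, so a 128-bit field comfortably absorbs the lack of degree normalization.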
I am wondering about the further costs, if any, if we avoid dividing by the $Z_{H_i}$ individually and instead batch the constraints as

$$\sum_i \alpha_i \cdot S_i(x) \cdot C_i(x)$$

where $S_i(x) := \frac{Z_H(x)}{Z_{H_i}(x)}$ is the selector polynomial of $H_i$, so that only a single division by $Z_H(x)$ is needed at the end.
I think this change simplifies things and makes potential optimizations (e.g. common sub-expression elimination) easier.
Actually, this would be more costly, as one would need to compute Lagrange selector polynomials with quite large powers. I wonder, however, if we could precompute such quantities once and for all.
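If we do go this route, the precomputation could look roughly as follows (a toy sketch over a small prime field with made-up helpers, not Winterfell code): when $H_i$ is a subgroup of size $k/p$, the selector is $S_i(x) = \frac{x^k - 1}{x^{k/p} - 1}$, and its evaluations depend only on the domain, so they can be computed once and cached.

```rust
// Toy sketch of precomputing selector evaluations S_i(x) = Z_H(x) / Z_{H_i}(x)
// over an evaluation domain (illustrative helpers, not Winterfell code).
const P: u64 = 2013265921; // 15 * 2^27 + 1, a small STARK-friendly prime

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut r = 1u64;
    while e > 0 {
        if e & 1 == 1 { r = r * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    r
}

fn inv_mod(a: u64) -> u64 { pow_mod(a, P - 2) } // Fermat inverse; a != 0

// H has size k; H_i is a subgroup of size k / p (p divides k). The domain is
// assumed disjoint from H, so the denominator never vanishes. The result
// depends only on the domain and H_i, hence it can be cached across proofs.
fn precompute_selector(domain: &[u64], k: u64, p: u64) -> Vec<u64> {
    domain
        .iter()
        .map(|&x| {
            let num = (pow_mod(x, k) + P - 1) % P;     // Z_H(x)     = x^k - 1
            let den = (pow_mod(x, k / p) + P - 1) % P; // Z_{H_i}(x) = x^{k/p} - 1
            num * inv_mod(den) % P
        })
        .collect()
}
```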
Closed by #198.