Lattice-based cryptography has emerged as a promising new candidate to build cryptographic primitives. It offers resilience against quantum attacks, enables fully homomorphic encryption, and relies on robust theoretical foundations. Zero-knowledge proofs (ZKPs) are an essential primitive for various privacy-preserving applications. For example, anonymous credentials, group signatures, and verifiable oblivious pseudorandom functions all require ZKPs. Currently, the majority of ZKP systems are based on elliptic curves, which are susceptible to attacks from quantum computers. This project presents the first implementation of Lantern, a state-of-the-art lattice-based ZKP system that can create compact proofs, which are a few dozen kilobytes large, for basic statements. We thoroughly explain the theory behind the scheme and give a full implementation in a Jupyter Notebook using SageMath to make Lantern more accessible to researchers. Our interactive implementation allows users to fully understand the scheme and its building blocks, providing a valuable resource to understand both ZKPs and lattice cryptography. Albeit not optimized for performance, this implementation allows us to construct a Module-LWE secret proof in 35s on a consumer laptop. Through our contributions, we aim to advance the understanding and practical utilization of lattice-based ZKP systems, particularly emphasizing accessibility for the broader research community.
In this tutorial, we are investigating a efficient zero-knowledge proof system from lattices that can be used to build post-quantum secure privacy-enhancing technologies. Lattice cryptography is a great candidate for building post-quantum applications due to its efficiency and strong theoretical security. For some primitives like fully homomorphic encryption, the only known constructions are based on lattices [CKK+17]. As many companies switch to the new lattice-based encryption and signature schemes proposed in the NIST post-quantum competition, we can expect lattice cryptography to become a new standard. Privacy-enhancing applications using zero-knowledge proofs in their protocols will therefore need to prove statements involving lattice keys.
A possible approach to proving statements about lattice keys are post-quantum secure hash-based proof systems. However, they turn out to be very inefficient, especially for more advanced proofs, mainly due to the requirements of the lattice hardness assumptions [Bos+20]. A more elegant and potentially efficient approach is to use lattice-based proof systems. Research in this direction has only recently made enough theoretical advancements to consider this approach in practice, leaving us a research gap with theoretically efficient schemes but no public implementations available.
In 2022, Lyubashevsky, Nguyen, and Plaçon [LNP22] introduced Lantern. It is an efficient algorithm for proving knowledge of MLWE secrets and other lattice statements, which are used in quantum-resistant schemes. Lantern's advantage over existing hash-based schemes like Aurora [Ben+19] lies in it's efficiency for proving lattice statements, since it uses lattice-based cryptography internally.
Here, we provide the first publically available implementation of Lantern. To explain the protocols in a more approachable way and to allow the reader to interactively tinker with the protocols. Lantern's more advanced protocols are split up into multiple smaller subprotocols that allow the reader to grasp the concepts step by step. For every step, we provide an explanation of the protocol and an interactive implementation. In the implementation, the user can see all the necessary parameters in a clear and comprehensible way, and change them to their liking.
While not optimized for efficiency, our implementation may serve as a benchmark for future implementations. Currently, it can prove the knowledge of an MLWE secret in 35 seconds, with a proof size of 29 KB. With our toolbox implementation, the user can also construct their own proofs and measure their performance.
We start the tutorial by introducing ZKPs in Section 1.1 and lattice cryptography in Section 1.2, focussing on the difficulties that arise in the lattice setting. Then, we discuss related work in Section 2. Afterward, we explain the core ideas of Lantern in Section 3. In Section 4 you can find our complete implementation of Lantern. Finally, we give some benchmarks in Section 5.
ZKPs are a versatile, two-party cryptographic primitive that allows a \textit{prover} to prove statements about a secret to a \textit{verifier} without revealing any information about the secret, except for the statement we want to prove. This property of only disclosing necessary information and keeping everything else secret makes this especially useful for building privacy-preserving protocols. For example, in anonymous credentials, we can use ZKPs to verify that someone is of age without disclosing their actual birth date or anything else about their identity.
ZKPs need to fulfill three properties in their statements FFS88:
To arrive at ZKPs, we first need to understand commitment schemes. A commitment scheme is a two-step protocol in which one party commits to a chosen value while keeping it secret from others and later reveals, or opens, the value. The other parties can then verify whether the value corresponds to the commitment. Every commitment scheme needs to provide two security properties:
A common primitive in cryptography are discrete logarithms. They operate over integers modulo a prime number $q$. Discrete logarithm cryptography is based on the assumption that given the integers $x, y$ and $q$, it is computationally infeasible to recover an $a$ from $x^a\equiv y\mod q$.
The discrete logarithm problem can be used to build a simple commitment scheme: The commitment to a secret $s\in \mathbb{Z}_q$ is computed as $t\equiv g^s$, with $g$ being a public group generator. To open the commitment, $s$ has to be revealed. While this commitment scheme computationally hides $s$, it is not a ZK statement since $s$ has to be revealed at the end of the protocol.
Schnorr proofs [Sch89] allow a prover $\mathcal{P}$ to prove knowledge of $s$ to a verifier $\mathcal{V}$ without revealing its value. The commitment step is performed like before, but the prover picks an additional value $w$ and sends it to the verifier. The verifier picks a random challenge $c$, sends it to the prover and only accepts the proof if the prover can provide $z\leftarrow y+cs$ such that $g^z=g^y\cdot g^{sc}$, as computing this value is only possible if $s$ is known. The following figure shows a Schnorr zero-knowledge proof. The prover $\mathcal{P}$ previously committed to $s$ using the value $t\equiv g^s$. The knowledge of $s$ is proven to the verifier $\mathcal{V}$ in three rounds. A question mark ? indicates rejection if the condition is not fulfilled.
$$ \begin{array}{ccc} {\cal P}(g, s) & & {\cal V}(g, t) \\ \hline \text{pick}\; y \overset{\tiny R}{\leftarrow} \mathbb{Z}_p,~~~ w \leftarrow g^y & \overset{w}{\longrightarrow} & \\ & \overset{c}{\longleftarrow} & \text{pick challenge}\; c \overset{\tiny R}{\leftarrow} \mathbb{Z}_q \\ z \leftarrow y + cs & \overset{z}{\longrightarrow} & g^z \stackrel{?}{=} w \cdot t^c \end{array} $$Lattice cryptography allows us to build cryptographic primitives using hard lattice problems. Cryptography built from lattices is secure even with the advent of quantum computers. Furthermore, we can only build some advanced applications like fully homomorphic encryption efficiently using lattice cryptography. Many lattice-based schemes are top competitors in new standardization efforts, most prominently in the NIST Post-Quantum standardization project. This gives rise to a need for ZKP systems that can efficiently proof statements about lattice secrets.
Our protocols build upon the well-known computational lattice problems Module-LWE (MLWE) and Module-SIS (MSIS) [LS15]. Both problems are defined over a polynomial ring $\mathcal{R}_q$. We use the polynomial ring $\mathcal{R}_q = \mathbb{Z}_q[X] /(X^d+1)$. To illustrate what we are dealing with, consider that a ring element $a$ can be represented using $d$ integer coefficients in $\mathbb{Z}_q$ like
$$ a = a_0 + a_1 X + a_2 X^2 + \dots + a_{d-1} X^{d-1} $$where $d$ is the degree of the ring and $q$ is the modulus prime. This is possible because the polynomial is reduced modulo $X^d+1$. The following matrices and vectors consist of such polynomials and addition and multiplication are computed according to polynomial arithmetic. We will use lower-case letters to denote elements in $\mathcal{R}_q$, bold lower-case letters for vectors in $\mathcal{R}_q$ and bold capital letters for matrices in $\mathcal{R}_q$.
Module-SIS: Given matrix $\textbf{A} \leftarrow \mathcal{R}_q^{n \times m}$, find a vector $\textbf{s} \in \mathcal{R}_q^m$ of small norm $(0 < ||\mathbf{s}|| \leq B)$ such that
$$ \textbf{As} = \textbf{0}. $$It's easy to see that solving the SIS problem is equivalent to solving a linear equation system where the matrix $\mathbf{A}$ gives the coefficients and the vector $\mathbf{s}$ the desired variables. The additional requirement that $||\mathbf{s}||$ needs to be small is crucial for the hardness of the problem because otherwise it could be easily solved using Gaussian elimination. This smallness requirement will follow us throughout our discussion and is the reason for many complications in lattice cryptography, in particular for building ZKPs.
Decision Module-LWE: Given matrix $\textbf{A} \leftarrow \mathcal{R}_q^{n \times m}$, error distribution $\mathcal{X}$ over $\mathcal{R}$, secret vector $\mathbf{s} \gets \mathcal{X}^m$ and error vector $\mathbf{e} \gets \mathcal{X}^n$, distinguish
$$ (\mathbf{A}, \mathbf{As} + \mathbf{e}) $$form uniformly random.
The LWE problem can be seen as a noisy equation system, where $\mathbf{A}$ gives the coefficients, the vector $\mathbf{s}$ the desired variables and the vector $\mathbf{e}$ represents added noise, which makes the problem hard. Our protocols need the decision variant of the problem as stated above.
Now that we have defined the hard problems, we can build cryptographic primitives from them. Using the MSIS problem, we can build a commitment scheme like this:
where $s \in \mathcal{R}_q$ and $||\mathbf{s}||$ is small.
For lattice-based zero-knowledge proofs, one must prove knowledge of a secret $\mathbf{s}$ in the lattice commitment
$$ \mathbf{A}\mathbf{s} = \mathbf{t} $$and crucially, because the MSIS problem is only secure if $||\mathbf{s}||$ is small, prove that $\mathbf{s}$ has a small norm. This extra step of having to prove that $||\mathbf{s}||$ is small turns out to be a major complication in lattice-based zero-knowledge proofs in comparison to their discrete logarithm-based counterparts. All the complexity of the following protocols comes from this small norm requirement in lattice cryptography.
The Lyubashevsky Identification Scheme [Lyu09] tries to adopt the Schnorr Proof to the lattice setting but only manages to give a reduced proof. Namely, it proves knowledge of an $\mathbf{\bar s} \in \mathcal{R}_q$ such that $\mathbf{A}\mathbf{\bar s} = c\mathbf{t}$, where $||\mathbf{\bar s}|| > ||\mathbf{s}||$:
$$ \begin{array}{ccc} {\cal P}(\mathbf{A}, \mathbf{s}) & & {\cal V}(\mathbf{A}, \mathbf{t}) \\ \hline \text{pick}\; \mathbf{y} \overset{\tiny R}{\leftarrow} D_{\mathfrak{s}}^{md},~~~ \mathbf{w} \leftarrow \mathbf{Ay} & \overset{\mathbf{w}}{\longrightarrow} & \\ & \overset{c}{\longleftarrow} & \text{pick challenge}\; c \overset{\tiny R}{\leftarrow} \mathcal{C} \\ \mathbf{z} \leftarrow \mathbf{y} + c \mathbf{s},~~~ \text{Rej}(\mathbf{z}, c \mathbf{s}, \mathfrak{s}) & \overset{\mathbf{z}}{\longrightarrow} & ||\mathbf{z}|| \stackrel{?}{\leq} B,~~~ \mathbf{w} \stackrel{?}{=} \mathbf{Az} - c \mathbf{t} \end{array} $$This protocol follows the same commit, challenge, response structure as the Schnorr Proof. Because of the small norm requirement of the MSIS problem, randomness $\mathbf{y}$ needs to be kept small and is therefore sampled from a Gaussian distribution rather than uniformly random. Notice that $\mathbf{y}$ is used to mask the secret $\mathbf{s}$. Now, because $\mathbf{y}$ is not uniformly random anymore, the distribution of $\mathbf{z}$ needs to be adjusted using rejection sampling (Rej), such that $\mathbf{z}$ does not reveal anything about the secret. The challenge $c$ also needs to be small to keep $\mathbf{z}$ small, so it has to be sampled from a specially crafted challenge space $\mathcal{C}$.
Such a reduced proof is enough for constructing fairly efficient basic protocols like signatures. The NIST post-quantum signature candidate Dilithium [DKL+18] for example uses a non-interactive version of this protocol. However, the fact that the norm of the extracted $\mathbf{\bar s}$ is noticeably larger than that of $\mathbf{s}$, along with the presence of the extra multiplicand $c$, makes these proofs awkward to use in many other situations. This very often results in the protocols employing these proofs being less efficient than necessary, or in not giving the resulting scheme the desired functionality. For example, consider a Regev-style [Reg09] lattice-based encryption scheme where $\mathbf{s}$ is the randomness (including the message) and $\mathbf{t}$ is the ciphertext. To decrypt, $\mathbf{t}$ must have a short pre-image. The reduced proof is not enough to guarantee that the ciphertext $\mathbf{t}$ can be decrypted because only $c \mathbf{t}$ is guaranteed to have a short pre-image, and not $\mathbf{t}$ (and $c$ is not known to the decryptor). As a consequence, the previously most efficient lattice-based verifiable encryption scheme [LN17] has the undesirable property that the expected decryption time is equal to the adversary's running time because the decryptor needs to essentially guess $c$. Other lattice-based constructions like group signature schemes (e.g. [LNPS21]) were required to select much larger parameters than needed to accommodate the presence of the multiplicand $c$ and the “slack” between the length of the known solution $\mathbf{s}$ and the solution $\mathbf{\bar s}$ that one can prove.
In Lantern [LNP22] [Ngu22] a new approach for proving a tight bound on $||\mathbf{s}||$ is introduced, which improves upon previous works in terms of simplicity and efficiency. This allows us to construct a complete lattice ZKP and instantiate more efficient protocols like verifiable encryption or group signature schemes.
There has been a lot of progress in post-quantum secure zero-knowledge proofs in recent years. Many new schemes have been proposed, which can be categorized into two major research directions: Lattice-based schemes, which build upon the hardness of lattice problems, and hash-based schemes, which are only based on the hardness of cryptographic hash functions. In the following table (taken from [Ngu22]) you can see a proof length comparison for proving knowledge of an MLWE secret, specifically proving knowledge of short $\mathbf{s}, \mathbf{e}$ satisfying $\mathbf{As} + \mathbf{e} = \mathbf{t}$, where $\mathbf{A} \in \mathcal{R}_{q}^{n \times m}$, $(n, m, d, q) = (16, 16, 64, \approx 2^{32})$, and $||(\mathbf{s}, \mathbf{e})|| \leq \sqrt{2048}$.
$$ \begin{array}{rllr} & \text{Lattice-based} & \text{Hash-based} & \text{Proof Size} \\ \hline 2017 & \text{Stern-type proofs [Ste93]} & & \text{3522 KB} \\ & & \text{Ligero [Ame+17]} & \text{157 KB} \\ 2019 & \text{Bootle et al. [BLS19]} & & \text{384 KB} \\ & & \text{Aurora [Ben+19]} & \text{72 KB} \\ 2020 & \text{Esgin et al. [ENS20]} & & \text{47 KB} \\ 2022 & \text{Lantern [LNP22]} & & \text{13 KB} \\ \end{array} $$The first lattice-based protocols for complete proofs used the combinatorial algorithm of Stern [Ste93] to prove that the $L_\infty$ norm is bounded by revealing a random permutation of $\mathbf{s}$. The main problem with these protocols was their high soundness error, meaning they had to be repeated around 200 times to make the error acceptably small $(2^{-128})$, which resulted in proofs of size more than 1MB [Lin+13]. Later, a combination of commitments and ZKPs of committed values to prove linear relations between the coefficients of $\mathbf{s}$ was used to prove a bound on its norm [BLS19]. These proofs were significantly improved through Chinese remainder theorem (CRT) slots, reducing the proof size by another order of magnitude [ENS20]. The CRT slots, however, require support for large message coefficients, meaning they need to use a more expensive commitment scheme like BDLOP [Bau+18b]. Lantern introduces a different approach to prove the smallness of $\mathbf{s}$, not based on CRT slots, allowing for a more efficient commitment scheme like Ajtai [Ajt96]. This and other improvements got the proof size down to 13KB, of which 8KB are just the "minimum" commitment (i.e. a commitment to just one element in $\mathcal{R}_q$) and its opening proof, showing that Lantern is quite close to optimal for any approach that uses lattice-based commitment schemes.
Hash-based protocols have in the past outperformed their lattice-based counterparts and have also seen much development, most notably the introduction of Ligero [Ame+17] and Aurora [Ben+19]. However, the lattice-based alternatives have now caught up in terms of proof size.
Comparing the schemes in terms of real-world runtime remains difficult because the given lattice-based schemes don't have any public implementations. Esgin et al. [ENS20] mention that they implemented their scheme with a Module-LWE secret proof runtime (prover + verifier) of 4ms. For the hash-based schemes, such a proof was to date only implemented with Aurora [Bos+20]. They manage to prove knowledge of a Module-LWE secret in 40s on a laptop, but for proofs in more advanced protocols like group signatures, they run out of memory even on large memory cloud machines with more than 1 TB of memory. So, while the given schemes provide concise proofs, it is not clear whether they are feasible in practice in terms of runtime and memory usage. This is why we implement the Lantern scheme and show that we can run it on a consumer laptop.
To construct a ZKP of a secret $\mathbf{s} \in \mathcal{R}_q$ in a lattice commitment like $\mathbf{A}\mathbf{s} = \mathbf{t}$, one also needs to prove that $||\mathbf{s}||$ is small, which, as explained previously, is required for the security of the protocol. Lantern introduces a new approach to prove that $||\mathbf{s}|| \leq B$, where $B \in \mathbb{Z}$ is some bound on the norm of $\mathbf{s}$.
Note that $||\mathbf{s}||$ is equal to the square of the inner product of $\mathbf{s}$ with itself $\langle \mathbf{s}, \mathbf{s} \rangle^2$. This means we need to prove the following relation:
$$ \langle \mathbf{s}, \mathbf{s} \rangle \leq B^2, \quad B \in \mathbb{Z} $$which is a quadratic relation over $\mathbb{Z}$. However, proving quadratic relations over $\mathbb{Z}$ is difficult when we have a proof system defined over $\mathcal{R}_q$. Lantern gives us a multi-step approach to get from a commitment opening proof like in the Lyubashevsky Identification Scheme [Lyu09] (discussed previously) to a proof of quadratic relations over $\mathbb{Z}$. This is done by making use of several algebraic transformations and a new efficient lattice commitment scheme. In the following sections, we show the individual steps required to get from a commitment scheme with proof of opening to quadratic relations over $\mathbb{Z}$. We first use the commitment scheme to prove linear relation over $\mathcal{R}_q$, then over $\mathbb{Z}_q$ and finally over $\mathbb{Z}$. Then we use the commitment scheme to prove quadratic relations over $\mathcal{R}_q$, then over $\mathbb{Z}_q$. To finally get quadratic relations over $\mathbb{Z}$, we combine linear relations over $\mathbb{Z}$ with quadratic relations over $\mathbb{Z}_q$.
For more efficiency, Lantern introduces a new commitment scheme that combines the advantages of the Ajtai [Ajt96] and BDLOP [BDL+18] schemes into one they call ABDLOP.
In the Ajtai commitment scheme, one commits to a message $\mathbf{s}_1$ using randomness $\mathbf{s}_2$, where both $||\mathbf{s}_i||$ are small and
$$ \mathbf{A}_1\mathbf{s}_1 + \mathbf{A}_2\mathbf{s}_2 = \mathbf{t}. $$The advantage of the Ajtai commitment is that the dimension of the message $\mathbf{s}_1$ does not increase the commitment size. The disadvantage is that the coefficients of $\mathbf{s}_1$ need to be small.
In the BDLOP commitment scheme, one commits to a message $\mathbf{m}$ using randomness $\mathbf{s}$ as
$$ {\begin{bmatrix}{\mathbf{A}} \\ {\mathbf{B}}\end{bmatrix}} * \mathbf{s} + {\begin{bmatrix}{\mathbf{0}} \\ {\mathbf{m}}\end{bmatrix}} = {\begin{bmatrix}{\mathbf{t}_A} \\ {\mathbf{t}_B}\end{bmatrix}}, $$The disadvantage of this scheme is that both the commitment size and the opening size grow linearly with the dimension of message vector $\mathbf{m}$. The advantages are that first, the coefficients of $\mathbf{m}$ can be arbitrarily large modulo $q$. Second, the dimension of $\mathbf{s}$ is set to be large enough, appending new commitments is very cheap. For example, assume we have created a commitment to $\mathbf{m}$ and want to commit another $\mathbf{m}'$. The commitment can be computed as $\mathbf{B}'\mathbf{s} + \mathbf{m}' = \mathbf{t}_B'$, where $\mathbf{B}'$ is public randomness. The new commitment for $(\mathbf{m}, \mathbf{m}')$ is
$$ {\begin{bmatrix}{\mathbf{A}} \\ {\mathbf{B}} \\ {\mathbf{B}'}\end{bmatrix}} * \mathbf{s} + {\begin{bmatrix}{\mathbf{0}} \\ {\mathbf{m}} \\ {\mathbf{m}'}\end{bmatrix}} = {\begin{bmatrix}{\mathbf{t}_A} \\ {\mathbf{t}_B} \\ {\mathbf{t}_B'}\end{bmatrix}}. $$The new ABDLOP commitment scheme combines the two above in the following way: $\mathbf{s}_1$ is used for messages with small norm, and $\mathbf{m}$ is used for messages with unrestricted coefficients modulo $q$ as
$$ {\begin{bmatrix}{\mathbf{A}_1} \\ {\mathbf{0}}\end{bmatrix}} * \mathbf{s}_1 + {\begin{bmatrix}{\mathbf{A}_2} \\ {\mathbf{B}}\end{bmatrix}} * \mathbf{s}_2 + {\begin{bmatrix}{\mathbf{0}} \\ {\mathbf{m}}\end{bmatrix}} = {\begin{bmatrix}{\mathbf{t}_A} \\ {\mathbf{t}_B}\end{bmatrix}}, $$where $\mathbf{s}_2$ is randomness. Commitments and openings are shorter than if one would use the two schemes separately: Instead of needing $\mathbf{t}$ from Ajtai and $\mathbf{t}_A$ from BDLOP, we only need $\mathbf{t}_A$. Similarly, we only need $\mathbf{s}_2$ for the opening.
An opening proof in the ABDLOP commitment scheme can be constructed similar to the Lyubashevsky Identification Scheme discussed previously. For the sake of simplicity, we will now use the Ajtai commitment to show how to prove linear relations over $\mathcal{R}_q$, but the same can be done using ABDLOP commitments.
Assume we want to prove linear relations over the secret $\mathbf{s}_1 \in \mathcal{R}_q$ in the Ajtai commitment $\mathbf{A}_1\mathbf{s}_1 + \mathbf{A}_2\mathbf{s}_2 = \mathbf{t}$. We can represent these relations using matrix $\mathbf{R}$ and vector $\mathbf{r}$ like $\mathbf{R}\mathbf{s}_1 + \mathbf{r} = \mathbf{0}$.
To prove this relation, we simply extend the Ajtai matrices and vectors:
$$ {\begin{bmatrix}{\mathbf{A}_1} \\ {\mathbf{-R}}\end{bmatrix}} \mathbf{s}_1 + {\begin{bmatrix}{\mathbf{A}_2} \\ {\mathbf{0}}\end{bmatrix}} \mathbf{s}_2 = {\begin{bmatrix}{\mathbf{t}} \\ {\mathbf{r}}\end{bmatrix}} $$and run the opening proof protocol. This protocol now proves knowledge of the opening and the linear relation given.
Once we have linear relations over $\mathcal{R}_q$ we can use an algebraic equality to lift the proof and get linear relations over $\mathbb{Z}_q$.
Remember that every ${a \in \mathcal{R}_q}$ can be represented as a coefficient vector in $\mathbb{Z}_q$ of size $d$ such that
$$ {a = a_0 + a_1 X + a_2 X^2 + \dots + a_{d-1} X^{d-1}}. $$Notice that the inner product of the coefficient vectors of two polynomials
$$ u = {\langle r',s' \rangle} \quad r', s' \in \mathbb{Z}_q^d $$is equivalent to the constant coefficient of the polynomial product of the two polynomials
$$ u = \widetilde{(r*s)} $$where $r, s$ are constructed from the coefficient vectors like
$$ r = r'_0 - r'_{d-1} X - r'_{d-2} X^2 - \dots - r'_1 X^{d-1 } \in \mathcal{R}_q $$$$ s = s'_0 + s'_1 X + s'_2 X^2 + \dots + s'_{d-1} X^{d-1 } \in \mathcal{R}_q $$Notice that $s$ was not changed, but for $r$ the coefficients except the constant coefficients are negated and their order is reversed. This operation is an automorphism of the ring denoted as $\sigma_{-1}$, which is explained later.
Essentially, if we can prove linear relations of the constant coefficient of an $\mathcal{R}_q$ polynomial, we can also prove linear relations over $\mathbb{Z}_q$. Now, we can just take the proof protocol that proves linear relations over $\mathcal{R}_q$, mask all coefficients except the constant one, and we have a proof protocol that can prove linear relations over $\mathbb{Z}_q$.
Now that we can prove linear relation over $\mathbb{Z}_q$, we need to get rid of the modulus $q$. In particular, to prove $||\mathbf{s}||$ is small, we need to get from smallness modulo $q$ to smallness without a modulo. For this Lantern provides a theorem that builds on top of the Johnson-Lindenstrauss Lemma and states: For a matrix $\mathbf{C} \in \mathbb{Z}$ with $\{-1, 0, 1\}$ coefficients it holds that for $\mathbf{z} = \mathbf{y} + \mathbf{Cs}$
$$ \text{if} \; ||\mathbf{y} + \mathbf{Cs} \mod q|| \; \text{is small, then} \; \mathbf{z} \; \text{is small}. $$Unfortunately, this gives us only approximate shortness proofs. The norm of $\mathbf{z}$ is not exactly the norm of $\mathbf{s}$, because of the $\mathbf{y}$ and $C$ terms. However, we can now construct approximate proofs if we sample challenges from matrices with $\{-1, 0, 1\}$ coefficients. We can use these approximate proofs to prove that linear equations over $\mathbb{Z}_q$ hold also over $\mathbb{Z}$ by showing that no wrap-around modulo $q$ occurs.
Above we saw how one can prove relations of the secret with a public value. Now we show how to prove relations between two secrets, in other words, quadratic relations.
Assume we wanted to prove the following relation between secrets:
$$ s_1 s_2 - s_3 = 0 $$where $s_i$ are parts of the secret vector $\mathbf{s}$. Remember that in the opening proof, the prover sends the vector $\mathbf{z} = \mathbf{y} + c \mathbf{s}$, which we can split into $z_i = y_i + cs_i$, where $c$ is the challenge and $y_i$ is the masking polynomial used to hide $s_i$.
Now the verifier creates a quadratic equation in $c$ out of the linear equations $z_i = y_i + cs_i$:
$$ z_1 z_2 - c z_3 = (s_1 s_2 - s_3) c^2 + g_1 c + g_0 $$where $g_1, g_0$ are garbage terms that depend on $y_i, s_i$ and are committed to by the prover in the commitment phase (prior to receiving the challenge). If the prover shows that $z_1 z_2 - c z_3 = g_1 c + g_0$, which is a linear equation where the quadratic part of the above equation is left out, it implies that the quadratic part $s_1 s_2 - s_3 = 0$ with high probability, which is exactly the relation we wanted to prove originally.
Once we have quadratic relations of $\mathcal{R}_q$, we can apply the same technique as with linear proofs to get quadratic relations over $\mathbb{Z}_q$. To get quadratic relations over $\mathbb{Z}$, however, we need to combine proofs of quadratic relations over $\mathbb{Z}_q$ with proofs of linear relations over $\mathbb{Z}$.
Remember, we want to prove that $||\mathbf{s}||^2 = \langle \mathbf{s}, \mathbf{s}\rangle \leq B^2$.
We first prove that $\langle \mathbf{s}, \mathbf{s}\rangle \leq B^2 \mod q$, a quadratic relation over $\mathbb{Z}_q$. However, this only tells us that the norm is small modulo $q$. To prove that no wraparound occurred, we additionally prove the approximate smallness of the norm using the approximate range-proof technique seen previously, which uses linear relations over $\mathbb{Z}$.
Here we provide a Python/Sage implementation of the ZKP techniques of Lantern [LNP22] [Ngu22]. We start with preliminaries by defining the polynomial rings and some required algebraic functions, go over sampling techniques like Gaussian Sampling, provide the rejection sampling algorithms and algorithms for sampling the challenge polynomials. Then we provide ZKP implementations for various relations using Lantern techniques. As discussed in the previous section, we incrementally expand our proof system to be able to prove more kinds of relations. We start with simple commitment opening proofs and linear proofs over $\mathcal{R}_q$ and expand them to proofs over $\mathbb{Z_q}$. Then we add the ability to prove quadratic relations and from this finally build a proof system to prove knowledge of a Module-LWE secret. The code of the expanded proof systems will call previous proof systems as a sub-protocol.
# Make sure to select a SageMath kernel when running this notebook!
import sage.all as sg
import numpy as np
import random
import time
from itertools import chain
from sage.stats.distributions.discrete_gaussian_polynomial import DiscreteGaussianDistributionPolynomialSampler
We are using the following polynomial rings (primarily $\mathcal{R}_q$)
$\mathcal{R}=\mathbb{Z}[X] /\left(X^d+1\right)$
$\mathcal{R}_q=\mathbb{Z}_q[X] /\left(X^d+1\right)$
with the parameters provided in the code below.
kappa = 128 # security parameter
d = 128 # degree of polynomial ring
q = 2**32 - 99 # modulus prime
# Define polynomial rings using sage
IR = sg.IntegerRing()
P = sg.PolynomialRing(IR, 't')
t = P.gen()
R = sg.QuotientRing(P, t**d + 1, 'X')
X = R.gen()
IR_q = sg.IntegerModRing(q)
P_q = sg.PolynomialRing(IR_q, 't')
t_q = P_q.gen()
R_q = sg.QuotientRing(P_q, t_q**d + 1, 'X')
X_q = R_q.gen()
print(f"R: {R}")
print(f"R_q: {R_q}")
print(f"Example R_q polynomial demonstrating wraparound: {X_q**d - X_q}")
Remember, a ring element $a \in \mathcal{R}$ (resp. ${\mathcal{R}_q}$ ) can be represented using $d$ integer coefficients:
$$ a = a_0 + a_1 X + a_2 X^2 + \dots + a_{d-1} X^{d-1} $$We use a special modulo variant to convert polynomials to their coefficient vector representation and to define the norm of polynomial vectors: Define $r' = r \;\text{mod}^{\pm}\; q$ for odd integer $q$ to be the unique element in the range $−\frac{q - 1}{2} < r' \leq \frac{q - 1}{2}$ such that $r' = r \;\text{mod}\; q$.
Here, we define some convenience functions for converting a Sage polynomial to its coefficient representation and back.
def Zq_2_ZZ(r):
"""
Return r_p = r mod+- q (for odd interger q)
as the unique element in range [-((q-1)/2), (q-1)/2]
such that r_p = r mod q
"""
if r <= (q-1)//2:
return int(r)
else:
return int(r) - q
def Rql_2_ZZl(vec):
"""Convert polynomial vector to coefficient vector"""
return sg.vector(sg.ZZ, chain.from_iterable([[Zq_2_ZZ(coeff) for coeff in poly.list()] for poly in vec]))
def chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i:i + n]
def ZZl_2_Rql(vec):
"""Convert coefficient vector to polynomial vector"""
assert(len(vec) % d == 0)
return sg.vector(R_q, chunks(list(vec), d))
def stack_vec_Rql(vecs):
"""Stack polynomial sage vectors"""
return sg.vector(R_q, chain(*vecs))
def IM(n):
"""Identity matrix"""
return sg.identity_matrix(R_q, n)
def zv(n):
"""Zero vector"""
return sg.zero_vector(R_q, n)
def ZM(n, m):
"""Zero matrix"""
return sg.zero_matrix(R_q, n, m)
Next, we define the norms for polynomials and their vectors using our special modulo variant. This is the norm we mean when we say that secret $||\mathbf{s}||$ needs to have small norm. This norm stays small when we compute $\mathbf{z} \leftarrow \mathbf{y} + c \mathbf{s}$, where $c$ is the challenge, $\mathbf{s}$ the secret, and $\mathbf{y}$ the randomness.
Define the $\ell_p$ norm of a polynomial vector $\mathbf{w} \in \mathcal{R}^{\ell}$ as:
$$ \|\mathbf{w}\|_p =(\left\|w_1\right\|^p + \ldots + \left\|w_{\ell}\right\|^p)^{1/p} $$where the $\ell_p$ norm of a polynomial $w_i = w_{i, 0} + w_{i, 1}X + \ldots + w_{i, d-1}X^{d-1} \in \mathcal{R}$ is defined as:
$$ \|w_i\|_p = (\left\|w_{i, 0}\right\|_{\infty}^p + \ldots + \left\|w_{i, d-1}\right\|_{\infty}^p)^{1/p}. $$where the infinity norm of an element $w_{i, j} \in \mathbb{Z}_q$ is defined as:
$$ \left\|w_{i, j}\right\|_{\infty} = |w_{i, j}\mathrm{~mod}^{\pm}\,q|. $$By default $\|\mathbf{w}\| = \|\mathbf{w}\|_2$.
Define the infinity norm of a polynomial vector $\mathbf{w} \in \mathcal{R}^{\ell}$ as:
$$ \|\mathbf{w}\|_{\infty} = \max_j\left\|w_j\right\|_{\infty}\quad $$where the infinity norm of a polynomial $w \in \mathcal{R}$ is defined as:
$$ \left\|w\right\|_{\infty} = \max_j\left\|w_j\right\|_{\infty}. $$def inf_norm_Zq(x):
"""Compute the infitity norm of an integer mod q according to definition"""
if x <= (q-1)//2:
return int(x)
else:
return -(int(x) - q)
def p_norm_Rl(vec, p):
"""Compute p-norm of polynomial vector in R^l according to definition"""
return sum([p_norm_R(poly, p) for poly in vec])**(1/p)
def p_norm_R(poly, p):
"""Compute p-norm of a polynomial in R according to definition"""
return sum([int(coeff)**p for coeff in poly])**(1/p)
def norm_Rl(vec):
"""Compute Euclidean norm of polynomial vector in R^l"""
return p_norm_Rl(vec, 2)
def p_norm_Rq(poly, p):
"""Compute p-norm of a polynomial in R_q according to definition"""
return sum([inf_norm_Zq(coeff)**p for coeff in poly])**(1/p)
def p_norm_Rql(vec, p):
"""Compute p-norm of polynomial vector in R_q^l according to definition"""
return sum([p_norm_Rq(poly, p)**p for poly in vec])**(1/p)
def norm_Rql(vec):
"""Compute Euclidean norm of polynomial vector in R_q^l according to definition"""
return p_norm_Rql(vec, 2)
def norm_Rql_bound(l, max_coeff):
"""
Compute upper bound of Euclidean norm of polynomial vector in R_q^l
with only inf_norm_Zq <= max_coeff coefficients
"""
return max_coeff * sg.sqrt(d * l)
def inf_norm_Rl(vec):
"""Compute infinity norm of a polynomial vector in R^l"""
return max([max(poly) for poly in vec])
def inf_norm_Rql(vec):
"""Compute infinity norm of a polynomial vector in R_q^l"""
return max([max([inf_norm_Zq(coeff) for coeff in poly]) for poly in vec])
Define the dot product $\langle\mathbf{u}, \mathbf{v}\rangle \in \mathbb{Z}$ for polynomial vectors $\mathbf{u}, \mathbf{v} \in \mathcal{R}_q^{\ell}$ as:
$$ \langle\mathbf{u}, \mathbf{v}\rangle = \sum_{i=1}^{\ell} \langle u_1,v_1 \rangle + \ldots + \langle u_{\ell},v_{\ell} \rangle $$where $u_i,v_i$ are the respective coefficient vectors of the polynomials.
def dot_Rql(v, w):
"""Compute dot product of polynomial vector in R_q^l according to definition"""
return Rql_2_ZZl(v).dot_product(Rql_2_ZZl(w))
In this section we provide the code for sampling polynomials from random distributions. Note that our sampling methods are not secure and should not be used in practice. They are just placeholders to show the feasibility of the protocols.
The Gaussian distribution on $\mathcal{R}^{\ell}$ centered around $\mathbf{v} \in \mathcal{R}^{\ell}$ with standard deviation $\mathfrak{s} > 0$ is given by:
$$ D_{\mathbf{v}, \mathfrak{s}}^{\ell}(\mathbf{z})=\frac{e^{-\|\mathbf{z}-\mathbf{v}\|^2 / 2 \mathfrak{s}^2}}{\sum_{\mathbf{z}^{\prime} \in \mathcal{R}^{\ell}} e^{-\left\|\mathbf{z}^{\prime}\right\|^2 / 2 \mathfrak{s}^2}} $$When it is centered around $\mathbf{0} \in \mathcal{R}^{\ell}$ we write $D_{\mathfrak{s}}^{\ell} = D_{\mathbf{0}, \mathfrak{s}}^{\ell}$.
class GaussianSampler:
"""Sample from discrete gaussian distribution"""
def __init__(self, R, s, l):
self.R = R
self.D = DiscreteGaussianDistributionPolynomialSampler(self.R, n=d, sigma=s)
self.l = l
def get(self):
return sg.vector(self.R, [self.D() for _ in range(self.l)])
The binomial distribution with a positive integer parameter $k$, written as ${\text{Bin}}_k$ is the distribution
$$ \sum_{i = 1}^{k}(a_i - b_i),\quad \text{where}\; a_i, b_i \gets \{0, 1\}. $$The variance of this distribution is $k/2$ and it holds that
$$ \text{Bin}_{k_1} \pm {\text{Bin}}_{k_2} = \text{Bin}_{k_1 \pm k_2}. $$def rand_ZZ_mat_binomial(rows, cols, k):
"""Sample an integer matrix from a binomial distribution"""
return sg.matrix(sg.ZZ, np.random.binomial(k, 0.5, (rows, cols)))
Here, we define some convenience functions for sampling uniformly random polynomial vectors and matrices.
def rand_Rql(l):
"""Sample uniformly random polynomial vector in R_q^l"""
return sg.vector(R_q, [R_q(random.choices(range(q), k=d)) for _ in range(l)])
def rand_Rql_small(l, max_coeff):
"""Sample uniformly random polynomial vector in R_q^l with only inf_norm_Zq <= max_coeff coefficients"""
return sg.vector(R_q, [R_q(random.choices(range(-max_coeff, max_coeff + 1), k=d)) for _ in range(l)])
def rand_Rql_bin(l):
"""Sample uniformly random binary polynomial vector in R_q^l"""
return sg.vector(R_q, [R_q(random.choices([0, 1], k=d)) for _ in range(l)])
def rand_Rql_first_zero(l):
"""Sample uniformly random polynomial vector in R_q^l where first element (constant coefficient) is 0"""
return sg.vector(R_q, [R_q([0] + random.choices(range(q), k=d-1)) for _ in range(l)])
def rand_Rq_mat(rows, cols):
"""Sample uniformly random polynomial matrix in R_q^{m x n} with only inf_norm_Zq <= 1 coefficients"""
return sg.matrix(R_q, [[R_q(random.choices(range(q), k=d)) for col in range(cols)] for row in range(rows)])
def rand_Rq_mat_first_zero(rows, cols):
"""Sample uniformly random polynomial matrix in R_q^l where first element (constant coefficient) is 0"""
return sg.matrix(R_q, [[R_q([0] + random.choices(range(q), k=d-1)) for col in range(cols)] for row in range(rows)])
def rand_Zq_mat(rows, cols):
"""Sample uniformly random matrix in Z_q^{m x n}"""
return sg.matrix(IR_q, [[IR_q(random.randrange(q)) for col in range(cols)] for row in range(rows)])
As explained in the introduction, we need rejection sampling in our protocols, because of the smallness requirement of the norm in lattice cryptography. We sample the randomness $\mathbf{y}$ from a Gaussian distribution. If we would just set $\mathbf{z} = \mathbf{y} + c \mathbf{s}$ then $\mathbf{z}$ would leak information about the secret $\mathbf{s}$. Rejection sampling helps us to adjust the distribution of $\mathbf{z}$, such that it does not leak information anymore. In this section, we assume that the challenge $c$ is small and therefore $\mathbf{v} = c \mathbf{s}$ is small. The variables $\mathbf{z}$ and $\mathbf{v}$ are passed to the rejection sampling algorithm as parameters. If the algorithm rejects a given $\mathbf{z}$, the protocol has to be restarted from the beginning.
Repetition rate $M$ needs to be chosen to be an upper bound on
$$ \frac{D_{\mathfrak{s}}^{\ell}(\mathbf{z})}{D_{\mathbf{v}, \mathfrak{s}}^{\ell}(\mathbf{z})}=\exp \left(\frac{-2\langle\mathbf{z}, \mathbf{v}\rangle+\|\mathbf{v}\|^2}{2 \mathfrak{s}^2}\right) \leqslant M. $$This is the standard rejection algorithm proposed by [Lyu12].
$$ \frac{D_{\mathfrak{s}}^{\ell}(\mathbf{z})}{D_{\mathbf{v}, \mathfrak{s}}^{\ell}(\mathbf{z})}=\exp \left(\frac{-2\langle\mathbf{z}, \mathbf{v}\rangle+\|\mathbf{v}\|^2}{2 \mathfrak{s}^2}\right) \leqslant \exp \left(\frac{28 \mathfrak{s}\|\mathbf{v}\|+\|\mathbf{v}\|^2}{2 \mathfrak{s}^2}\right)=M. $$Here, we used the fact that with probability at least $1 - 2^{128}$, we have $|\langle\mathbf{z}, \mathbf{v}\rangle|<14 \,\mathfrak{s}\, \|\mathbf{v}\|$ for $\mathbf{z} \leftarrow D_{\mathfrak{s}}^{\ell}$.
def rej1_M(gamma):
return sg.exp(sg.sqrt(2*(kappa + 1) / sg.log(sg.e, 2)) * (1 / gamma) + (1 / (2 * gamma**2)))
def rej1(z, v, std, M):
"""Rej1 rejection sampling algorithm"""
v_norm = norm_Rql(v)
expr = (1/M) * sg.exp(((-2)*dot_Rql(z, v) + (v_norm**2)) / (2*(std**2)))
u = random.random()
return u > expr
Here, we test the correctness of the Rej1 rejection sampling code. We expect a repetition rate $M \approx 3$ when we set $\mathfrak{s}=13\|\mathbf{v}\|$.
def rej1_M_test(std, v_norm):
"""Compute repetition rate M for Rej1 algorithm"""
return sg.exp((28*std*v_norm + v_norm**2) / (2*(std**2)))
def test_rej1(num_tests):
ell = 8 # polynomial ring vector dimension
# Generate dummy v (in the full protocol v is obtained by multiplying the challenge c with the secret s)
v = rand_Rql(ell)
v_norm = norm_Rql(v)
print(f"v_norm = {float(v_norm)}")
gamma = 13
std = gamma * v_norm # standard deviation
print(f"std = {float(std)}")
M = rej1_M_test(std, v_norm) # repetition rate
print(f"M = {float(M)}")
smplr = GaussianSampler(R, std, ell)
rejections = 0
for _ in range(num_tests):
y = smplr.get()
z = y + v
if rej1(z, v, std, M):
rejections += 1
if rejections == num_tests:
print("All rejected")
else:
success_prob = 1 - (rejections / num_tests)
print(f"Expected success prob: {float((1 - 2**-128) / M)}")
print(f"Success prob: {float(success_prob)}")
avg_reps = 1 / success_prob
print(f"Avg repetitions: {float(avg_reps)}")
#test_rej1(500)
This modified rejection sampling algorithm proposed by [LNS21a] manages to reduce the standard deviation by more than a factor of 10, which gives us shorter proofs. However, the expected number of rejections is at most $2M$ and the sign of $\langle\mathbf{z}, \mathbf{v}\rangle$ is leaked, which makes this only useable for one-time use commitments.
With assumption $\langle\mathbf{z}, \mathbf{v}\rangle \geqslant 0$ we can set $M$ by:
$$ \frac{D_{\mathfrak{s}}^{\ell}(\mathbf{z})}{D_{\mathbf{v}, \mathfrak{s}}^{\ell}(\mathbf{z})}=\exp \left(\frac{-2\langle\mathbf{z}, \mathbf{v}\rangle+\|\mathbf{v}\|^2}{2 \mathfrak{s}^2}\right) \leqslant \exp \left(\frac{\|\mathbf{v}\|^2}{2 \mathfrak{s}^2}\right)=M $$def rej2_M(gamma):
return sg.exp((1 / (2 * gamma**2)))
def rej2(z, v, std, M):
"""Rej2 rejection sampling algorithm"""
if dot_Rql(z, v) < 0:
return True
v_norm = norm_Rql(v)
expr = (1/M) * sg.exp(((-2)*dot_Rql(z, v) + (v_norm**2)) / (2*(std**2)))
u = random.random()
return u > expr
Here, we test the correctness of the Rej2 rejection sampling code. We expect a repetition rate $M \approx 3$ when we set $\mathfrak{s}=0.675\|\mathbf{v}\|$.
def rej2_M_test(std, v_norm):
"""Compute repetition rate M for Rej2 algorithm"""
return sg.exp((v_norm**2) / (2*(std**2)))
def test_rej2(num_tests):
ell = 8 # polynomial ring vector dimension
# Generate dummy v (in the full protocol v is obtained by multiplying the challenge c with the secret s)
v = rand_Rql(ell)
v_norm = norm_Rql(v)
print(f"v_norm = {float(v_norm)}")
gamma = 0.675
std = gamma * v_norm # standard deviation
print(f"std = {float(std)}")
M = rej2_M_test(std, v_norm) # repetition rate
print(f"M = {float(M)}")
smplr = GaussianSampler(R, std, ell)
rejections = 0
for _ in range(num_tests):
y = smplr.get()
z = y + v
if rej2(z, v, std, M):
rejections += 1
if rejections == num_tests:
print("All rejected")
else:
success_prob = 1 - (rejections / num_tests)
print(f"Expected success prob: {float(1/(2*M))}")
print(f"Success prob: {float(success_prob)}")
avg_reps = 1 / success_prob
print(f"Avg repetitions: {float(avg_reps)}")
#test_rej2(500)
Bimodal rejection sampling as in [DDLL13] significantly reduces the needed standard deviation. However, this requires us to additionally commit to a sign $\beta \in \{-1, 1\}$, for which we also need to construct a proof that it is in the specified range.
Similarly to Rej2, we can reduce the standard deviation in the following way:
$$ \frac{D_{\mathfrak{s}}^{\ell}(\mathbf{z})}{\frac{1}{2} D_{\mathbf{v}, \mathfrak{s}}^{\ell}(\mathbf{z}) + \frac{1}{2} D_{-\mathbf{v}, \mathfrak{s}}^{\ell}(\mathbf{z})}=\exp \left(\frac{\|\mathbf{v}\|^2}{2 \mathfrak{s}^2}\right) / \cosh \left(\frac{\langle\mathbf{z}, \mathbf{v}\rangle}{\mathfrak{s}^2}\right) \leqslant \exp \left(\frac{\|\mathbf{v}\|^2}{2 \mathfrak{s}^2}\right) = M $$def rej_bimodal_M(gamma):
return sg.exp((1 / (2 * gamma**2)))
def rej_bimodal(z, v, std, M):
"""Bimodal rejection sampling algorithm"""
v_norm = norm_Rql(v)
expr = 1 / (M * sg.exp(-(v_norm**2) / (2*(std**2))) * sg.cosh(dot_Rql(z, v) / (std**2)))
u = random.random()
return u > expr
Here, we test the correctness of the bimodal rejection sampling code. We expect a repetition rate $M \approx 3$ when we set $\mathfrak{s}=0.675\|\mathbf{v}\|$.
def rej_bimodal_M_test(std, v_norm):
"""Compute repetition rate M for bimodal rejection sampling algorithm"""
return sg.exp((v_norm**2) / (2*(std**2)))
def test_rej_bimodal(num_tests):
ell = 8 # polynomial ring vector dimension
# Generate dummy v (in the full protocol v is obtained by multiplying the challenge c with the secret s)
v = rand_Rql(ell)
v_norm = norm_Rql(v)
print(f"v_norm = {float(v_norm)}")
gamma = 0.675
std = gamma * v_norm # standard deviation
print(f"std = {float(std)}")
M = rej_bimodal_M_test(std, v_norm) # repetition rate
print(f"M = {float(M)}")
smplr = GaussianSampler(R, std, ell)
rejections = 0
for _ in range(num_tests):
beta = random.choice([0, 1])
y = smplr.get()
z = y + (-1)**beta * v
if rej_bimodal(z, v, std, M):
rejections += 1
if rejections == num_tests:
print("All rejected")
else:
success_prob = 1 - (rejections / num_tests)
print(f"Expected success prob: {float(1/(M))}")
print(f"Success prob: {float(success_prob)}")
avg_reps = 1 / success_prob
print(f"Avg repetitions: {float(avg_reps)}")
#test_rej_bimodal(500)
The challenge space $\mathcal{C}$ from which we sample the challenge polynomials in our protocols is defined as:
$$ \mathcal{C} := \{ c \in S_{\omega}^{\sigma} : \sqrt{\left\|\sigma_{-1}\left(c\right)c\right\|_1}\leqslant\eta \} $$where $\eta$ is a smallness bound and $\sigma \in \{\sigma_{1}, \sigma_{-1} \}$ is an automorphism of the polynomial ring $\mathcal{R}_q$ defined as $\sigma_i (X) := X^i$.
For example, assume we have a polynomial
$$ a = a_0 + a_1 X + a_2 X^2 + ... + a_{d-1} X^{d-1} $$Automorphism $\sigma_{-1}$ changes the polynomial to
$$ \sigma_{-1}(a) = a_0 + a_1 X^{-1} + a_2 X^{-2} + ... + a_{d-1} X^{-(d-1)} $$$$ = a_0 - a_{d-1} X - a_{d-2} X^2 - ... - a_1 X^{d-1}. $$Automorphism $\sigma_{1}$ does not change the polynomial.
Define the set $S_{\omega}^{\sigma}$ as a subset of $S_{\omega}$, stable under the automorphism $\sigma$:
$$ S_{\omega}^{\sigma} := \{c \in S_{\omega} : \sigma(c) = c \} $$where the set $S_{\omega}$ is a subset of the ring $\mathcal{R}_q$ consisting of polynomials with small coefficients:
$$ S_{\omega} := \{ x \in \mathcal{R}_q : \|x\|_{\infty} \leqslant \omega \}. $$For example with $\omega = 1$ only $\{-1, 0, 1\}$ coefficients are allowed, with $\omega = 2$ only $\{-2, -1, 0, 1, 2\}$. Note that small negative integers modulo $q$ become large, but because of the way we defined the infinity norm, this results in small norm polynomials.
def sigma_n1(poly):
"""Apply sigma_(-1) automorphism to a polynomial"""
return R_q([poly[0]] + [-coeff for coeff in reversed(list(poly)[1:])])
def sigma_n1_vec(vec):
"""Apply sigma_(-1) automorphism to a vector of polynomials"""
return sg.vector(R_q, [sigma_n1(poly) for poly in vec])
def sigma_n1_mat(mat):
"""Apply sigma_(-1) automorphism to a matrix of polynomials"""
return sg.matrix(R_q, [[sigma_n1(poly) for poly in vec] for vec in mat])
For various applications, we will need to prove relations between the images of polynomial vectors under an automorphism $\sigma \in \text{Aut}(\mathcal{R})$. For this, we define the structure:
$$ \langle x \rangle_{\sigma} := (x, \sigma (x), ..., \sigma^{k - 1} (x)) \in \mathcal{R}_q^k \quad \text{for}\; x \in \mathcal{R}_q, $$where $k$ is the degree of the automorphism ( $ k_{\sigma_1} = 1,\; k_{\sigma_{-1}} = 2 $ ). Note that $\langle x \rangle_{\sigma_1} = x$, therefore this is only relevant if we use $\sigma_{-1}$.
Similarly for a vector $\mathbf{x} = (x_1, ..., x_n)$ we define:
$$ \langle \mathbf{x} \rangle_{\sigma} := (\langle x_1 \rangle_{\sigma}, ..., \langle x_n \rangle_{\sigma}) \in \mathcal{R}_q^{kn}. $$For any $\mathbf{x}, \mathbf{y} \in \mathcal{R}_q^n$ and any $c \in \mathcal{R}_q$ such that $\sigma (c) = c$ it holds that:
$$ \langle \mathbf{x} || \mathbf{y} \rangle_{\sigma} = \langle \mathbf{x} \rangle_{\sigma} || \langle \mathbf{y} \rangle_{\sigma} \quad \text{and} \quad \langle \mathbf{x} + c \mathbf{y} \rangle_{\sigma} = \langle \mathbf{x} \rangle_{\sigma} + c \langle \mathbf{y} \rangle_{\sigma} $$k_sigma_1 = 1
k_sigma_n1 = 2
def sigma_n1_struct_poly(poly):
"""Create the <x>_(sigma_(-1)) structure for a polynomial"""
return sg.vector(R_q, [poly, sigma_n1(poly)])
def sigma_n1_struct_vec(vec):
"""Create the <x>_(sigma_(-1)) structure for a vector (doubles vector size)"""
return sg.vector(R_q, [f(poly) for poly in vec for f in (lambda x: x, sigma_n1)])
Challenge variant 1 uses the automorphism $\sigma = \sigma_{1}$, which is an identity function.
Example parameters for a reasonably sized challenge space are
$$ \omega = 1,\, \eta = 27,\, |S_{\omega}^{\sigma}| = 2^{202},\, |C| = 2^{201}. $$eta_challenge_v1 = 27 # smallness bound
def rand_c_v1():
"""
Generate random challenge polynomial in R_q^l
with only inf_norm_Zq <= 1 coefficients
"""
return R_q(random.choices([-1, 0, 1], k=d))
def check_c_v1(c):
"""Return true if challenge polynomial fits the criteria"""
k = 32
c_pow_k = c**k
norm = p_norm_Rq(sigma_n1(c_pow_k) * c_pow_k, 1)
return bool(norm**(1 / (2 * k)) <= eta_challenge_v1)
def get_challenge_v1():
c = rand_c_v1()
while not check_c_v1(c):
c = rand_c_v1()
return c
def test_small_norm_lemma(get_challenge_func):
"""
Test if norm does not increase by more than fixed factor
after multiplying a challenge polynomial to a random vector in Rq^l
"""
test_c = get_challenge_func()
test_r = rand_Rql(8)
norm_r = norm_Rql(test_r)
print(f"r norm: {float(norm_r)}")
res = test_c * test_r
norm_res = norm_Rql(res)
print(f"result norm: {float(norm_res)}")
norm_increase = norm_res / norm_r
print(f"norm increase: {float(norm_increase)}")
k = 32
norm_bound = p_norm_Rq(sigma_n1(test_c**k) * test_c**k, 1)**(1/(2*k))
print(f"norm bound: {float(norm_bound)}")
assert(norm_increase < norm_bound)
#test_small_norm_lemma(get_challenge_v1)
Challenge variant 2 uses automorphism $\sigma = \sigma_{-1}$.
Example parameters for reasonably sized challenge space are
$$ \omega = 2,\, \eta = 59,\, |S_{\omega}^{\sigma}| = 2^{148},\, |C| = 2^{147}. $$eta_challenge_v2 = 59 # smallness bound
def rand_c_v2():
"""
Generate random challenge polynomial in R_q^l
with only inf_norm_Zq <= 2 coefficients.
In order to satisfy sigma_n1(c) == c it must have symmetric coefficients
"""
first_half = random.choices([-2, -1, 0, 1, 2], k=d//2)
second_half = [-coeff for coeff in first_half[1:]]
second_half.reverse()
second_half = [0] + second_half
return R_q(first_half + second_half)
def check_c_v2(c):
"""Return true if challenge polynomial fits the criteria"""
k = 32
c_pow_k = c**k
norm = p_norm_Rq(sigma_n1(c_pow_k) * c_pow_k, 1)
return bool(norm**(1 / (2 * k)) <= eta_challenge_v2)
def get_challenge_v2():
c = rand_c_v2()
while not check_c_v2(c):
c = rand_c_v2()
return c
#test_small_norm_lemma(get_challenge_v2)
Here, we provide a protocol that gives an opening proof for the ABDLOP commitment scheme and proves a linear relation over $\mathcal{R}_q$.
Private information:
$$ \mathbf{s}_{1} \in \mathcal{R}_{q}^{m_1},\, \mathbf{m} \in \mathcal{R}_{q}^{\ell},\, \mathbf{s}_{2} \in \mathcal{R}_{q}^{m_2} $$Public information:
$$ \mathbf{A}_{1} \in {\mathcal{R}}_{q}^{n \times m_{1}},\, \mathbf{A}_{2} \in {\mathcal{R}}_{q}^{n \times m_{2}},\, \mathbf{B} \in {\mathcal{R}}_{q}^{n \times m_{2}},\, \mathbf{R}_{1} \in {\mathcal{R}}_{q}^{N \times (m_{1} + \ell)},\, \mathbf{r}_0 \in \mathcal{R}_{q}^{N} $$$$ {\Bigg[}{\begin{array}{l}{\mathbf{t}_{A}} \\ {\mathbf{t}_{B}}\end{array}}{\Bigg]} = {\Bigg[}{\begin{array}{l}{\mathbf{A}_{1}} \\ {\mathbf{0}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{1} + {\Bigg[}{\begin{array}{l}{\mathbf{A}_{2}} \\ {\mathbf{B}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{2} + {\Bigg[}{\begin{array}{l}{\mathbf{0}} \\ {\mathbf{m}}\end{array}}{\Bigg]} $$Prove knowledge of $\mathbf{s}_1$, $\mathbf{m}$ and prove linear relations over $\mathcal{R}_q$:
$$ \mathbf{R}_{1} {\begin{bmatrix}{\mathbf{s_1}} \\ {\mathbf{m}}\end{bmatrix}} + \mathbf{r}_0 = 0 $$def abdlop_commit(m1, m2, ell, n, N,
rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, tA, tB,
R1, r0):
"""
ABDLOP opening proof with linear proofs over Rq
"""
#================================ Prover =================================
y1_smplr = GaussianSampler(R_q, std1, m1)
y1 = y1_smplr.get()
y2_smplr = GaussianSampler(R_q, std2, m2)
y2 = y2_smplr.get()
w = A1 * y1 + A2 * y2
v = R1 * stack_vec_Rql([y1, -B * y2])
#=============================== Verifier ================================
c = get_challenge_u()
#================================ Prover =================================
cs1 = c * s1
z1 = cs1 + y1
if rej_u_1(z1, cs1, std1, rep_M1):
return "Rejected"
cs2 = c * s2
z2 = cs2 + y2
if rej_u_2(z2, cs2, std2, rep_M2):
return "Rejected"
#=============================== Verifier ================================
cond1 = (norm_Rql(z1) <= std1 * sg.sqrt(2 * m1 * d)) and (norm_Rql(z2) <= std2 * sg.sqrt(2 * m2 * d))
cond2 = (A1 * z1 + A2 * z2 - c * tA == w)
cond3 = (R1 * stack_vec_Rql([z1, c * tB - B * z2]) + c * r0 == v)
return cond1 and cond2 and cond3
Here, we provide example parameters to test the ABDLOP commitment opening proof with linear relations over $\mathcal{R}_q$.
def test_abdlop_commit():
"""Call abdlop_commit with example parameters"""
# Size parameters
m1 = 8 # size of s1
m2 = 25 # size of s2
ell = 2 # size of m
n = 9 # dimension of the Module-SIS problem
N = 1 # number of linear functions to prove
# Algorithm parameters
is_sigma_1 = False # automorphism: True for sigma_1, False for sigma_n1
if is_sigma_1:
eta = eta_challenge_v1 # challenge norm upper bound
get_challenge_u = get_challenge_v1 # challenge sampling algorithm
else:
eta = eta_challenge_v2
get_challenge_u = get_challenge_v2
rej_u_1 = rej1 # rejection sampling algorithm
rej_u_2 = rej2
gamma1 = 19 # repetition rate parameter
nu_s1 = 1 # s1 max coefficient
alpha_s1 = norm_Rql_bound(m1, nu_s1) # s1 norm upper bound
std1 = gamma1 * eta * alpha_s1 # standard deviation
rep_M1 = rej1_M(gamma1) # repetition rate
gamma2 = 2
nu_s2 = 1
alpha_s2 = norm_Rql_bound(m2, nu_s2)
std2 = gamma2 * eta * alpha_s2
rep_M2 = rej2_M(gamma2)
global rep_rate_abdlop_commit
rep_rate_abdlop_commit = rep_M1 * 2*rep_M2
# Private information
s1 = rand_Rql_small(m1, nu_s1) # ABDLOP Ajtai part message vector
s2 = rand_Rql_small(m2, nu_s2) # ABDLOP randomness vector
m = rand_Rql(ell) # ABDLOP BDLOP part message vector
# Public information
A1 = rand_Rq_mat(n, m1) # ABDLOP A1
A2 = rand_Rq_mat(n, m2) # ABDLOP A2
B = rand_Rq_mat(ell, m2) # ABDLOP B
# ABDLOP commitment
tA_tB = A1.stack(ZM(ell, m1)) * s1 + A2.stack(B) * s2 + stack_vec_Rql([zv(n), m])
tA = tA_tB[:n] # ABDLOP tA
tB = tA_tB[n:] # ABDLOP tB
# Linear functions
R1 = rand_Rq_mat(N, m1 + ell)
r0 = -(R1 * stack_vec_Rql([s1, m]))
return abdlop_commit(m1, m2, ell, n, N,
rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, tA, tB,
R1, r0)
def repeat_until_accept(protocol_func):
"""Repeat protocol until it is not rejected anymore"""
result = protocol_func()
reps = 1
while result == "Rejected":
result = protocol_func()
reps += 1
return {"success": result, "reps": reps}
def measure_rep_rate(protocol_func, exp_rep_rate, runs=100):
"""Measure repetition rate of protocol"""
print(f"Expected repetition rate: {float(exp_rep_rate)}")
results = [protocol_func() for _ in range(runs)]
result_dict = {res:results.count(res) for res in set(results)}
print(f"Measured repetition rate: {float(runs / result_dict[True])}")
print(f"Results: {result_dict}")
# Run protocol with example parameters and repeat on rejection until acceptance
repeat_until_accept(test_abdlop_commit)
# Measure repetition rate
#measure_rep_rate(test_abdlop_commit, rep_rate_abdlop_commit)
We transform the previous protocol which proves that committed values satisfy a linear relation over $\mathcal{R}_q$ into one that proves knowledge of the constant coefficient of a linear relation of $\mathcal{R}_q$.
If $\vec{r}, \vec{s} \in \mathbb{Z}_q$ are vectors consisting of the coefficients of polynomials $r, s \in \mathcal{R}_q$ respectively, then $\langle \vec{r}, \vec{s} \rangle \mod q$ is equal to the constant coefficient of the polynomial product $r(X^{-1}) * s(X)$.
The procedure to prove that $\langle \vec{r}, \vec{s} \rangle \mod q = \alpha$ is to create polynomial vectors $\mathbf{r}, \mathbf{s} \in \mathcal{R}_q^k$, such that $\widetilde{\langle \mathbf{r}, \mathbf{s} \rangle}$ (inner product over $\mathcal{R}_q$) is equal to $\langle \vec{r}, \vec{s} \rangle$.
Due to the need to mask all but the constant coefficient, additional commitments are created during the proof by appending these to the BDLOP part of the commitment scheme using public randomness $\mathbf{B}_g$.
Private information:
$$ \mathbf{s}_{1} \in \mathcal{R}_{q}^{m_1}, \mathbf{m} \in \mathcal{R}_{q}^{\ell}, \mathbf{s}_{2} \in \mathcal{R}_{q}^{m_2} $$Public information:
$$ \mathbf{A}_{1} \in {\mathcal{R}}_{q}^{n \times m_{1}},\, \mathbf{A}_{2} \in {\mathcal{R}}_{q}^{n \times m_{2}},\, \mathbf{B} \in {\mathcal{R}}_{q}^{\ell \times m_{2}},\, \mathbf{B}_{g} \in {\mathcal{R}}_{q}^{\lambda \times m_{2}} $$$$ {\Bigg[}{\begin{array}{l}{\mathbf{t}_{A}} \\ {\mathbf{t}_{B}}\end{array}}{\Bigg]} = {\Bigg[}{\begin{array}{l}{\mathbf{A}_{1}} \\ {\mathbf{0}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{1} + {\Bigg[}{\begin{array}{l}{\mathbf{A}_{2}} \\ {\mathbf{B}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{2} + {\Bigg[}{\begin{array}{l}{\mathbf{0}} \\ {\mathbf{m}}\end{array}}{\Bigg]} , $$Prove knowledge of $\mathbf{s} = \mathbf{s}_{1} || \mathbf{m}$ and prove linear relations over $\mathcal{R}_q$:
$$ \mathbf{R}_{1} \in {\mathcal{R}}_{q}^{N \times (m_{1} + \ell)},\, \mathbf{r}_0 \in \mathcal{R}_{q}^{N} $$$$ \mathbf{R}_1 \mathbf{s} + \mathbf{r}_0 = \mathbf{0} $$and prove that the constant coefficient is $0$ for:
$$ \mathbf{u}_{1,1}^{T}, \dots, \mathbf{u}_{M,1}^{T} \in \mathcal{R}_{q}^{m_1 + \ell},\, u_{1,0}, \dots, u_{M,0} \in \mathcal{R}_{q} $$$$ \mathbf{u}_{i,1}^{T} \mathbf{s} + u_{i,0} = 0 \quad \,\text{for}\; i = 1,\dots,M $$def abdlop_linear(m1, m2, ell, n, N, M, lambd,
rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, s, A1, A2, B, Bg, tA, tB,
R1, r0, u1, u0):
"""ABDLOP linear proofs over Zq"""
#================================ Prover =================================
g = rand_Rql_first_zero(lambd)
tg = Bg * s2 + g
#=============================== Verifier ================================
Y = rand_Zq_mat(lambd, M)
#================================ Prover =================================
h = g + Y * stack_vec_Rql([u1[i].row() * s + u0[i] for i in range(M)])
# Append Bg to B, g to m and tg to tB
tB = stack_vec_Rql([tB, tg])
B = B.stack(Bg)
m = stack_vec_Rql([m, g])
ell = ell + lambd
# To prove well-formedness of h[1],...,h[lambd]
u1_mat = u1[0].row()
for i in range(1, M):
u1_mat = u1_mat.stack(u1[i].row())
V1 = Y * u1_mat
v0 = Y * stack_vec_Rql([u0[i] for i in range(M)]) - h
R1 = R1.augment(ZM(N, lambd)).stack(V1.augment(IM(lambd)))
r0 = stack_vec_Rql([r0, v0])
# Run ABDLOP commit sub-protocol
cond1 = abdlop_commit(m1, m2, ell, n, N,
rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, tA, tB,
R1, r0)
if cond1 == "Rejected":
return "Rejected"
#=============================== Verifier ================================
cond2 = all(h[i][0] == 0 for i in range(lambd)) # constant coefficient is 0
return cond1 and cond2
Here, we provide example parameters to test linear proofs over $\mathbb{Z}_q$.
def test_abdlop_linear():
"""Call abdlop_linear with example parameters"""
# Size parameters
m1 = 8 # size of s1
m2 = 25 # size of s2
ell = 2 # size of m
n = 9 # dimension of the Module-SIS problem
N = 1 # number of linear functions over Rq
M = 1 # number of linear functions over Zq
lambd = M # number of random challenges for each of the M linear functions (must be M)
# Algorithm parameters
is_sigma_1 = False # automorphism: True for sigma_1, False for sigma_n1
if is_sigma_1:
eta = eta_challenge_v1 # challenge norm upper bound
get_challenge_u = get_challenge_v1 # challenge sampling algorithm
else:
eta = eta_challenge_v2
get_challenge_u = get_challenge_v2
rej_u_1 = rej1 # rejection sampling algorithm
rej_u_2 = rej2
gamma1 = 19 # repetition rate parameter
nu_s1 = 1 # s1 max coefficient
alpha_s1 = norm_Rql_bound(m1, nu_s1) # s1 norm upper bound
std1 = gamma1 * eta * alpha_s1 # standard deviation
rep_M1 = rej1_M(gamma1) # repetition rate
gamma2 = 2
nu_s2 = 1
alpha_s2 = norm_Rql_bound(m2, nu_s2)
std2 = gamma2 * eta * alpha_s2
rep_M2 = rej2_M(gamma2)
global rep_rate_abdlop_linear
rep_rate_abdlop_linear = rep_M1 * 2*rep_M2
# Private information
s1 = rand_Rql_small(m1, nu_s1) # ABDLOP Ajtai part message vector
s2 = rand_Rql_small(m2, nu_s2) # ABDLOP randomness vector
m = rand_Rql(ell) # ABDLOP BDLOP part message vector
s = stack_vec_Rql([s1, m])
# Public information
A1 = rand_Rq_mat(n, m1) # ABDLOP A1
A2 = rand_Rq_mat(n, m2) # ABDLOP A2
B = rand_Rq_mat(ell, m2) # ABDLOP B
Bg = rand_Rq_mat(lambd, m2) # randomness matrix
# ABDLOP commitment
tA_tB = (A1.stack(ZM(ell, m1)) * s1
+ A2.stack(B) * s2
+ stack_vec_Rql([zv(n), m]))
tA = tA_tB[:n] # ABDLOP tA
tB = tA_tB[n:] # ABDLOP tB
# Linear functions
R1 = rand_Rq_mat(N, m1 + ell)
r0 = -(R1 * stack_vec_Rql([s1, m]))
u1 = [rand_Rql(m1 + ell) for i in range(M)]
u0 = [-u1[i].row() * s for i in range(M)]
return abdlop_linear(m1, m2, ell, n, N, M, lambd,
rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, s, A1, A2, B, Bg, tA, tB,
R1, r0, u1, u0)
# Run protocol with example parameters and repeat on rejection until acceptance
repeat_until_accept(test_abdlop_linear)
# Measure repetition rate
#measure_rep_rate(test_abdlop_linear, rep_rate_abdlop_linear)
Here, we provide a protocol to prove a single quadratic relation using an ABLDOP commitment opening proof.
Private information:
$$ \mathbf{s}_{1} \in \mathcal{R}_{q}^{m_1},\, \mathbf{m} \in \mathcal{R}_{q}^{\ell},\, \mathbf{s}_{2} \in \mathcal{R}_{q}^{m_2} $$Public information:
$$ \mathbf{A}_{1} \in {\mathcal{R}}_{q}^{n \times m_{1}},\, \mathbf{A}_{2} \in {\mathcal{R}}_{q}^{n \times m_{2}},\, \mathbf{B} \in {\mathcal{R}}_{q}^{\ell \times m_{2}},\, \mathbf{b} \in {\mathcal{R}}_{q}^{m_{2}}, $$$$ {\Bigg[}{\begin{array}{l}{\mathbf{t}_{A}} \\ {\mathbf{t}_{B}}\end{array}}{\Bigg]} = {\Bigg[}{\begin{array}{l}{\mathbf{A}_{1}} \\ {\mathbf{0}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{1} + {\Bigg[}{\begin{array}{l}{\mathbf{A}_{2}} \\ {\mathbf{B}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{2} + {\Bigg[}{\begin{array}{l}{\mathbf{0}} \\ {\mathbf{m}}\end{array}}{\Bigg]} , $$Prove knowledge of $\mathbf{s} = {\begin{bmatrix}{\mathbf{s_1}} \\ {\mathbf{m}}\end{bmatrix}}$ and prove quadratic relation over $\mathcal{R}_q$:
$$ r_0 \in \mathcal{R}_{q},\, \mathbf{r}_1 \in \mathcal{R}_{q}^{k(m_1 + \ell)},\, \mathbf{R}_2 \in \mathcal{R}_{q}^{k(m_1 + \ell) \times k(m_1 + \ell)}, $$$$ \mathbf{s}^{T}\mathbf{R}_{2}\mathbf{s} + \mathbf{r}_{1}^{T}\mathbf{s} + r_0 = 0 $$where $k$ is the degree of the used automorphism $\sigma \in \{ \sigma_{1}, \sigma_{-1} \}$.
def abdlop_single_quadratic(m1, m2, ell, n,
is_sigma_1, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, b, tA, tB,
R2, r1, r0):
"""ABDLOP proof of single quadratic relation"""
#================================ Prover =================================
s = stack_vec_Rql([s1, m])
if not is_sigma_1:
s = sigma_n1_struct_vec(s)
y1_smplr = GaussianSampler(R_q, std1, m1)
y1 = y1_smplr.get()
y2_smplr = GaussianSampler(R_q, std2, m2)
y2 = y2_smplr.get()
w = A1 * y1 + A2 * y2
y = stack_vec_Rql([y1, -B*y2])
if not is_sigma_1:
y = sigma_n1_struct_vec(y)
g1 = s.dot_product(R2 * y) + y.dot_product(R2 * s) + r1.dot_product(y)
t = b.dot_product(s2) + g1
v = y.dot_product(R2 * y) + b.dot_product(y2)
#=============================== Verifier ================================
if is_sigma_1:
c = get_challenge_v1()
else:
c = get_challenge_v2()
#================================ Prover =================================
cs1 = c * s1
z1 = cs1 + y1
if rej_u_1(z1, cs1, std1, rep_M1):
return "Rejected"
cs2 = c * s2
z2 = cs2 + y2
if rej_u_2(z2, cs2, std2, rep_M2):
return "Rejected"
#=============================== Verifier ================================
z = stack_vec_Rql([z1, c*tB - B*z2])
if not is_sigma_1:
z = sigma_n1_struct_vec(z)
f = c * t - b.dot_product(z2)
cond1 = ((norm_Rql(z1) <= std1 * sg.sqrt(2 * m1 * d))
and (norm_Rql(z2) <= std2 * sg.sqrt(2 * m2 * d)))
cond2 = (A1 * z1 + A2 * z2 - c * tA == w)
cond3 = (z.dot_product(R2 * z) + c * r1.dot_product(z) + c**2 * r0 - f == v)
return cond1 and cond2 and cond3
Here, we provide example parameters to test single quadratic relation proofs over $\mathcal{R}_q$.
def test_abdlop_single_quadratic():
"""Call abdlop_single_quadratic with example parameters"""
# Size parameters
m1 = 8 # size of s1
m2 = 25 # size of s2
ell = 2 # size of m
n = 9 # dimension of the Module-SIS problem
# Algorithm parameters
is_sigma_1 = False # automorphism: True for sigma_1, False for sigma_n1
if is_sigma_1:
k = k_sigma_1 # degree of automorphism
eta = eta_challenge_v1 # challenge norm upper bound
get_challenge_u = get_challenge_v1 # challenge sampling algorithm
else:
k = k_sigma_n1
eta = eta_challenge_v2
get_challenge_u = get_challenge_v2
rej_u_1 = rej1 # rejection sampling algorithm
rej_u_2 = rej2
gamma1 = 19 # repetition rate parameter
nu_s1 = 1 # s1 max coefficient
alpha_s1 = norm_Rql_bound(m1, nu_s1) # s1 norm upper bound
std1 = gamma1 * eta * alpha_s1 # standard deviation
rep_M1 = rej1_M(gamma1) # repetition rate
gamma2 = 2
nu_s2 = 1
alpha_s2 = norm_Rql_bound(m2, nu_s2)
std2 = gamma2 * eta * alpha_s2
rep_M2 = rej2_M(gamma2)
global rep_rate_abdlop_single_quadratic
rep_rate_abdlop_single_quadratic = rep_M1 * 2*rep_M2
# Private information
s1 = rand_Rql_small(m1, nu_s1) # ABDLOP Ajtai part message vector
s2 = rand_Rql_small(m2, nu_s2) # ABDLOP randomness vector
m = rand_Rql(ell) # ABDLOP BDLOP part message vector
s = stack_vec_Rql([s1, m])
if not is_sigma_1:
s = sigma_n1_struct_vec(s)
# Public information
A1 = rand_Rq_mat(n, m1) # ABDLOP A1
A2 = rand_Rq_mat(n, m2) # ABDLOP A2
B = rand_Rq_mat(ell, m2) # ABDLOP B
b = rand_Rql(m2) # randomness vector
# ABDLOP commitment
tA_tB = A1.stack(ZM(ell, m1)) * s1 + A2.stack(B) * s2 + stack_vec_Rql([zv(n), m])
tA = tA_tB[:n] # ABDLOP tA
tB = tA_tB[n:] # ABDLOP tB
# Quadratic function
R2 = rand_Rq_mat(k * (m1 + ell), k * (m1 + ell))
r1 = rand_Rql(k *(m1 + ell))
r0 = -(s.dot_product(R2 * s) + r1.dot_product(s))
return abdlop_single_quadratic(m1, m2, ell, n,
is_sigma_1, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, b, tA, tB,
R2, r1, r0)
# Run protocol with example parameters and repeat on rejection until acceptance
repeat_until_accept(test_abdlop_single_quadratic)
# Measure repetition rate
#measure_rep_rate(test_abdlop_single_quadratic, rep_rate_abdlop_single_quadratic)
To prove multiple quadratic relations in $\mathcal{R}_q$ we linearly combine them into one and run the protocol for proving a single quadratic relation.
Private information:
$$ \mathbf{s}_{1} \in \mathcal{R}_{q}^{m_1},\, \mathbf{m} \in \mathcal{R}_{q}^{\ell},\, \mathbf{s}_{2} \in \mathcal{R}_{q}^{m_2} $$Public information:
$$ \mathbf{A}_{1} \in {\mathcal{R}}_{q}^{n \times m_{1}},\, \mathbf{A}_{2} \in {\mathcal{R}}_{q}^{n \times m_{2}},\, \mathbf{B} \in {\mathcal{R}}_{q}^{\ell \times m_{2}},\, \mathbf{b} \in {\mathcal{R}}_{q}^{m_{2}}, $$$$ {\Bigg[}{\begin{array}{l}{\mathbf{t}_{A}} \\ {\mathbf{t}_{B}}\end{array}}{\Bigg]} = {\Bigg[}{\begin{array}{l}{\mathbf{A}_{1}} \\ {\mathbf{0}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{1} + {\Bigg[}{\begin{array}{l}{\mathbf{A}_{2}} \\ {\mathbf{B}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{2} + {\Bigg[}{\begin{array}{l}{\mathbf{0}} \\ {\mathbf{m}}\end{array}}{\Bigg]} $$Prove knowledge of $\mathbf{s} = {\begin{bmatrix}{\mathbf{s_1}} \\ {\mathbf{m}}\end{bmatrix}}$ and prove $N$ quadratic relations over $\mathcal{R}_q$:
$$ f_1,...,f_N : \mathcal{R}_{q}^{k(m_1 + \ell)} \rightarrow \mathcal{R}_{q} $$where $k$ is the degree of the used automorphism $\sigma \in \{ \sigma_{1}, \sigma_{-1} \}$.
def abdlop_multiple_quadratic(m1, m2, ell, n, N,
is_sigma_1, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, b, tA, tB,
R2_is, r1_is, r0_is):
"""ABDLOP proof of multiple quadratic relations"""
#=============================== Verifier ================================
mu_is = rand_Rql(N)
#================================ Prover =================================
# Linear-combine the N equations into one
R2 = sum([mu_is[i] * R2_is[i] for i in range(N)])
r1 = sum([mu_is[i] * r1_is[i] for i in range(N)])
r0 = sum([mu_is[i] * r0_is[i] for i in range(N)])
# Run adblop single quadratic with combined equation
return abdlop_single_quadratic(m1, m2, ell, n,
is_sigma_1, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, b, tA, tB,
R2, r1, r0)
Here, we provide example parameters to test proofs of multiple quadratic relations over $\mathcal{R}_q$.
def test_abdlop_multiple_quadratic():
"""Call abdlop_multiple_quadratic with example parameters"""
# Size parameters
m1 = 8 # size of s1
m2 = 25 # size of s2
ell = 2 # size of m
n = 9 # dimension of the Module-SIS problem
N = 2 # number of quadratic equations
# Algorithm parameters
is_sigma_1 = False # automorphism: True for sigma_1, False for sigma_n1
if is_sigma_1:
k = k_sigma_1 # degree of automorphism
eta = eta_challenge_v1 # challenge norm upper bound
get_challenge_u = get_challenge_v1 # challenge sampling algorithm
else:
k = k_sigma_n1
eta = eta_challenge_v2
get_challenge_u = get_challenge_v2
rej_u_1 = rej1 # rejection sampling algorithm
rej_u_2 = rej2
gamma1 = 19 # repetition rate parameter
nu_s1 = 1 # s1 max coefficient
alpha_s1 = norm_Rql_bound(m1, nu_s1) # s1 norm upper bound
std1 = gamma1 * eta * alpha_s1 # standard deviation
rep_M1 = rej1_M(gamma1) # repetition rate
gamma2 = 2
nu_s2 = 1
alpha_s2 = norm_Rql_bound(m2, nu_s2)
std2 = gamma2 * eta * alpha_s2
rep_M2 = rej2_M(gamma2)
global rep_rate_abdlop_multiple_quadratic
rep_rate_abdlop_multiple_quadratic = rep_M1 * 2*rep_M2
# Private information
s1 = rand_Rql_small(m1, nu_s1) # ABDLOP Ajtai part message vector
s2 = rand_Rql_small(m2, nu_s2) # ABDLOP randomness vector
m = rand_Rql(ell) # ABDLOP BDLOP part message vector
s = stack_vec_Rql([s1, m])
if not is_sigma_1:
s = sigma_n1_struct_vec(s)
# Public information
A1 = rand_Rq_mat(n, m1) # ABDLOP A1
A2 = rand_Rq_mat(n, m2) # ABDLOP A2
B = rand_Rq_mat(ell, m2) # ABDLOP B
b = rand_Rql(m2) # randomness vector
# ABDLOP commitment
tA_tB = A1.stack(ZM(ell, m1)) * s1 + A2.stack(B) * s2 + stack_vec_Rql([zv(n), m])
tA = tA_tB[:n] # ABDLOP tA
tB = tA_tB[n:] # ABDLOP tB
# Quadratic functions
R2_is = [rand_Rq_mat(k * (m1 + ell), k * (m1 + ell)) for i in range(N)]
r1_is = [rand_Rql(k * (m1 + ell)) for i in range(N)]
r0_is = [-(s.dot_product(R2_is[i] * s) + r1_is[i].dot_product(s)) for i in range(N)]
return abdlop_multiple_quadratic(m1, m2, ell, n, N,
is_sigma_1, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, b, tA, tB,
R2_is, r1_is, r0_is)
# Run protocol with example parameters and repeat on rejection until acceptance
repeat_until_accept(test_abdlop_multiple_quadratic)
# Measure repetition rate
#measure_rep_rate(test_abdlop_multiple_quadratic, rep_rate_abdlop_multiple_quadratic)
Here, we make the step from relations over $\mathcal{R}_q$ to relations over $\mathbb{Z}_q$ by proving that the constant coefficient of some polynomials is $0$. The provided protocol allows us to prove simultaneously $N$ quadratic relations and that $M$ quadratic polynomials have evaluations with constant coefficient zero.
Private information:
$$ \mathbf{s}_{1} \in \mathcal{R}_{q}^{m_1},\, \mathbf{m} \in \mathcal{R}_{q}^{\ell},\, \mathbf{s}_{2} \in \mathcal{R}_{q}^{m_2} $$Public information:
$$ \mathbf{A}_{1} \in {\mathcal{R}}_{q}^{n \times m_{1}},\, \mathbf{A}_{2} \in {\mathcal{R}}_{q}^{n \times m_{2}},\, \mathbf{B} \in {\mathcal{R}}_{q}^{\ell \times m_{2}},\, \mathbf{B}_{g} \in {\mathcal{R}}_{q}^{\lambda \times m_{2}},\, \mathbf{b} \in {\mathcal{R}}_{q}^{m_{2}}, $$$$ {\Bigg[}{\begin{array}{l}{\mathbf{t}_{A}} \\ {\mathbf{t}_{B}}\end{array}}{\Bigg]} = {\Bigg[}{\begin{array}{l}{\mathbf{A}_{1}} \\ {\mathbf{0}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{1} + {\Bigg[}{\begin{array}{l}{\mathbf{A}_{2}} \\ {\mathbf{B}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{2} + {\Bigg[}{\begin{array}{l}{\mathbf{0}} \\ {\mathbf{m}}\end{array}}{\Bigg]} , $$$$ f_1,...,f_N, F_1,...,F_M : \mathcal{R}_{q}^{k(m_1 + \ell)} \rightarrow \mathcal{R}_{q} $$where $k$ is the degree of the used automorphism $\sigma \in \{ \sigma_{1}, \sigma_{-1} \}$.
def Tr(poly):
"""
Create polynomials such that for any a,b in R_q,
the polynomial y = Tr(a) + X^d/2 * Tr(b) has y_0 = 0 and y_d/2 = 0
"""
return (poly + sigma_n1(poly)) / 2
def get_U(n):
"""Create permutation matrix that swaps every 2nd element of the vector it is multiplied with"""
P = IM(n)
for i in range(1, n, 2):
P.swap_rows(i - 1, i)
return P
def abdlop_quadratic_poly(m1, m2, ell, n, k, N, M, lambd,
is_sigma_1, is_opti, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, Bg, b, tA, tB,
R2_is, r1_is, r0_is,
R2_p_is, r1_p_is, r0_p_is):
"""
ABDLOP proof of
- multiple quadratic relations
- multiple quadratic polynomials have evaluations with constant coefficient zero
"""
#================================ Prover =================================
s = stack_vec_Rql([s1, m])
if not is_sigma_1:
s = sigma_n1_struct_vec(s)
g = rand_Rql_first_zero(lambd)
tg = Bg * s2 + g
#=============================== Verifier ================================
if not is_opti:
Y = rand_Zq_mat(lambd, M)
else:
Y = rand_Zq_mat(2*lambd, M)
#================================ Prover =================================
if not is_opti:
h = [g[i] + sum([Y[i][j] * (s.dot_product(R2_p_is[j] * s) + r1_p_is[j].dot_product(s) + r0_p_is[j])
for j in range(M)])
for i in range(lambd)]
else:
h = [g[i] + sum([(Y[2*i][j] + X_q**(d//2) * Y[2*i + 1][j])
* Tr(s.dot_product(R2_p_is[j] * s) + r1_p_is[j].dot_product(s) + r0_p_is[j])
for j in range(M)])
for i in range(lambd)]
# Append Bg to B, g to m and tg to tB
tB = stack_vec_Rql([tB, tg])
B = B.stack(Bg)
m = stack_vec_Rql([m, g])
R2_is = [sg.block_matrix(R_q, [
[R2_is[i], ZM(k*(m1 + ell), k*lambd)],
[ZM(k*lambd, k*(m1 + ell)), ZM(k*lambd, k*lambd)]
]) for i in range(N)]
r1_is = [stack_vec_Rql([r1_is[i], zv(k*lambd)]) for i in range(N)]
if is_sigma_1:
e = [sg.vector(R_q, [1 if j == i else 0 for j in range(lambd)]) for i in range(lambd)]
else:
e = [sg.vector(R_q, [1 if j == 2 * i else 0 for j in range(k*lambd)]) for i in range(lambd)]
if not is_opti:
R2_is.extend([sg.block_matrix(R_q, [
[sum([Y[i][j] * R2_p_is[j] for j in range(M)]), ZM(k*(m1 + ell), k*lambd)],
[ZM(k*lambd, k*(m1 + ell)), ZM(k*lambd, k*lambd)]
]) for i in range(lambd)])
r1_is.extend([stack_vec_Rql([
sum([Y[i][j] * r1_p_is[j] for j in range(M)]),
e[i]
]) for i in range(lambd)])
r0_is.extend([sum([Y[i][j] * r0_p_is[j] for j in range(M)]) - h[i] for i in range(lambd)])
N = N + lambd
ell = ell + lambd
else:
U = get_U(2*(m1 + ell))
Ut = U.transpose()
Y_ijs = [[(Y[2*i][j] + X_q**(d//2) * Y[2*i + 1][j]) / 2 for j in range(M)] for i in range(lambd)]
R2pU_js = [R2_p_is[j] + Ut * sigma_n1_mat(R2_p_is[j]) * U for j in range(M)]
r1p_js = [r1_p_is[j] + Ut * sigma_n1_vec(r1_p_is[j]) for j in range(M)]
r0p_js = [r0_p_is[j] + sigma_n1(r0_p_is[j]) for j in range(M)]
R2_is.extend([sg.block_matrix(R_q, [
[sum([Y_ijs[i][j] * R2pU_js[j] for j in range(M)]), ZM(k*(m1 + ell), k*lambd)],
[ZM(k*lambd, k*(m1 + ell)), ZM(k*lambd, k*lambd)]
]) for i in range(lambd)])
r1_is.extend([stack_vec_Rql([
sum([Y_ijs[i][j] * r1p_js[j] for j in range(M)]),
e[i]
]) for i in range(lambd)])
r0_is.extend([sum([Y_ijs[i][j] * r0p_js[j] for j in range(M)]) - h[i] for i in range(lambd)])
N = N + lambd
ell = ell + lambd
# Run ABDLOP multiple quadratic sub-protocol
cond1 = abdlop_multiple_quadratic(m1, m2, ell, n, N,
is_sigma_1, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, b, tA, tB,
R2_is, r1_is, r0_is)
if cond1 == "Rejected":
return "Rejected"
#=============================== Verifier ================================
cond2 = all(h[i][0] == 0 for i in range(lambd)) # check constant coefficient is 0
return cond1 and cond2
Here, we provide example parameters to test quadratic proofs over $\mathbb{Z}_q$.
def test_abdlop_quadratic_poly():
"""Call abdlop_quadratic_poly with example parameters"""
# Size parameters
m1 = 8 # size of s1
m2 = 25 # size of s2
ell = 2 # size of m
n = 9 # dimension of the Module-SIS problem
N = 1 # number of quadratic euqations
M = 1 # number of quadratic polynomials with constant coefficient zero
lambd = 4 # number of random masking polynomials (boosting soundness to q1^-lambda)
# Algorithm parameters
is_sigma_1 = False # automorphism: True for sigma_1, False for sigma_n1
is_opti = True # proof size optimized sigma_n1 version with reduced garbage commitments
assert(not(is_sigma_1 and is_opti)) # is_opti only works with sigma_n1
if is_sigma_1:
k = k_sigma_1 # degree of automorphism
eta = eta_challenge_v1 # challenge norm upper bound
get_challenge_u = get_challenge_v1 # challenge sampling algorithm
else:
k = k_sigma_n1
eta = eta_challenge_v2
get_challenge_u = get_challenge_v2
if is_opti:
lambd = lambd//2 # reduced garbage commitments
rej_u_1 = rej1 # rejection sampling algorithm
rej_u_2 = rej2
gamma1 = 19 # repetition rate parameter
nu_s1 = 1 # s1 max coefficient
alpha_s1 = norm_Rql_bound(m1, nu_s1) # s1 norm upper bound
std1 = gamma1 * eta * alpha_s1 # standard deviation
rep_M1 = rej1_M(gamma1) # repetition rate
gamma2 = 2
nu_s2 = 1
alpha_s2 = norm_Rql_bound(m2, nu_s2)
std2 = gamma2 * eta * alpha_s2
rep_M2 = rej2_M(gamma2)
global rep_rate_abdlop_quadratic_poly
rep_rate_abdlop_quadratic_poly = rep_M1 * 2*rep_M2
# Private information
s1 = rand_Rql_small(m1, nu_s1) # ABDLOP Ajtai part message vector
s2 = rand_Rql_small(m2, nu_s2) # ABDLOP randomness vector
m = rand_Rql(ell) # ABDLOP BDLOP part message vector
s = stack_vec_Rql([s1, m])
if not is_sigma_1:
s = sigma_n1_struct_vec(s)
# Public information
A1 = rand_Rq_mat(n, m1) # ABDLOP A1
A2 = rand_Rq_mat(n, m2) # ABDLOP A2
B = rand_Rq_mat(ell, m2) # ABDLOP B
Bg = rand_Rq_mat(lambd, m2) # randomness matrix
b = rand_Rql(m2) # randomness vector
# ABDLOP commitment
tA_tB = A1.stack(ZM(ell, m1)) * s1 + A2.stack(B) * s2 + stack_vec_Rql([zv(n), m])
tA = tA_tB[:n] # ABDLOP tA
tB = tA_tB[n:] # ABDLOP tB
# Quadratic functions
R2_is = [rand_Rq_mat(k * (m1 + ell), k * (m1 + ell)) for i in range(N)]
r1_is = [rand_Rql(k * (m1 + ell)) for i in range(N)]
r0_is = [-(s.dot_product(R2_is[i] * s) + r1_is[i].dot_product(s)) for i in range(N)]
# Quadratic polynomials with constant coefficient zero
R2_p_is = [rand_Rq_mat_first_zero(k * (m1 + ell), k * (m1 + ell)) for i in range(M)]
r1_p_is = [rand_Rql_first_zero(k *(m1 + ell)) for i in range(M)]
r0_p_is = [-(s.dot_product(R2_p_is[i] * s) + r1_p_is[i].dot_product(s)) for i in range(M)]
return abdlop_quadratic_poly(m1, m2, ell, n, k, N, M, lambd,
is_sigma_1, is_opti, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, Bg, b, tA, tB,
R2_is, r1_is, r0_is,
R2_p_is, r1_p_is, r0_p_is)
# Run protocol with example parameters and repeat on rejection until acceptance
repeat_until_accept(test_abdlop_quadratic_poly)
# Measure repetition rate
#measure_rep_rate(test_abdlop_quadratic_poly, rep_rate_abdlop_quadratic_poly)
Here, we provide the full protocol for proving knowledge of a Module-LWE secret $\mathbf{s}_1$ such that $\mathbf{A} \mathbf{s}_1 + \mathbf{e} = \mathbf{u}$ and prove that $\| (\mathbf{s}_1, \mathbf{e}) \| \leq \mathcal{B}$
This is a simplified version of the toolbox protocol in the next section, where we only left the parts relevant for this specific proof.
In the code, we give references to the corresponding equations as described in [Ngu22] (6.25, 6.33, 6.24, 6.26, 6.29).
Private information:
$$ \mathbf{s}_{1} \in \mathcal{R}_{q}^{m_1},\, \mathbf{m} = [\,],\, \mathbf{s}_{2} \in \mathcal{R}_{q}^{m_2},\, \mathbf{\vartheta} = \vartheta_1 \in \{0, 1\} $$Public information:
$$ \mathbf{A}_{1} \in {\mathcal{R}}_{q}^{n \times m_{1}},\, \mathbf{A}_{2} \in {\mathcal{R}}_{q}^{n \times m_{2}},\, \mathbf{B} = [\,],\, \mathbf{b} \in {\mathcal{R}}_{q}^{m_{2}}, $$$$ \mathbf{B}_{\gamma} \in {\mathcal{R}}_{q}^{256/d \times m2},\, \mathbf{B}_{\beta} \in {\mathcal{R}}_{q}^{1 \times m2},\, \mathbf{B}_{\text{ext}} \in {\mathcal{R}}_{q}^{\lambda \times m2},\, \mathbf{b}_{\text{ext}} \in {\mathcal{R}}_{q}^{m2} $$$$ {\Bigg[}{\begin{array}{l}{\mathbf{t}_{A}} \\ {\mathbf{t}_{B}}\end{array}}{\Bigg]} = {\Bigg[}{\begin{array}{l}{\mathbf{A}_{1}} \\ {\mathbf{0}}\end{array}}{\Bigg]} \cdot {\begin{bmatrix}{\mathbf{s}_1} \\ {\mathbf{\vartheta}}\end{bmatrix}} + {\Bigg[}{\begin{array}{l}{\mathbf{A}_{2}} \\ {\mathbf{B}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{2} + {\Bigg[}{\begin{array}{l}{\mathbf{0}} \\ {\mathbf{m}}\end{array}}{\Bigg]} $$Because we only need the Ajtai part of the commitment, $\mathbf{B}$ and $\mathbf{m}$ are empty and this becomes
$$ \mathbf{t}_{A} = \mathbf{A}_{1} \cdot {\begin{bmatrix}{\mathbf{s}_1} \\ {\mathbf{\vartheta}}\end{bmatrix}} + \mathbf{A}_{2} \cdot \mathbf{s}_{2}. $$We prove knowledge of $\mathbf{s}_1$ and a shortness bound in the Euclidean norm of
$$ \|\mathbf{E}_s \mathbf{s}_1 + \mathbf{E}_m \mathbf{m} + \mathbf{v}\| \leq \mathcal{B} $$which is equivalent to proving knowledge of the binary polynomial $\vartheta_1$ such that
$$ \langle \text{pow}(\mathcal{B^2}), \vartheta_1 \rangle = \mathcal{B}^2 - \|\mathbf{E}_s \mathbf{s}_1 + \mathbf{E}_m \mathbf{m} + \mathbf{v}\|^2 \quad \text{over}\, \mathbb{Z} $$where $\mathbf{E}_s = {\begin{bmatrix}{\mathbf{I}_m} \\ {\mathbf{A}}\end{bmatrix}},\, \mathbf{E}_m = [\,],\, \mathbf{v} = -{\begin{bmatrix}{\mathbf{0}} \\ {\mathbf{u}}\end{bmatrix}}$.
def get_J(n, k):
"""Get x from <x>_(sigma_(-1)) structure of sigma_n1_struct_vec(x)"""
return IM(n).tensor_product(sg.matrix(R_q, [1] + [0 for _ in range(k-1)]))
def get_pow(B_square):
"""Get polynomial representing powers of 2 up to B_square"""
return R_q(sum([2**i * X_q**i for i in range(int(sg.log(B_square, 2)) + 1)]))
def N_2_binary_Rq(x):
"""Represent positive integer as binary polynomial compatible with get_pow()"""
bin_rep = [int(i) for i in bin(x)[2:]]
bin_rep.reverse()
return R_q(bin_rep)
def abdlop_mlwe(m1, m2, n, k, Z, n_is, lambd,
is_opti, rej_u_1, rej_u_2, rej_u_3, get_challenge_u,
std1, rep_M1, std2, rep_M2, std3, rep_M3, roh,
s1, s2, A1, A2, B_gamma, B_beta, B_ext, b_ext, theta, tA,
E_s_is, v_is, B_is):
"""
Prove knowledge of a Module-LWE Secret
"""
is_sigma_1 = False # always uses sigma_n1 automorphism
#================================ Prover =================================
n_ex = sum(n_is) + Z
s3 = stack_vec_Rql([E_s_is[i]*s1 + v_is[i] for i in range(Z)]
+ [theta])
y3_smplr = GaussianSampler(R_q, std3, 256//d)
y3 = y3_smplr.get()
beta3 = random.choice([-1, 1])
t_gamma = B_gamma*s2 + y3
t_beta = B_beta*s2 + sg.vector([beta3])
#=============================== Verifier ================================
R_ = rand_ZZ_mat_binomial(256, n_ex * d, 2)
#================================ Prover =================================
Rs3 = ZZl_2_Rql(R_ * Rql_2_ZZl(s3))
z3 = y3 + beta3 * Rs3
if rej_u_3(z3, Rs3, std3, rep_M3):
return "Rejected"
# Convenience matrices and vectors
U = get_U(2*(m1 + Z + 256//d + 1))
J = get_J(m1 + Z + 256//d + 1, 2)
e_is = [sg.vector(R_q, [R_q([1 if k + d*j == i else 0 for k in range(d)]) for j in range(256//d)])
for i in range(256)]
K_s = sg.block_matrix(R_q, [
[IM(m1), ZM(m1, Z), ZM(m1, 256//d + 1)]
]) * J
k_theta_is = [stack_vec_Rql([zv(m1 + i), [1], zv(Z - i - 1 + 256//d + 1)]) * J for i in range(Z)]
K_y3 = sg.block_matrix(R_q, [
[ZM(256//d, m1 + Z), IM(256//d), ZM(256//d, 1)]
]) * J
k_beta3 = (ZM(1, m1 + Z + 256//d).augment(sg.matrix(R_q, [1])) * J)[0]
# Quadratic relations over Rq
R2_is = []
r1_is = []
r0_is = []
N = 0
# Quadratic relations over Zq
R2_p_is = []
r1_p_is = []
r0_p_is = []
M = 0
# Extend R2
# 6.25, 6.33
R2_is.extend([k_beta3.column() * k_beta3.row()])
r1_is.extend([zv(2 * (m1 + Z + 256//d + 1))])
r0_is.extend([R_q(-1)])
N = N + 1
# Extend R2_p
# Exact Shortness Extensions
# 6.24 (prove approximate shortness)
block_PsEsi = sg.block_matrix(R_q, [
[sg.block_matrix(R_q, [[E_s_is[j], ZM(n_is[j], Z)] for j in range(Z)])],
[sg.block_matrix(R_q, [[ZM(Z, m1), IM(Z)]])]
])
block_Kskthetai = sg.block_matrix(R_q, [
[K_s],
[sg.block_matrix(R_q, [[k_theta_is[j].row()] for j in range(Z)])]
])
block_PsEsi_Kskthetai = block_PsEsi * block_Kskthetai
R2_p_is.extend([
k_beta3.column()
* sigma_n1_vec(ZZl_2_Rql(R_[i])).row()
* block_PsEsi_Kskthetai
for i in range(256)])
block_fvi = stack_vec_Rql([v_is[j] for j in range(Z)] + [sg.zero_vector(Z)])
r1_p_is.extend([
sigma_n1_vec(e_is[i]) * K_y3
+ (sigma_n1_vec(ZZl_2_Rql(R_[i])) * block_fvi * k_beta3)
for i in range(256)])
r0_p_is.extend([R_q(-dot_Rql(z3, e_is[i])) for i in range(256)])
M = M + 256
# 6.26 (prove beta3 in {-1, 1})
R2_p_is.extend([ZM(2*(m1 + Z + 256//d + 1), 2*(m1 + Z + 256//d + 1)) for i in range(1, d)])
r1_p_is.extend([X_q**i * k_beta3 for i in range(1, d)])
r0_p_is.extend([R_q(0) for i in range(1, d)])
M = M + (d - 1)
# Euclidean Norm Extensions
# 6.29 (exact shortness over Zq)
R2_p_is.extend([
U.transpose() * sigma_n1_mat(K_s).transpose()
* sigma_n1_mat(E_s_is[i]).transpose() * E_s_is[i]
* K_s
for i in range(Z)])
r1_p_is.extend([
v_is[i] * sigma_n1_mat(E_s_is[i]) * sigma_n1_mat(K_s) * U
+ sigma_n1_vec(v_is[i]) * E_s_is[i] * K_s
+ sigma_n1(get_pow(B_is[i]**2)) * k_theta_is[i]
for i in range(Z)])
r0_p_is.extend([R_q(dot_Rql(v_is[i], v_is[i]) - B_is[i]**2) for i in range(Z)])
M = M + Z
# Append (B_gamma, B_beta) to B, theta to s1, (y3, beta3) to m and (t_gamma, t_beta) to tB
B = B_gamma.stack(B_beta)
Bg = B_ext
b = b_ext
s1 = stack_vec_Rql([s1, theta])
m1 = m1 + Z
m = stack_vec_Rql([y3, [beta3]])
ell = 256//d + 1
tB = stack_vec_Rql([t_gamma, t_beta])
# Run quadratic poly sub-protocol
cond1 = abdlop_quadratic_poly(m1, m2, ell, n, k, N, M, lambd,
is_sigma_1, is_opti, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, Bg, b, tA, tB,
R2_is, r1_is, r0_is,
R2_p_is, r1_p_is, r0_p_is)
if cond1 == "Rejected":
return "Rejected"
#=============================== Verifier ================================
cond2 = (float(norm_Rql(z3)) <= float(roh * std3 * sg.sqrt(256)))
return cond1 and cond2
Here, we provide example parameters to test the Module-LWE secret proof.
def test_abdlop_mlwe():
"""Call abdlop_mlwe with example parameters"""
# Size parameters
m1 = 8 # size of s1
m2 = 25 # size of s2
n = 9 # dimension of the Module-SIS problem
Z = 1 # number of equations for Euclidean norm
lambd = 4 # number of random masking polynomials (boosting soundness to q1^-lambda)
n_A = 8 # height of A matrix
m_A = m1 # width of A matrix (must be same size as m1)
# Algorithm parameters
is_opti = False # proof size optimized sigma_n1 version with reduced garbage commitments
if is_opti:
lambd = lambd//2 # reduced garbage commitments
k = k_sigma_n1 # degree of automorphism
eta = eta_challenge_v2 # challenge norm upper bound
get_challenge_u = get_challenge_v2 # challenge sampling algorithm
rej_u_1 = rej1 # rejection sampling algorithm
rej_u_2 = rej2
rej_u_3 = rej_bimodal
gamma1 = 19 # repetition rate parameter
nu_s1 = 1 # s1 max coefficient
alpha_s1 = norm_Rql_bound(m1, nu_s1) # s1 norm upper bound
std1 = gamma1 * eta * sg.sqrt(alpha_s1**2 + d) # standard deviation
rep_M1 = rej1_M(gamma1) # repetition rate
gamma2 = 1
nu_s2 = 1
alpha_s2 = norm_Rql_bound(m2, nu_s2)
std2 = gamma2 * eta * alpha_s2
rep_M2 = rej2_M(gamma2)
# Private information
s1 = rand_Rql_small(m1, nu_s1) # ABDLOP Ajtai part message vector
s2 = rand_Rql_small(m2, nu_s2) # ABDLOP randomness vector
# Public information
# ABDLOP commitment
A1 = rand_Rq_mat(n, m1 + Z)
A2 = rand_Rq_mat(n, m2)
B_gamma = rand_Rq_mat(256//d, m2)
B_beta = rand_Rq_mat(1, m2)
B_ext = rand_Rq_mat(lambd, m2)
b_ext = rand_Rql(m2)
# MLWE
A = rand_Rq_mat(n_A, m_A)
e = rand_Rql_bin(m_A)
u = A * s1 + e
# Shortness in the Euclidean norm
n_is = [m_A + m_A] # size of vectors for Euclidean norm equations
E_s_is = [IM(m_A).stack(A)]
v_is = [-stack_vec_Rql([zv(m_A), u])]
norms = [norm_Rql(E_s_is[0]*s1 + v_is[0])]
B_is = [sg.sqrt(2048)] # integer Euclidean norm bound (sqrt(2048) is upper bound of norm for nu_s1 = 1)
gamma3 = 6
std3 = gamma3 * sg.sqrt(337) * sg.sqrt(d + B_is[0]**2)
rep_M3 = rej_bimodal_M(gamma3)
global rep_rate_abdlop_mlwe
rep_rate_abdlop_mlwe = rep_M1 * 2*rep_M2 * rep_M3
roh = 1.64 # z3 norm bound (for kappa=128)
# ABDLOP tA
squared_norm_diffs = [B_is[0]**2 - round(norms[0]**2)] # squared norm is always an int (round float errors)
theta = sg.vector(R_q, [N_2_binary_Rq(squared_norm_diffs[0])])
tA = A1 * stack_vec_Rql([s1, theta]) + A2 * s2 # ABDLOP tA
return abdlop_mlwe(m1, m2, n, k, Z, n_is, lambd,
is_opti, rej_u_1, rej_u_2, rej_u_3, get_challenge_u,
std1, rep_M1, std2, rep_M2, std3, rep_M3, roh,
s1, s2, A1, A2, B_gamma, B_beta, B_ext, b_ext, theta, tA,
E_s_is, v_is, B_is)
# Run protocol with example parameters and repeat on rejection until acceptance
repeat_until_accept(test_abdlop_mlwe)
# This will take a few minutes!
# Measure repetition rate
#measure_rep_rate(test_abdlop_mlwe, rep_rate_abdlop_mlwe)
This is a general protocol for proving various lattice relations by combining all the techniques discussed in previous sections. It can for example be instantiated to provide proofs for verifiable encryption or group signature schemes.
In the code, we give references to the corresponding equations as described in [Ngu22] (6.20, 6.25, 6.33, 6.21, 6.24, 6.26, 6.27, 6.28, 6.29, 6.32, 6.34).
Private information:
$$ \mathbf{s}_{1} \in \mathcal{R}_{q}^{m_1},\, \mathbf{m} \in \mathcal{R}_{q}^{\ell},\, \mathbf{s}_{2} \in \mathcal{R}_{q}^{m_2},\, \mathbf{\vartheta} = (\vartheta_1, ..., \vartheta_Z) \in \{0, 1\}^{Zd} $$Public information:
$$ \mathbf{A}_{1} \in {\mathcal{R}}_{q}^{n \times m_{1}},\, \mathbf{A}_{2} \in {\mathcal{R}}_{q}^{n \times m_{2}},\, \mathbf{B} \in {\mathcal{R}}_{q}^{\ell \times m_{2}},\, \mathbf{b} \in {\mathcal{R}}_{q}^{m_{2}}, $$$$ \mathbf{B}_{\gamma} \in {\mathcal{R}}_{q}^{2*256/d \times m2},\, \mathbf{B}_{\beta} \in {\mathcal{R}}_{q}^{2 \times m2},\, \mathbf{B}_{\text{ext}} \in {\mathcal{R}}_{q}^{\lambda \times m2},\, \mathbf{b}_{\text{ext}} \in {\mathcal{R}}_{q}^{m2} $$$$ {\Bigg[}{\begin{array}{l}{\mathbf{t}_{A}} \\ {\mathbf{t}_{B}}\end{array}}{\Bigg]} = {\Bigg[}{\begin{array}{l}{\mathbf{A}_{1}} \\ {\mathbf{0}}\end{array}}{\Bigg]} \cdot {\begin{bmatrix}{\mathbf{s}_1} \\ {\mathbf{\vartheta}}\end{bmatrix}} + {\Bigg[}{\begin{array}{l}{\mathbf{A}_{2}} \\ {\mathbf{B}}\end{array}}{\Bigg]} \cdot \mathbf{s}_{2} + {\Bigg[}{\begin{array}{l}{\mathbf{0}} \\ {\mathbf{m}}\end{array}}{\Bigg]} $$We prove knowledge of the secret vectors $\mathbf{s_1} \in \mathcal{R_q^{m_1}}$ and $\mathbf{m} \in \mathcal{R_q^{\ell}}$ which satisfy all the following relations (with $\sigma = \sigma_{-1}$):
1.) Quadratic relations over $\mathcal{R}_q$ with automorphisms. For $i \in [N]$ and public
$$ {(\mathbf{R}_{i,2}, \mathbf{r}_{i,1}, r_{i,0}) \in \mathcal{R}_q^{2(m_1 + \ell) \times 2(m_1 + \ell)} \times \mathcal{R}_q^{2(m_1 + \ell)} \times \mathcal{R}_q}: $$$$ {\langle \mathbf{s}_1 || \mathbf{m} \rangle_{\sigma}^{T}} \mathbf{R}_{i,2} {\langle \mathbf{s}_1 || \mathbf{m} \rangle_{\sigma}} + \mathbf{r}_{i,1}^{T} {\langle \mathbf{s}_1 || \mathbf{m} \rangle_{\sigma}} + r_{i,0} = 0. $$2.) Quadratic relations over $\mathbb{Z}_q$ with automorphisms. For $i \in [M]$ and public
$$ (\mathbf{R}'_{i,2}, \mathbf{r}'_{i,1}, r'_{i,0}) \in \mathcal{R}_q^{2(m_1 + \ell) \times 2(m_1 + \ell)} \times \mathcal{R}_q^{2(m_1 + \ell)} \times \mathcal{R}_q: $$$$ \text{const. coeff. of}\; {\langle \mathbf{s}_1 || \mathbf{m} \rangle_{\sigma}^{T}} \mathbf{R}'_{i,2} {\langle \mathbf{s}_1 || \mathbf{m} \rangle_{\sigma}} + \mathbf{r}_{i,1}^{'T} {\langle \mathbf{s}_1 || \mathbf{m} \rangle_{\sigma}} + r'_{i,0}\; \text{equals}\, 0. $$3.) Shortness in the infinity norm. The following polynomial vector has binary coefficients for public
$$ {\mathbf{P}_{s} \in \mathcal{R}_q^{n_{\text{bin} \times m_1}},\, \mathbf{P}_{m} \in \mathcal{R}_q^{n_{\text{bin} \times \ell}}\,}, {\,\mathbf{f} \in \mathcal{R}_q^{n_{\text{bin}}}}: $$$$ \mathbf{P}_{s} \mathbf{s}_1 + \mathbf{P}_{m} \mathbf{m} + \mathbf{f} \in \{ 0,1 \}^{n_{\text{bin}}d} $$4.) Shortness in the Euclidean norm. For $i \in [Z]$, public
$$ \text{bound}\; \mathcal{B}_i,\, \mathbf{E}_s^{(i)} \in \mathcal{R}_q^{n_i \times m_1},\, \mathbf{E}_m^{(i)} \in \mathcal{R}_q^{n_i \times \ell},\, \mathbf{v}_s^{(i)} \in \mathcal{R}_q^{n_i}: $$$$ \|\mathbf{E}_s^{(i)} \mathbf{s}_1 + \mathbf{E}_m^{(i)} \mathbf{m} + \mathbf{v}^{(i)}\| \leq \mathcal{B}_i $$which is equivalent to proving knowledge of the binary polynomial $\vartheta_{i}$ such that
$$ \langle \text{pow}(\mathcal{B_i^2}), \vartheta_i \rangle = \mathcal{B}_i^2 - \|\mathbf{E}_s^{(i)} \mathbf{s}_1 + \mathbf{E}_m^{(i)} \mathbf{m} + \mathbf{v}^{(i)}\|^2 \quad \text{over}\, \mathbb{Z}. $$5.) Approximate Shortness. For a public bound $\mathcal{B_i}$
$$ \mathbf{D}_s \in \mathcal{R}_q^{n' \times m_1},\, \mathbf{D}_m \in \mathcal{R}_q^{n' \times \ell,\, \mathbf{u} \in \mathcal{R}_q^{n'}}: $$$$ \|\mathbf{D}_s\mathbf{s}_1 + \mathbf{D}_m \mathbf{m} + \mathbf{u}\| \leq \mathcal{B}_i $$however, we are fine with convincing the verifier that
$$ \|\mathbf{D}_s\mathbf{s}_1 + \mathbf{D}_m \mathbf{m} + \mathbf{u}\|_{\infty} \leq \psi \mathcal{B}_i $$where $\psi > 1$ is a public approximation factor.
def abdlop_toolbox(m1, m2, ell, n, k, N, M, n_bin, Z, n_is, n_p, lambd,
is_opti, rej_u_1, rej_u_2, rej_u_3, rej_u_4, get_challenge_u,
std1, rep_M1, std2, rep_M2, std3, rep_M3, std4, rep_M4, roh,
s1, s2, m, A1, A2, B, B_gamma, B_beta, B_ext, b_ext, theta, tA, tB,
R2_is, r1_is, r0_is,
R2_p_is, r1_p_is, r0_p_is,
P_s, P_m, f,
E_s_is, E_m_is, v_is, B_is,
D_s, D_m, u, B_p):
"""
ABDLOP toolbox to prove various quadratic relations
- Quadratic relations over Rq
- Quadratic relations over Zq
- Shortness in the infinity norm
- Shortness in the Euclidean norm
- Approximate shortness
"""
is_sigma_1 = False # always uses sigma_n1 automorphism
#================================ Prover =================================
n_ex = n_bin + sum(n_is) + Z
s3 = stack_vec_Rql([P_s*s1 + P_m*m + f]
+ [E_s_is[i]*s1 + E_m_is[i]*m + v_is[i] for i in range(Z)]
+ [theta])
s4 = D_s*s1 + D_m*m + u
y3_smplr = GaussianSampler(R_q, std3, 256//d)
y3 = y3_smplr.get()
y4_smplr = GaussianSampler(R_q, std4, 256//d)
y4 = y4_smplr.get()
beta3 = random.choice([-1, 1])
beta4 = random.choice([-1, 1])
t_gamma = B_gamma*s2 + stack_vec_Rql([y3, y4])
t_beta = B_beta*s2 + sg.vector([beta3, beta4])
#=============================== Verifier ================================
R_ = rand_ZZ_mat_binomial(256, n_ex * d, 2)
R_p = rand_ZZ_mat_binomial(256, n_p * d, 2)
#================================ Prover =================================
Rs3 = ZZl_2_Rql(R_ * Rql_2_ZZl(s3))
z3 = y3 + beta3 * Rs3
if rej_u_3(z3, Rs3, std3, rep_M3):
return "Rejected"
Rps4 = ZZl_2_Rql(R_p * Rql_2_ZZl(s4))
z4 = y4 + beta4 * Rps4
if rej_u_4(z4, Rps4, std4, rep_M4):
return "Rejected"
# Convenience matrices and vectors
U = get_U(2*(m1 + Z + ell + 512//d + 2))
J = get_J(m1 + Z + ell + 512//d + 2, 2)
e_is = [sg.vector(R_q, [R_q([1 if k + d*j == i else 0 for k in range(d)]) for j in range(256//d)])
for i in range(256)]
K_s = sg.block_matrix(R_q, [
[IM(m1), ZM(m1, Z), ZM(m1, ell), ZM(m1, 512//d + 2)],
[ZM(ell, m1), ZM(ell, Z), IM(ell), ZM(ell, 512//d + 2)]
]) * J
K_s_sigma = sg.block_matrix(R_q, [
[IM(2*m1), ZM(2*m1, 2*Z), ZM(2*m1, 2*ell),
ZM(2*m1, 2*(512//d + 2))],
[ZM(2*ell, 2*m1), ZM(2*ell, 2*Z), IM(2*ell),
ZM(2*ell, 2*(512//d + 2))]
])
k_theta_is = [stack_vec_Rql([zv(m1 + i), [1], zv(Z - i - 1 + ell + 512//d + 2)]) * J
for i in range(Z)]
K_y3_K_y4 = sg.block_matrix(R_q, [
[ZM(256//d, m1 + Z + ell), IM(256//d), ZM(256//d, 256//d),
ZM(256//d, 2)],
[ZM(256//d, m1 + Z + ell), ZM(256//d, 256//d), IM(256//d),
ZM(256//d, 2)]
]) * J
K_y3 = K_y3_K_y4[:256//d]
K_y4 = K_y3_K_y4[256//d:]
k_beta3_k_beta4 = sg.block_matrix(R_q, [
[ZM(1, m1 + Z + ell + 512//d), sg.matrix(R_q, [1, 0])],
[ZM(1, m1 + Z + ell + 512//d), sg.matrix(R_q, [0, 1])]
]) * J
k_beta3 = k_beta3_k_beta4[0]
k_beta4 = k_beta3_k_beta4[1]
# Extend R2
# 6.20
R2_is = [K_s_sigma.transpose() * R2_is[i] * K_s_sigma for i in range(N)]
r1_is = [r1_is[i] * K_s_sigma for i in range(N)]
# 6.25, 6.33
R2_is.extend([k_beta3.column() * k_beta3.row(), k_beta4.column() * k_beta4.row()])
r1_is.extend([zv(2 * (m1 + Z + ell + 512//d + 2)), zv(2 * (m1 + Z + ell + 512//d + 2))])
r0_is.extend([R_q(-1), R_q(-1)])
N = N + 2
# Extend R2_p
# 6.21
R2_p_is = [K_s_sigma.transpose() * R2_p_is[i] * K_s_sigma for i in range(M)]
r1_p_is = [r1_p_is[i] * K_s_sigma for i in range(M)]
# Exact Shortness Extensions
# 6.24
block_PsEsi = sg.block_matrix(R_q, [
[sg.block_matrix(R_q, [[P_s, P_m, ZM(n_bin, Z)]])],
[sg.block_matrix(R_q, [[E_s_is[j], E_m_is[j], ZM(n_is[j], Z)] for j in range(Z)])],
[sg.block_matrix(R_q, [[ZM(Z, m1), ZM(Z, ell), IM(Z)]])]
])
block_Kskthetai = sg.block_matrix(R_q, [
[K_s],
[sg.block_matrix(R_q, [[k_theta_is[j].row()] for j in range(Z)])]
])
block_PsEsi_Kskthetai = block_PsEsi * block_Kskthetai
R2_p_is.extend([
k_beta3.column()
* sigma_n1_vec(ZZl_2_Rql(R_[i])).row()
* block_PsEsi_Kskthetai
for i in range(256)])
block_fvi = stack_vec_Rql([f] + [v_is[j] for j in range(Z)] + [sg.zero_vector(Z)])
r1_p_is.extend([
sigma_n1_vec(e_is[i]) * K_y3
+ (sigma_n1_vec(ZZl_2_Rql(R_[i])) * block_fvi * k_beta3)
for i in range(256)])
r0_p_is.extend([R_q(-dot_Rql(z3, e_is[i])) for i in range(256)])
M = M + 256
# 6.26
R2_p_is.extend([ZM(2*(m1 + Z + ell + 512//d + 2), 2*(m1 + Z + ell + 512//d + 2)) for i in range(1, d)])
r1_p_is.extend([X_q**i * k_beta3 for i in range(1, d)])
r0_p_is.extend([R_q(0) for i in range(1, d)])
M = M + (d - 1)
# Infinity Norm Extensions
x = sg.vector(R_q, [R_q([1 for _ in range(d)]) for i in range(n_bin)])
# 6.27
R2_p_is.extend([
U.transpose() * sigma_n1_mat(K_s).transpose()
* sg.block_matrix(R_q, [
[sigma_n1_mat(P_s).transpose() * P_s, sigma_n1_mat(P_s).transpose() * P_m],
[sigma_n1_mat(P_m).transpose() * P_s, sigma_n1_mat(P_m).transpose() * P_m]
])
* K_s
])
r1_p_is.extend([
(f - x) * sg.block_matrix([[sigma_n1_mat(P_s), sigma_n1_mat(P_m)]]) * sigma_n1_mat(K_s) * U
+ sigma_n1_vec(f) * sg.block_matrix([[P_s, P_m]]) * K_s
])
r0_p_is.extend([R_q(dot_Rql(f, f - x))])
M = M + 1
# 6.28
R2_p_is.extend([U.transpose() * sigma_n1_vec(k_theta_is[i]).column() * k_theta_is[i].row() for i in range(Z)])
r1_p_is.extend([R_q([-1 for _ in range(d)]) * sigma_n1_vec(k_theta_is[i]) * U for i in range(Z)])
r0_p_is.extend([R_q(0) for i in range(Z)])
M = M + Z
# Euclidean Norm Extensions
# 6.29
R2_p_is.extend([
U.transpose() * sigma_n1_mat(K_s).transpose()
* sg.block_matrix(R_q, [
[sigma_n1_mat(E_s_is[i]).transpose() * E_s_is[i], sigma_n1_mat(E_s_is[i]).transpose() * E_m_is[i]],
[sigma_n1_mat(E_m_is[i]).transpose() * E_s_is[i], sigma_n1_mat(E_m_is[i]).transpose() * E_m_is[i]]
])
* K_s
for i in range(Z)])
r1_p_is.extend([
v_is[i] * sg.block_matrix(R_q, [[sigma_n1_mat(E_s_is[i]), sigma_n1_mat(E_m_is[i])]]) * sigma_n1_mat(K_s) * U
+ sigma_n1_vec(v_is[i]) * sg.block_matrix(R_q, [[E_s_is[i], E_m_is[i]]]) * K_s
+ sigma_n1(get_pow(B_is[i]**2)) * k_theta_is[i]
for i in range(Z)])
r0_p_is.extend([R_q(dot_Rql(v_is[i], v_is[i]) - B_is[i]**2) for i in range(Z)])
M = M + Z
# Approximate Shortness
# 6.32
R2_p_is.extend([k_beta4.column() * sigma_n1_vec(ZZl_2_Rql(R_p[i])).row() * sg.block_matrix(R_q, [[D_s, D_m]]) * K_s
for i in range(256)])
r1_p_is.extend([sigma_n1_vec(e_is[i]) * K_y4 + sigma_n1_vec(ZZl_2_Rql(R_p[i])) * u * k_beta4 for i in range(256)])
r0_p_is.extend([R_q(-dot_Rql(z4, e_is[i])) for i in range(256)])
M = M + 256
# 6.34
R2_p_is.extend([ZM(2*(m1 + Z + ell + 512//d + 2), 2*(m1 + Z + ell + 512//d + 2)) for i in range(1, d)])
r1_p_is.extend([X_q**i * k_beta4 for i in range(1, d)])
r0_p_is.extend([R_q(0) for i in range(1, d)])
M = M + (d - 1)
# Append (B_gamma, B_beta) to B, theta to s1, (y3, y4, beta3, beta4) to m and (t_gamma, t_beta) to tB
B = B.stack(B_gamma).stack(B_beta)
Bg = B_ext
b = b_ext
s1 = stack_vec_Rql([s1, theta])
m1 = m1 + Z
m = stack_vec_Rql([m, y3, y4, [beta3], [beta4]])
ell = ell + 256//d + 256//d + 1 + 1
tB = stack_vec_Rql([tB, t_gamma, t_beta])
# Run quadratic poly sub-protocol
cond1 = abdlop_quadratic_poly(m1, m2, ell, n, k, N, M, lambd,
is_sigma_1, is_opti, rej_u_1, rej_u_2, get_challenge_u,
std1, rep_M1, std2, rep_M2,
s1, s2, m, A1, A2, B, Bg, b, tA, tB,
R2_is, r1_is, r0_is,
R2_p_is, r1_p_is, r0_p_is)
if cond1 == "Rejected":
return "Rejected"
#=============================== Verifier ================================
cond2 = (float(norm_Rql(z3)) <= float(roh * std3 * sg.sqrt(256)))
cond3 = (float(inf_norm_Rql(z4)) <= float(sg.sqrt(2 * kappa) * std4))
return cond1 and cond2 and cond3
Here, we provide example parameters to test toolbox proofs.
def test_abdlop_toolbox():
"""Call abdlop_toolbox with example parameters"""
# Size parameters
m1 = 8 # size of s1
m2 = 25 # size of s2
ell = 2 # size of m
n = 9 # dimension of the Module-SIS problem
N = 2 # number of quadratic equations
M = 3 # number of quadratic polynomials with constant coefficient zero
n_bin = 3 # size of infinity norm vector
Z = 2 # number of equations for Euclidean norm
n_is = random.choices(range(1, 4), k=Z) # size of vectors for Euclidean norm equations
n_p = 3 # size of approximate shortness vector
lambd = 4 # number of random masking polynomials (boosting soundness to q1^-lambda)
# Algorithm parameters
is_opti = False # proof size optimized sigma_n1 version with reduced garbage commitments
if is_opti:
lambd = lambd//2 # reduced garbage commitments
k = k_sigma_n1 # degree of automorphism
eta = eta_challenge_v2 # challenge norm upper bound
get_challenge_u = get_challenge_v2 # challenge sampling algorithm
rej_u_1 = rej1 # rejection sampling algorithm
rej_u_2 = rej2
rej_u_3 = rej_bimodal
rej_u_4 = rej_bimodal
gamma1 = 19 # repetition rate parameter
nu_s1 = 1 # s1 max coefficient
alpha_s1 = norm_Rql_bound(m1, nu_s1) # s1 norm upper bound
std1 = gamma1 * eta * sg.sqrt(alpha_s1**2 + d) # standard deviation
rep_M1 = rej1_M(gamma1) # repetition rate
gamma2 = 2
nu_s2 = 1
alpha_s2 = norm_Rql_bound(m2, nu_s2)
std2 = gamma2 * eta * alpha_s2
rep_M2 = rej2_M(gamma2)
# Private information
s1 = rand_Rql_small(m1, nu_s1) # ABDLOP Ajtai part message vector
s2 = rand_Rql_small(m2, nu_s2) # ABDLOP randomness vector
m = rand_Rql(ell) # ABDLOP BDLOP part message vector
s = sigma_n1_struct_vec(stack_vec_Rql([s1, m]))
# Public information
# ABDLOP commitment
A1 = rand_Rq_mat(n, m1 + Z)
A2 = rand_Rq_mat(n, m2)
B = rand_Rq_mat(ell, m2)
B_gamma = rand_Rq_mat(512//d, m2)
B_beta = rand_Rq_mat(2, m2)
B_ext = rand_Rq_mat(lambd, m2)
b_ext = rand_Rql(m2)
# Quadratic relations over Rq
R2_is = [rand_Rq_mat(k * (m1 + ell), k * (m1 + ell)) for i in range(N)]
r1_is = [rand_Rql(k * (m1 + ell)) for i in range(N)]
r0_is = [-(s.dot_product(R2_is[i] * s) + r1_is[i].dot_product(s)) for i in range(N)]
# Quadratic relations over Zq
R2_p_is = [rand_Rq_mat_first_zero(k * (m1 + ell), k * (m1 + ell)) for i in range(M)]
r1_p_is = [rand_Rql_first_zero(k * (m1 + ell)) for i in range(M)]
r0_p_is = [-(s.dot_product(R2_p_is[i] * s) + r1_p_is[i].dot_product(s)) for i in range(M)]
# Shortness in the infinity norm
P_s = rand_Rq_mat(n_bin, m1)
P_m = rand_Rq_mat(n_bin, ell)
f = -(P_s*s1 + P_m*m) + rand_Rql_bin(n_bin) # P_s*s1 + P_m*m + f is polynomial vector with binary coefficients
# Shortness in the Euclidean norm
E_s_is = [rand_Rq_mat(n_is[i], m1) for i in range(Z)]
E_m_is = [rand_Rq_mat(n_is[i], ell) for i in range(Z)]
v_is = [-(E_s_is[i]*s1 + E_m_is[i]*m) + rand_Rql_bin(n_is[i]) for i in range(Z)]
norms = [norm_Rql(E_s_is[i]*s1 + E_m_is[i]*m + v_is[i]) for i in range(Z)]
B_is = [int(norms[i] + 2) for i in range(Z)] # integer Euclidean norm bounds, all B_is should be < sqrt(q)
gamma3 = 6
std3 = gamma3 * sg.sqrt(337) * sg.sqrt(d + B_is[0]**2)
rep_M3 = rej_bimodal_M(gamma3)
roh = 1.64 # z3 norm bound (for kappa=128)
# Approximate Shortness
D_s = rand_Rq_mat(n_p, m1)
D_m = rand_Rq_mat(n_p, ell)
u = rand_Rql(n_p)
B_p = norm_Rql(D_s*s1 + D_m*m + u)
gamma4 = 6
std4 = gamma4 * sg.sqrt(337) * B_p
rep_M4 = rej_bimodal_M(gamma4)
global rep_rate_abdlop_toolbox
rep_rate_abdlop_toolbox = rep_M1 * 2*rep_M2 * rep_M3 * rep_M4
# ABDLOP tA, tB
# squared norm is always an int (round float errors)
squared_norm_diffs = [B_is[i]**2 - round(norms[i]**2) for i in range(Z)]
theta = sg.vector(R_q, [N_2_binary_Rq(squared_norm_diffs[i]) for i in range(Z)])
tA_tB = (A1.stack(ZM(ell, m1 + Z)) * stack_vec_Rql([s1, theta])
+ A2.stack(B) * s2
+ stack_vec_Rql([zv(n), m]))
tA = tA_tB[:n] # ABDLOP tA
tB = tA_tB[n:] # ABDLOP tB
return abdlop_toolbox(m1, m2, ell, n, k, N, M, n_bin, Z, n_is, n_p, lambd,
is_opti, rej_u_1, rej_u_2, rej_u_3, rej_u_4, get_challenge_u,
std1, rep_M1, std2, rep_M2, std3, rep_M3, std4, rep_M4, roh,
s1, s2, m, A1, A2, B, B_gamma, B_beta, B_ext, b_ext, theta, tA, tB,
R2_is, r1_is, r0_is,
R2_p_is, r1_p_is, r0_p_is,
P_s, P_m, f,
E_s_is, E_m_is, v_is, B_is,
D_s, D_m, u, B_p)
# This will take a few minutes!
# Run protocol with example parameters and repeat on rejection until acceptance
#repeat_until_accept(test_abdlop_toolbox)
# This will take a few minutes!
# Measure repetition rate
#measure_rep_rate(test_abdlop_toolbox, rep_rate_abdlop_toolbox, runs=10)
This is the code for benchmarking the implementation.
def benchmark_protocol(func, runs):
"""Measure runtime of a protocol, only counting successful runs"""
i = 0
total_time = 0
while i < runs:
start_time = time.time()
res = func()
end_time = time.time()
if res != "Rejected":
i += 1
total_time += end_time - start_time
return total_time / runs
# This will take a few minutes!
#time_commit = benchmark_protocol(test_abdlop_commit, 100)
#print(f"Prove ABDLOP opening and 1 lin. rel. over Rq: {time_commit:.3f}s")
#time_linear = benchmark_protocol(test_abdlop_linear, 100)
#print(f"Prove ABDLOP opening, 1 lin. rel. over Rq and 1 lin. rel. over Zq: {time_linear:.3f}s")
#time_quadratic = benchmark_protocol(test_abdlop_single_quadratic, 100)
#print(f"Prove ABDLOP opening and 1 quad. rel. over Rq: {time_quadratic:.3f}s")
#time_quadratic_poly = benchmark_protocol(test_abdlop_quadratic_poly, 100)
#print(f"Prove ABDLOP opening, 1 quad. rel. over Rq and 1 quad. rel. over Zq: {time_quadratic_poly:.3f}s")
#time_mlwe = benchmark_protocol(test_abdlop_mlwe, 10)
#print(f"Prove Module-LWE secret: {time_mlwe:.3f}s")
We measure the runtime of our implemented protocols. Remember that rejection sampling forces us to restart a protocol with a certain probability, increasing the runtime. However, the most computationally expensive parts of the protocols are independent of the randomly sampled values, so they would not have to be repeated on rejection. For example, in the Module-LWE secret proof, computing the $\mathbf{R'}_{i,2}$ matrices is very expensive. Since we focus on readability in our implementation and restructuring the code for efficiency would make it harder to read, we simply repeat the entire protocol on rejection. For the benchmarks, we measure the expected time of a successful run without repetitions, giving us a more realistic picture of the protocol runtimes.
The following measurements were made on a consumer laptop with a 4.1GHz AMD Ryzen 7 PRO 4750U running Linux 6.6, Python 3.11 and SageMath 10.2 using the configurations given in the code above, which give us 128 bits of security.
$$ \begin{array}{l|r|r} \text{Proof Type} & \text{Runtime} & \text{Proof Size} \\ \hline \text{opening + 1 lin. rel. over Rq} & \text{390 ms} & \text{21.5 KB} \\ \text{opening + 1 lin. rel. over Rq + 1 lin. rel. over Zq} & \text{620 ms} & \text{23 KB} \\ \text{opening + 1 quad. rel. over Rq} & \text{510 ms} & \text{22 KB} \\ \text{opening + 1 quad. rel. over Rq + 1 quad. rel. over Zq} & \text{830 ms} & \text{24 KB} \\ \text{Module-LWE secret} & \text{35 s} & \text{29 KB} \\ \end{array} $$Proving knowledge of a simple ABLDOP opening together with one linear relation over Rq takes around 400ms using our unoptimized Python/Sage implementation. Moving to relations over Zq and to the quadratic variants does not increase the proof time significantly. However, there is a big jump when proving knowledge of a Module-LWE secret. This is because in this proof we need to prove relations over $\mathbb{Z}$, which we do using the approximate range proof technique. This technique requires us to construct around 300 large commitment matrices (see "Exact Shortness Extensions" in the code), which is a major performance bottleneck of Lantern. Speeding up matrix multiplication of polynomial matrices is therefore key to the performance of the scheme, similar to other lattice-based schemes.
Esgin et al. [ENS20], who proposed the previously most efficient lattice-based scheme report a runtime (prover + verifier) of around 4ms. Their implementation is heavily optimized for Intel CPUs using AVX2 instructions and they used many of the implementation techniques that have been used to speed up the popular Kyber [BDK+18], Dilithium [DKL+18], and NTTRU [LS19] lattice-based schemes. For fast polynomial multiplication and fast computation of automorphisms, they use the number theoretic transform (NTT) and heavily vectorized computations. This shows the potential for an optimized Lantern implementation since many of these techniques can also be applied to speed up our implementation. However there are some difficulties with Lantern: First, the modulus is not NTT-friendly, so standard fast algorithms for polynomial multiplication cannot be applied. Recent work by Chung et al. [Chu+21] which provides fast polynomial multiplication for NTT-unfriendly rings could be used instead. Secondly, sampling from a Gaussian distribution is difficult to implement efficiently and securely, i.e., hard to protect against side-channel attacks [HPR+20].
For the hash-based schemes, a Module-LWE proof implementation was done by Boschini et. al [Bos+20], who encoded the required relations in R1CS and used the libiop
C++ implementation of the Aurora [Ben+19] proof system to construct the proof of 72kB size. They needed roughly 40s on a consumer laptop, a similar runtime to our unoptimized implementation.
We have seen how to construct various efficient lattice-based ZKPs using the techniques of Lantern [LNP22]. This allows us to construct the currently shortest known Module-LWE secret proof with a proof size of 13kB. We have shown that this proof and its various sub-proofs can be constructed on a consumer laptop. Our unoptimized Python/Sage implementation does this in 35s. The major performance bottleneck of our implementation turns out to be the large polynomial matrix multiplications required to construct the commitment matrices. The authors of the previously shortest proof scheme [ENS20] applied many optimization techniques for lattice computation like AVX2 vectorization and fast polynomial multiplication to their scheme to bring their runtime down to 4ms. Many of these techniques can also be applied to Lantern, which shows the practical potential of the Lantern scheme.
We also provide a toolbox implementation, which can be used to construct proofs for various applications like verifiable encryption or group signatures [Ngu22]. These proofs should also be very efficient thanks to the techniques of Lantern.
However, there remain some open problems with Lantern. First, the Gaussian sampling required by Lantern is difficult to implement in such a way that it is efficient and protected against side-channel attacks [HPR+20]. Secondly, Lantern proofs are unfortunately not succinct, meaning they grow linearly in the number of committed messages. This means Lantern is not suited for proving large statements like circuit satisfiability. The hash-based alternatives for proving lattice statements are asymptotically succinct but unpractical because they are too slow and require unfeasible amounts of memory for more advanced proofs [Bos+20]. Succinct and concretely efficient lattice proof systems therefore remain an open problem.