Lattices and quantum complexity

October 5, 2016


A lot of cryptography has been developed based on the assumption that some problem is computationally hard. For lattice-based cryptography, the hardness assumption is that some problem on lattices is hard, usually the problem of finding the shortest vector in the lattice (SVP).

There’s an obvious caveat in every computational assumption (what if it’s wrong?), but often there are good reasons to believe it, at least compared to other assumptions. Under this lens, lattices look pretty good: there haven’t been any decent advances in classical algorithms for SVP in decades, whereas one cannot say the same about factoring or discrete log. (Of course, that might just be because fewer people care about lattices now.) Perhaps more importantly, nobody has succeeded at using quantum computers to improve the situation, in contrast with factoring and discrete log, which quantum computers break completely.

Why should anyone expect quantum computers to help? As usual in complexity, there are conjectures everywhere, but some are more plausible than others, and we can use them to guide us. The first observation is that approximating SVP within a polynomial factor (this approximation problem is known as GapSVP) is not NP-hard unless NP = coNP. Hence it is a candidate for the strange class of problems that are neither in P nor NP-complete. Other interesting candidates are graph isomorphism as well as our friends factoring and discrete log.

Now combine the widely believed conjecture that NP ≠ coNP, under which GapSVP is not NP-hard, with the also widely believed conjecture that NP ⊄ BQP. Together they leave room for a quantum algorithm that solves GapSVP: since the problem is (presumably) not NP-hard, such an algorithm would not put NP inside BQP, so nothing unlikely stands in its way. (By contrast, a quantum algorithm for an NP-hard problem would be a huge breakthrough in complexity theory, which is considered unlikely.)

The hidden subgroup problem (HSP)

Another point is that SVP is connected to the hidden subgroup problem, which has been the source of a few interesting positive results in quantum complexity theory.

Let \(G\) be a finite group and \(H \leq G\). Assume we have oracle access to a function \(f \colon G \to \{0,1\}^* \) which is promised to be constant on each coset of \(H\) and to take distinct values on distinct cosets (we say that \(f\) *hides* \(H\)). HSP asks us to find \(H\).
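As a toy illustration (the group, subgroup, and function here are made up for the example), take \(G = \mathbb{Z}_6\) and \(H = \{0, 3\}\); the function below is constant exactly on the cosets of \(H\), so it hides \(H\):

```python
# Toy instance of HSP: G = Z_6 (integers mod 6), H = {0, 3}.
# The cosets of H are {0,3}, {1,4}, {2,5}; f is constant on each
# coset and distinct across cosets, so f hides H.

def f(x: int) -> int:
    return x % 3  # x and x+3 collide, nothing else does

# Sanity check: f(x) == f(y) iff x - y is in H = {0, 3}.
H = {0, 3}
for x in range(6):
    for y in range(6):
        assert (f(x) == f(y)) == ((x - y) % 6 in H)
```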

HSP is perhaps better seen as a framework: several problems for which quantum computers seem to help can be cast as instances of it. For example, consider Simon’s problem, where we’re given oracle access to a function \(f \colon \{0,1\}^n \to \{0,1\}^n\) with the promise that either \(f\) is 1-to-1, or there is a nonzero string \(s\) such that \(f(x) = f(y)\) iff \(x \oplus y \in \{0, s\}\), and we have to find out which is the case (and, in the latter case, find \(s\)). Note that this is HSP over \(\mathbb{Z}_2^n\), with hidden subgroup \(\{0\}\) or \(\{0, s\}\).

It has been shown that a classical computer needs an exponential number of queries to \(f\) (even with randomness), whereas a quantum computer only needs a linear number of queries, plus a polynomial-time classical computation. In other words, Simon’s problem exhibits an exponential separation between classical and quantum query complexity.
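To make the quantum part concrete: each run of Simon’s quantum circuit is known to output a uniformly random string \(y\) with \(y \cdot s = 0 \pmod 2\). The sketch below (with the quantum circuit replaced by a sampler having exactly that output distribution, which of course requires knowing \(s\)) shows the classical post-processing that recovers \(s\) from about \(n\) such samples:

```python
import random

# Post-processing for Simon's algorithm. Each run of the quantum
# circuit outputs a uniformly random y with y . s = 0 (mod 2);
# quantum_round simulates exactly that distribution (using s,
# which a real run wouldn't need). Strings are int bitmasks.

def quantum_round(s: int, n: int) -> int:
    while True:
        y = random.getrandbits(n)
        if bin(y & s).count("1") % 2 == 0:  # inner product mod 2
            return y

def recover_s(s: int, n: int) -> int:
    # Collect samples until they span the (n-1)-dim space s^perp,
    # keeping them in row-echelon form over GF(2).
    basis = {}  # highest set bit (pivot) -> row
    while len(basis) < n - 1:
        y = quantum_round(s, n)
        for pivot in sorted(basis, reverse=True):
            if (y >> pivot) & 1:
                y ^= basis[pivot]
        if y:
            basis[y.bit_length() - 1] = y
    # The nullspace of the rows is {0, s}: set the one free
    # (non-pivot) bit to 1, then back-substitute, low pivots first.
    free = next(i for i in range(n) if i not in basis)
    sol = 1 << free
    for pivot in sorted(basis):
        if bin(basis[pivot] & sol).count("1") % 2 == 1:
            sol |= 1 << pivot
    return sol

n = 8
s = 0b10110101
assert recover_s(s, n) == s
```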

More famously, Shor’s algorithm for factoring is very similar to the algorithm that solves Simon’s problem. This is because factoring reduces to the problem of period finding, where we are given oracle access to a periodic function \(f \colon \mathbb{Z} \to \mathbb{Z}\) and we have to find its period. Here we can also see the problem as finding a hidden subgroup, with the caveat that since \(\mathbb{Z}\) is not finite, the problem doesn’t quite fit the definition of HSP. In any case, this is another case where quantum computers seem to help, although this doesn’t prove a separation, since we don’t know whether factoring is outside P.
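For concreteness, here is the classical reduction from factoring to period finding. The only quantum step in Shor’s algorithm is the period finding itself; below it is replaced by a brute-force search, so this sketch is exponentially slow but structurally faithful:

```python
import math
import random

# Factoring N via period finding. In Shor's algorithm, find_period
# is the quantum step; here it is brute force, so this sketch is
# exponentially slow but shows the structure of the reduction.

def find_period(a: int, N: int) -> int:
    # Period of f(x) = a^x mod N, i.e. the order of a modulo N
    # (assumes gcd(a, N) = 1, so the order exists).
    x, r = a % N, 1
    while x != 1:
        x = x * a % N
        r += 1
    return r

def factor(N: int) -> int:
    # Returns a nontrivial factor of an odd composite N that is
    # not a prime power. Each round succeeds with prob. >= 1/2.
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g          # got lucky: a shares a factor with N
        r = find_period(a, N)
        if r % 2 == 0:
            g = math.gcd(pow(a, r // 2, N) - 1, N)
            if 1 < g < N:
                return g

print(factor(15))  # prints 3 or 5
```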

As mentioned, the relevance of HSP for lattices is that SVP (more precisely, its “unique” variant) reduces to the dihedral coset problem, which is essentially HSP over the dihedral group \(D_n\). (I say “essentially” because the problem also asks that the solution proceed by the “standard method” for HSP, namely sampling cosets.) Another interesting reduction is from graph isomorphism to HSP over the symmetric group \(S_n\). As noted, graph isomorphism is unlikely to be NP-hard, so it might be that these specific instances of HSP are not out of reach, at least from the quantum point of view.
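To spell out what “sampling cosets” means: the standard method evaluates \(f\) on a uniform superposition over the group and measures the output value, which leaves a uniform superposition over a random coset of \(H\). Writing dihedral group elements as pairs \((b, x)\) with a reflection bit \(b \in \{0,1\}\) and \(x \in \mathbb{Z}_n\), and taking the hidden subgroup to be generated by the reflection encoded by a secret \(s\), these coset states take the form

\[
\frac{1}{\sqrt{2}}\bigl(|0\rangle|x\rangle + |1\rangle|x+s\rangle\bigr), \qquad x \in \mathbb{Z}_n \text{ uniformly random},
\]

and the dihedral coset problem asks us to recover \(s\) given many such states.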

LWE and quantum reductions

Nevertheless, just like in the classical case, there hasn’t been much progress in quantum algorithms for SVP. In light of that, some people have used the quantum hardness of the problem as an assumption. What makes this work is a quantum reduction from GapSVP to LWE: an efficient algorithm for LWE would yield an efficient quantum algorithm for GapSVP, so the (quantum) hardness of GapSVP implies the hardness of LWE.

The reduction, as shown by Regev, is roughly as follows. Assume we have an oracle for the LWE problem with error distribution \(\mathcal{N}(0, n)\). The first point is that it suffices to solve the discrete Gaussian sampling (DGS) problem: given a lattice \(L\) and \(r > 0\), sample a point from \(D_{L,r}\), the discrete Gaussian distribution on \(L\) with width \(r\), in which each lattice point has probability proportional to a Gaussian of width \(r\) evaluated at that point. Here, \(r\) is chosen large enough that \(D_{L,r}\) behaves somewhat like a continuous Gaussian distribution, but not too large.
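Concretely, with the Gaussian weight \(\rho_r(x) = e^{-\pi \|x\|^2 / r^2}\), the distribution is

\[
D_{L,r}(x) = \frac{\rho_r(x)}{\sum_{y \in L} \rho_r(y)}, \qquad x \in L.
\]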

The algorithm proceeds iteratively. We start with a set of \(\text{poly}(n)\) samples from \(D_{L,R}\), where \(R\) is large (say \(R = r 2^n\)), and end with a set of \(\text{poly}(n)\) samples from \(D_{L,r}\). Because \(R\) is very large, sampling from \(D_{L,R}\) can be done efficiently.

First, the algorithm uses its \(\text{poly}(n)\) samples to produce a single sample from \(D_{L,R'}\), where \(R' \le R/2\). This is done using the LWE oracle and a quantum step. Repeating this enough times, we get \(\text{poly}(n)\) samples from \(D_{L,R'}\). Now we can iterate the whole procedure \(n\) times, halving the width each time, to get \(\text{poly}(n)\) samples from \(D_{L,R/2^n} = D_{L,r}\).
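Schematically, the control flow of the reduction looks like the sketch below. Every name in it is a stand-in (the real content hides inside `one_step`, discussed next), so this only shows the iteration structure:

```python
def easy_wide_sample(L, R):
    """Stub: sampling from D_{L,R} is easy when R is very large."""
    raise NotImplementedError

def one_step(L, R, samples, lwe_oracle):
    """Stub: poly(n) width-R samples + the LWE oracle + one quantum
    step produce a single sample of width R/2."""
    raise NotImplementedError

def dgs_sample_set(L, r, n, m, lwe_oracle):
    # Maintain m = poly(n) samples while halving the width n times.
    R = r * 2**n
    samples = [easy_wide_sample(L, R) for _ in range(m)]
    for _ in range(n):  # widths R, R/2, ..., R/2^n = r
        samples = [one_step(L, R, samples, lwe_oracle) for _ in range(m)]
        R /= 2
    return samples  # distributed (approximately) as D_{L,r}
```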

In a bit more detail, each iteration step is as follows. Assume we have the samples from \(D_{L,R}\). Together with the LWE oracle, these let us solve CVP instances on the dual lattice \(L^*\) (for targets close enough to \(L^*\)). Then, we prepare a quantum state corresponding to a Gaussian-blurred version of \(L^*\); this is where the CVP solver is used. Finally, we apply a quantum Fourier transform, obtaining a state from which a sample from \(D_{L,R'}\) can be measured.
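Very schematically (glossing over discretization, normalization, and error terms, and writing \(\kappa(y)\) for the point of \(L^*\) closest to \(y\), which is what the CVP solver computes), the quantum part does:

\[
\sum_{y} \rho(y - \kappa(y))\, |y\rangle |\kappa(y)\rangle
\;\xrightarrow{\ \text{uncompute via CVP}\ }\;
\sum_{y} \rho(y - \kappa(y))\, |y\rangle
\;\xrightarrow{\ \text{QFT}\ }\;
\text{measure a sample from } D_{L,R'}.
\]

The point is that the first state, with the nearby dual-lattice point entangled alongside, is easy to prepare, and the CVP solver is exactly what lets us erase that second register coherently.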

It is interesting that there’s only a small (although many times repeated) quantum step in the reduction, and yet it seems difficult to remove. As Regev notes, an oracle for these CVP instances seems completely useless classically: the only targets we can feed it are points we built ourselves from known lattice vectors, so its answer tells us nothing new. But with a quantum computer, the oracle can be run reversibly on a superposition and then uncomputed, which in this case gives us the desired result.

It also makes me wonder whether one could get more interesting reductions by delving more deeply into the quantum world. Of course classical reductions are in general more convincing, but to me that just reflects the fact that we are a lot more uncertain about quantum computers. But quantum computers aren’t all-powerful, and it’d be interesting to see more stuff based on quantum hardness.