Factoring and Discrete Logarithms

The most obvious approach to breaking modern cryptosystems is to attack the underlying mathematical problem.

  1. Factoring: given \(N = pq, p \lt q, p \approx q\), find \(p, q\).

  2. Discrete logarithm: Given \(p, g, g^x \mod p\), find \(x\).

Classical Algorithms

  1. Brute force, e.g. trial division, which has running time \(O(p) = O(N^{1/2})\).

  2. Baby-step giant-step, Pollard rho, Pollard kangaroo. All have running time \(O(p^{1/2}) = O(N^{1/4})\). (These are also the best known methods for solving discrete log in a general cyclic group.)

Example (factoring): it is known that, using the FFT, given \(f \in \mathbb{Z}_N [x]\) of degree \(d\) and points \(x_1, ...,x_d \in \mathbb{Z}_N\), one can compute \(f(x_1),...,f(x_d)\) in time \(O(d \log d)\) and space \(O(d)\). This implies a simple \(O(N^{1/4})\) factoring algorithm: take \(d \approx N^{1/4}\), evaluate the product of each block of \(d\) consecutive integers below \(\sqrt N\) via such a polynomial evaluation, and take gcds of the results with \(N\).
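
As a concrete illustration of the square-root-time methods in item 2, here is a minimal baby-step giant-step sketch; the toy modulus and generator below are chosen purely for illustration.

    # Baby-step giant-step: find x with g^x = h (mod p), given the order n of g.
    # Uses O(sqrt(n)) multiplications and O(sqrt(n)) space.
    from math import isqrt

    def bsgs(g, h, p, n):
        m = isqrt(n) + 1
        table = {}                      # baby steps: g^j -> j for j = 0..m-1
        e = 1
        for j in range(m):
            table.setdefault(e, j)
            e = e * g % p
        step = pow(g, -m, p)            # g^{-m} mod p (modular inverse, Python 3.8+)
        gamma = h % p
        for i in range(m):              # giant steps: check h * g^{-im} against the table
            if gamma in table:
                return i * m + table[gamma]
            gamma = gamma * step % p
        return None                     # h is not in the subgroup generated by g

    # Toy example: 2 generates the full group mod the prime 101.
    print(bsgs(2, pow(2, 37, 101), 101, 100))   # prints 37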

Modern Algorithms

Define a function

\[L_{a,b}(N) = e^{b(\log N)^a (\log \log N)^{1-a}}\]

Then note that

\[ L_{0,b}(N) = (\log N)^b \]

which is polynomial in the number of bits in \(N\), and

\[ L_{1,b}(N) = N^b \]

which is exponential in the number of bits in \(N\). For values of \(a\) in between we get subexponential functions, i.e. functions that grow faster than polynomials but slower than exponentials.
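
To get a feel for these growth rates, here is a small script that evaluates \(L_{a,b}\) numerically; the 1024-bit modulus size and the constants in the table are illustrative choices only.

    from math import exp, log, log2

    def L(a, b, log_N):
        # L_{a,b}(N), taking the natural log of N as input
        return exp(b * log_N ** a * log(log_N) ** (1 - a))

    log_N = 1024 * log(2)               # a 1024-bit modulus
    for a, b, label in [(0, 1, "polynomial"),
                        (0.5, 1, "quadratic sieve"),
                        (1 / 3, 1.9, "number field sieve"),
                        (1, 0.25, "exponential (N to the 1/4)")]:
        print(f"{label}: about 2^{log2(L(a, b, log_N)):.0f} operations")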

Here is a list of some factoring algorithms and their running times. With the exception of Dixon’s algorithm, these running times are all obtained using heuristic arguments. We shall see that discrete logarithm algorithms for finite fields are similar.

  1. Dixon’s Algorithm: \(L_{1/2 , 2}(N) = e^{2 \sqrt{\log N \log \log N}}\)

  2. Continued Fractions: \(L_{1/2 , \sqrt{2}}(N) = e^{\sqrt{2} \sqrt{\log N \log \log N}}\)

  3. Quadratic Sieve: \(L_{1/2 , 1}(N) = e^{\sqrt{\log N \log \log N}}\). RSA-129 was solved using this method.

  4. Elliptic Curve: \(L_{1/2 , \sqrt{2}}(p)\), where \(p\) is the smallest prime factor of \(N\); for \(p \approx \sqrt N\) this equals \(L_{1/2, 1}(N)\). Unlike the other algorithms this one takes only polynomial space; the other algorithms have space bounds that are on par with their time bounds.

  5. Number Field Sieve ['88]: \(L_{1/3 , 1.902}(N) = e^{1.902 (\log N)^{1/3} (\log \log N)^{2/3}}\). RSA-512 was factored with this method.

The approach these algorithms take is to find random solutions to \(x^2 = y^2 \bmod N\). Given such a solution, with probability \(1/2\) we have \(x \neq \pm y \bmod N\), in which case \(\gcd(x-y,N)\) is a prime factor of \(N\) (and \(\gcd(x+y,N)\) is the other).
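
This last step is the same in every method. A minimal sketch, with toy numbers chosen purely for illustration:

    from math import gcd

    def factor_from_squares(x, y, N):
        # Given x^2 = y^2 (mod N) with x != +-y (mod N), return a nontrivial factor.
        assert (x * x - y * y) % N == 0
        d = gcd(x - y, N)
        return d if 1 < d < N else gcd(x + y, N)

    # 10^2 = 3^2 = 9 (mod 91), and 10 != +-3 (mod 91), so we recover a factor of 91.
    print(factor_from_squares(10, 3, 91))   # prints 7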

Dixon’s Algorithm

The first part of the algorithm, known as the sieving step, finds many relations of a certain form. The second part, known as the linear algebra step, uses the relations to find a solution to \(x^2 = y^2 \mod N\).

  1. Pick a random \(x\in[1,N]\) and compute \(z=x^2 \mod N\).

  2. Test if \(z\) is \(S\)-smooth, for some smoothness bound \(S\), i.e. if all prime factors of \(z\) are less than \(S\). If so, then \(z = \prod_{i=1}^k l_i^{\alpha_i}\), where \(k\) is the number of primes less than \(S\); record \(x\) together with the exponent vector \((\alpha_1,\ldots,\alpha_k)\).

  3. Repeat until \(r\) relations are found, where \(r\) is a number like \(10 k\).

  4. We have \(r\) relations (modulo \(N\)), for example:

\[ \begin{aligned} x^2_1 &= 2^2\, 3^4\, 5^1 \cdots l_k^0\\ x^2_2 &= 2^0\, 3^1\, 5^3 \cdots l_k^1\\ &\;\;\vdots\\ x^2_r &= 2^0\, 3^2\, 5^0 \cdots l_k^2 \end{aligned} \]

We wish to find a subset of these relations such that the product of the right-hand sides is a square, that is, all the exponents are even. Let \(A\) be the \(k \times r\) exponent matrix whose entry \(A_{ij}\) is the exponent of \(l_i\) in the \(j\)th relation, and find a nonzero vector \(\bar{y}\in\mathbb{Z}^r_2\) such that \(A \cdot \bar{y} = \bar{0}\) modulo 2; since \(r > k\), such a vector must exist. Then \(\bar{y}\) describes a subset of relations that multiply to give a perfect square on the right-hand side.
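
Here is a compact, very unoptimized sketch of the whole pipeline, assuming naive trial division for the smoothness tests and dense Gaussian elimination over \(\mathbb{Z}_2\); the smoothness bound and the toy semiprime at the end are illustrative choices only.

    import random
    from math import gcd, isqrt

    def primes_below(S):
        sieve = [True] * S
        for i in range(2, isqrt(S) + 1):
            if sieve[i]:
                for j in range(i * i, S, i):
                    sieve[j] = False
        return [i for i in range(2, S) if sieve[i]]

    def smooth_factor(z, base):
        # Exponent vector of z over the factor base, or None if z is not smooth.
        exps = []
        for l in base:
            e = 0
            while z % l == 0:
                z //= l
                e += 1
            exps.append(e)
        return exps if z == 1 else None

    def dixon(N, S=100):
        base = primes_below(S)
        k = len(base)
        relations = []     # (x, exponent vector) pairs
        pivots = {}        # column -> (reduced row over Z_2, set of relation indices)
        while True:
            x = random.randrange(2, N)
            v = smooth_factor(x * x % N, base)
            if v is None:
                continue
            relations.append((x, v))
            row, tag = [e % 2 for e in v], {len(relations) - 1}
            for c in range(k):                    # reduce against existing pivots
                if row[c]:
                    if c in pivots:
                        prow, ptag = pivots[c]
                        row = [(a + b) % 2 for a, b in zip(row, prow)]
                        tag = tag ^ ptag
                    else:
                        pivots[c] = (row, tag)
                        break
            else:
                # The new row is dependent: the relations indexed by 'tag' multiply
                # to a square on the right-hand side.
                X, exps = 1, [0] * k
                for i in tag:
                    X = X * relations[i][0] % N
                    exps = [a + b for a, b in zip(exps, relations[i][1])]
                Y = 1
                for l, e in zip(base, exps):
                    Y = Y * pow(l, e // 2, N) % N
                d = gcd(X - Y, N)
                if 1 < d < N:                     # succeeds with probability about 1/2
                    return d

    print(dixon(1009 * 2003))   # prints one of the two prime factors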

It remains to optimize \(S\). Define Dixon’s function as follows:

\[\psi(x,S)=\left|\{a\in\{1,\ldots,x\} : a \text{ is } S\text{-smooth}\}\right| \]

Then if we use the heuristic that the proportion of \(S\)-smooth numbers amongst the possible values of \(z\) is the same as the proportion of \(S\)-smooth numbers amongst all numbers less than \(N\), then

\[\psi(N,S)/N = \Pr_{x\in\{1,\ldots,N\}}[x \text{ is } S\text{-smooth}] \approx u^{-u}\]

where \(u = \log N / \log S\), a result due to de Bruijn.

The sieving step is faster when \(S\) is larger, and the linear algebra step is faster when \(S\) is smaller, so \(S\) must be chosen carefully. It turns out the optimal choice is roughly

\[S = L_{1/2,1/2}(N),\]

which gives a total running time of \(L_{1/2,2}(N)\). (In fact, because of the simplicity of Dixon’s algorithm, it is possible to derive these bounds non-heuristically.)
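
A quick heuristic check of these figures (a sketch, assuming trial division is used for the smoothness tests): write \(S = L_{1/2,c}(N)\), so that

\[ u = \frac{\log N}{\log S} = \frac{1}{c}\sqrt{\frac{\log N}{\log\log N}} \quad\text{and}\quad u^u = e^{u \log u} \approx L_{1/2,\,1/(2c)}(N). \]

We need about \(k \approx S\) relations, each found after about \(u^u\) random trials, and each trial costs about \(S\) divisions, so the sieving step takes roughly

\[ S^2 u^u = L_{1/2,\,2c + 1/(2c)}(N), \]

which is minimized at \(c = 1/2\), giving the bounds above; the linear algebra cost \(L_{1/2,\,3c}(N)\) is then dominated.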

The matrix involved in the linear algebra step is sparse, and many specialized optimizations have been developed to speed this step up. Note also that it is easy to distribute the sieving step amongst many machines, and furthermore, verifying that the computed relations are correct is cheap (i.e. robustness comes for free, unlike in other distributed computation projects, e.g. SETI@home).

Quadratic Sieve

Define \(f_a(x) = (x+\lfloor \sqrt{a N} \rfloor)^2 - a N\). Then since \(|y - \lfloor\sqrt{y}\rfloor^2| \approx \sqrt{y}\), we have \(f_a(x) \approx x^2 + 2x\sqrt{a N} - \sqrt{a N}\). When \(|x|\) is small (we will only use \(|x| \le B\) for a subexponential bound \(B\)), \(f_a(x)\) is on the order of \(\sqrt{a N}\), up to a small factor.

Then pick a small random \(a \leftarrow\{1,...,k\}\). Find all \(x\in[-B,B]\) (we shall describe how to do this later) such that \(f_a(x)\) is \(S\)-smooth, where \(S, B, k\) will be determined later. For such \(x\) we have a relation

\[ (x+\lfloor\sqrt{a N}\rfloor)^2=\prod_{i=1}^k l_i^{\alpha_i} \]

modulo \(N\), and as before with enough of these we can proceed to the linear algebra step.

Note that \(|f_a(x)|\) is on the order of \(\sqrt{a N}\), which means it is much more likely to be \(S\)-smooth than an integer on the order of \(N\) (which is what Dixon’s algorithm works with).

To find all suitable \(x \in [-B,B]\): initialize an array of integers \(v\), indexed from \(-B\) to \(B\), with zeros. For each small prime \(l_i\), increment \(v[x]\) whenever \(f_a(x) \equiv 0 \pmod{l_i}\). Doing this requires only a simple linear scan: if \(\beta_1,\beta_2\) are the roots of \(f_a\) in \(\mathbb{Z}_{l_i}\), then we increment \(v[y]\) exactly for those \(y\) with \(y \equiv \beta_1\) or \(y \equiv \beta_2 \pmod{l_i}\). Positions whose counters end up large are likely to be \(S\)-smooth and are then tested directly.
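
A minimal sketch of this scan; the brute-force root finding, the final threshold, and the toy parameters are all illustrative simplifications.

    from math import isqrt

    def sieve_interval(N, a, B, small_primes):
        # Count, for each x in [-B, B], how many of the small primes divide f_a(x).
        m = isqrt(a * N)                        # floor(sqrt(aN))
        f = lambda x: (x + m) ** 2 - a * N
        counts = [0] * (2 * B + 1)              # index x + B holds the count for x
        for l in small_primes:
            roots = [r for r in range(l) if f(r) % l == 0]   # roots of f_a mod l
            for r in roots:
                start = -B + ((r + B) % l)      # smallest x >= -B with x = r (mod l)
                for x in range(start, B + 1, l):
                    counts[x + B] += 1
        # Positions hit by many primes are good candidates for a full smoothness test.
        return [x for x in range(-B, B + 1) if counts[x + B] >= 4]

    print(sieve_interval(1009 * 2003, 1, 50, [2, 3, 5, 7, 11, 13]))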

With optimal \(B, S, k\), we have that the running time is \(L_{1/2,1}(N)\) if we use the heuristic that \(f_a(x)\) is uniformly distributed.

Number Field Sieve

In this method, sieving is done in number fields. Define \(d = (\log N / \log \log N)^{1/3}\), and let \(m = \lfloor N^{1/d}\rfloor\). Write \(N = m^d + f_{d-1}m^{d-1} + \cdots + f_0\), i.e. write \(N\) in base \(m\), and define the polynomial \(f(x) = x^d + f_{d-1}x^{d-1} + \cdots + f_0\), so that by construction \(f(m) \equiv 0 \pmod N\).
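
A small sketch of just this polynomial-construction step; the rounding of \(d\) and the use of floating point for \(N^{1/d}\) are simplifications that are fine at toy sizes.

    from math import floor, log

    def nfs_polynomial(N):
        # Choose the degree d as in the text, set m = floor(N^(1/d)), and return the
        # base-m digits of N, least significant first, so that f(m) = N.
        # (Assumes N < m^(d+1), which holds for the sizes of interest.)
        d = max(2, round((log(N) / log(log(N))) ** (1 / 3)))
        m = floor(N ** (1 / d))
        coeffs, n = [], N
        for _ in range(d + 1):
            coeffs.append(n % m)
            n //= m
        return d, m, coeffs

    d, m, coeffs = nfs_polynomial(1009 * 2003)
    print(d, m, coeffs)
    print(sum(c * m ** i for i, c in enumerate(coeffs)) == 1009 * 2003)   # True: f(m) = N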

With overwhelming probability, \(f\) is irreducible, so define the field \(K = \mathbb{Q}[x]/f(x)\). Then find many pairs \((a,b)\) where \(0 \le a,b \le L_{1/3,0.901}(N)\) such that

  1. \(a-b m\) is \(L_{1/3,0.901}(N)\)-smooth.

  2. \(N_K(a-b x)\) is \(L_{1/3,0.901}(N)\)-smooth, where \(N_K\) is the norm on \(K\).

It turns out each pair yields a relation modulo \(N\) that can be used in the linear algebra step.

Discrete Logarithm

Suppose our input is \(y=g^\alpha \bmod p\). Then pick a smoothness bound \(S\), and proceed with index calculus:

  1. Pick random \(r, a \leftarrow \mathbb{Z}_p\) and set \(z = y^r g^a \bmod p\).

  2. Test if \(z\) is \(S\)-smooth. If so then

    \(y^r g^a = \prod_{i=1}^k l_i^{\alpha_i}\)

  3. Repeat until many (e.g. \(10k\)) relations are obtained.

  4. We have many relations of the form:

    \(r \log_g y + a = \sum_{i=1}^k \alpha_i \log_g l_i \pmod{p-1}\)

    Use linear algebra modulo \(p-1\) to solve for \(\log_g y = \alpha\) and each \(\log_g l_i\).
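
A sketch of the relation-collection stage above (steps 1 to 3); the toy prime and parameters are illustrative, and the final linear-algebra stage, which must be carried out modulo \(p-1\), is only indicated in a comment.

    import random
    from math import isqrt

    def primes_below(S):
        sieve = [True] * S
        for i in range(2, isqrt(S) + 1):
            if sieve[i]:
                for j in range(i * i, S, i):
                    sieve[j] = False
        return [i for i in range(2, S) if sieve[i]]

    def collect_relations(p, g, y, S, count):
        # Each relation has the form  r*log_g(y) + a = sum_i alpha_i*log_g(l_i)  (mod p-1).
        base = primes_below(S)
        relations = []
        while len(relations) < count:
            r = random.randrange(1, p - 1)
            a = random.randrange(0, p - 1)
            z = pow(y, r, p) * pow(g, a, p) % p
            exps = []
            for l in base:                  # trial-divide z over the factor base
                e = 0
                while z % l == 0:
                    z //= l
                    e += 1
                exps.append(e)
            if z == 1:                      # z was S-smooth: record (r, a, alpha_1..alpha_k)
                relations.append((r, a, exps))
        # Linear algebra step (not shown): treat log_g(y) and the log_g(l_i) as unknowns
        # and solve the resulting system modulo p - 1.
        return base, relations

    # Toy instance: p = 1019, g = 2, y = g^123 mod p.
    base, rels = collect_relations(1019, 2, pow(2, 123, 1019), S=30, count=15)
    print(rels[0])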

