PseudoRandom Number Generators
We want to be able to take a few "true random bits" (seed) and generate more "random-looking bits", i.e. construct a function \(G:\{0,1\}^t\rightarrow\{0,1\}^T, T \gg t\). The generated bit strings should "look random" to an adversary. In cryptography, PRNGs are used to construct session keys and stream ciphers.
True randomness is generated from some physical source such as thermal noise. Abstractly, a random source defines a distribution on \(\{0,1\}^n\).
Example: \(n\) independent bits \(b_1,\dots,b_n\) with \(Pr[b_i=1]=p\), \(Pr[b_i=0]=1-p\).
Example: \(b_1,\dots,b_n\) uniform on \(\{0,1\}^n\).
Extractors convert a random source with unknown distribution on \(\{0,1\}^n\) into a uniform distribution on \(\{0,1\}^m\), \(m \le n\); e.g. the Von Neumann extractor for independent bits with unknown bias. We shall take an \(n\)-bit random source uniform on \(\{0,1\}^n\) for granted.
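As a sketch, the Von Neumann extractor can be implemented in a few lines: read the biased bits in pairs, emit \(0\) for the pair \((0,1)\) and \(1\) for \((1,0)\), and discard equal pairs. Since \(Pr[(0,1)] = Pr[(1,0)] = p(1-p)\), the output bits are unbiased. The bias 0.8 below is an arbitrary illustration.

```python
import random

def von_neumann_extract(bits):
    """Consume bits in pairs; emit 0 for (0,1), 1 for (1,0),
    and discard (0,0)/(1,1). Output is unbiased whenever the
    input bits are independent with a fixed (unknown) bias p."""
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)  # (0,1) -> 0, (1,0) -> 1
    return out

# Biased source: independent bits with Pr[b_i = 1] = 0.8.
random.seed(0)
raw = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
extracted = von_neumann_extract(raw)
print(len(extracted), sum(extracted) / len(extracted))  # roughly 16000 bits, mean near 0.5
```

Each of the 50,000 pairs survives with probability \(2p(1-p) = 0.32\), so the extractor pays for uniformity with a constant-factor loss in length.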
Definition
The traditional definition of pseudorandom number generators involves passing a battery of statistical tests (see Knuth's "The Art of Computer Programming, Vol. 2"), but this is insufficient for cryptographic purposes.
For example, consider a linear congruential generator: pick a prime \(p\), pick \(a,b \in \mathbb{Z}_p\), pick a random seed \(s \in \mathbb{Z}_p\), and iterate \(s \leftarrow as + b \bmod p\), outputting \(LSB(s)\) at each step. This is considered a PRNG under the traditional definition, but it is completely predictable: given roughly \(\lg p\) output bits one can recover the seed efficiently.
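A brute-force sketch of this attack on a toy instance (the prime, coefficients, and seed below are illustrative; a real attack on large \(p\) would use cleverer algebra, but the point stands: once the seed is pinned down, every future bit is predictable):

```python
# Toy LCG outputting only LSB(s); the adversary knows p, a, b but not the seed.
p, a, b = 9973, 1234, 567          # illustrative small parameters

def lcg_bits(seed, k):
    """Iterate s <- a*s + b mod p, outputting LSB(s) at each step."""
    s, out = seed, []
    for _ in range(k):
        s = (a * s + b) % p
        out.append(s & 1)
    return out

secret_seed = 4242
observed = lcg_bits(secret_seed, 30)   # ~ lg p ~ 13 bits already determine the seed w.h.p.

# Brute force: keep every seed consistent with the observed output prefix.
candidates = [s for s in range(p) if lcg_bits(s, 30) == observed]
print(f"{len(candidates)} consistent seed(s); true seed recovered:",
      secret_seed in candidates)
# Any surviving candidate now predicts the generator's future output.
```

With 30 observed bits the set of consistent seeds collapses (here, essentially always to the true seed), after which the adversary simulates the generator forward.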
Instead, we require the pseudorandom number generator to fool any statistical test.
Definition (fixed security parameter version): A \((t,\epsilon)\)-PRNG is a function \(G:\{0,1\}^n \rightarrow \{0,1\}^m\) (\(m \gg n\)) such that

\(G\) is efficiently computable by a deterministic algorithm.

\(G\) "\(\epsilon\)-fools" all \(t\)-time statistical tests; that is, for all \(t\)-time algorithms \(A\),
\[\left|\Pr_{S\leftarrow\{0,1\}^n}[A(G(S))=1]-\Pr_{R\leftarrow\{0,1\}^m}[A(R)=1]\right| < \epsilon.\]
We say that the uniform distribution on \(\{0,1\}^m\) is \((t,\epsilon)\)-indistinguishable from the distribution \(\{G(S) \mid S\leftarrow\{0,1\}^n\}\).
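To make the definition concrete, here is a toy illustration of a statistical test with large advantage. The "generator" below (which simply repeats its seed) and all parameters are illustrative, not from the notes; a trivial test distinguishes its output from uniform with advantage close to 1.

```python
import random

n, m = 8, 16

def G(seed_bits):
    """A deliberately bad 'generator': stretch n bits to 2n by repetition."""
    return seed_bits + seed_bits

def A(bits):
    """Statistical test: output 1 iff the two halves are equal."""
    return bits[:m // 2] == bits[m // 2:]

random.seed(1)
trials = 10_000
rand_bits = lambda k: [random.randrange(2) for _ in range(k)]
p_pseudo = sum(A(G(rand_bits(n))) for _ in range(trials)) / trials   # exactly 1.0
p_unif   = sum(A(rand_bits(m))    for _ in range(trials)) / trials   # about 2^-8
print("estimated advantage:", abs(p_pseudo - p_unif))
```

The test accepts every output of \(G\) but almost no uniform string, so the estimated advantage \(|\Pr[A(G(S))=1]-\Pr[A(R)=1]|\) is nearly 1, far above any small \(\epsilon\).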
The Next-Bit Test
We derive an equivalent characterization of PRNGs, due to Yao, that is easier to work with. For \(R \in \{0,1\}^m\), denote the \(i\)-th bit of \(R\) by \(R_i\) and the first \(i\) bits of \(R\) by \(R_{1,\dots,i}\).
We say \(G:\{0,1\}^n\rightarrow\{0,1\}^m\) \((t,\epsilon)\)-passes the next-bit test if for all \(0\le i < m\) no \(t\)-time algorithm can predict \(G(S)_{i+1}\) from \(G(S)_{1,\dots,i}\); that is, for all \(t\)-time algorithms \(M\) and all \(0\le i < m\),
\[\Pr_{S\leftarrow\{0,1\}^n}\left[M(G(S)_{1,\dots,i}) = G(S)_{i+1}\right] < \frac{1}{2} + \epsilon.\]
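A hedged sketch of a next-bit predictor, again against a trivially bad repetition generator (the generator and all parameters are illustrative): since bit \(i+1\) of the output equals bit \(i+1-n\) of the prefix, the predictor succeeds with probability 1, violating the \(\frac{1}{2}+\epsilon\) bound for every \(\epsilon < \frac{1}{2}\).

```python
import random

n = 8

def G(seed_bits):
    """Bad generator: G(s) = s || s, stretching n bits to 2n."""
    return seed_bits + seed_bits

def M(prefix):
    """Next-bit predictor for position i = len(prefix) >= n:
    since G repeats the seed, bit i+1 equals bit i+1-n."""
    return prefix[len(prefix) - n]

random.seed(2)
trials = 10_000
i = n + 3                           # predict bit i+1 from the first i bits
wins = 0
for _ in range(trials):
    out = G([random.randrange(2) for _ in range(n)])
    wins += (M(out[:i]) == out[i])
print(wins / trials)                # = 1.0, far above 1/2 + epsilon
```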
Theorem: [Yao '82]
Suppose \(G:\{0,1\}^n \rightarrow \{0,1\}^m\) \((t/m,\epsilon)\)-passes the next-bit test. Then \(G\) is a \((t,\epsilon)\)-PRNG. ("The next-bit test is universal.")
Proof: Suppose we have a \(t\)-time algorithm \(A\) that distinguishes \(G\)'s output from uniform with advantage \(\epsilon\). Then we shall build a \((t,\epsilon/m)\)-next-bit predictor for \(G\), using a "hybrid argument":
Define the collection of distributions \(\mathcal{P}_0,\dots,\mathcal{P}_m\) as follows: the first \(i\) bits of a sample from \(\mathcal{P}_i\) are obtained by picking a random \(S \leftarrow \{0,1\}^n\) and setting them to \(G(S)_{1,\dots,i}\); the last \(m-i\) bits are drawn uniformly from \(\{0,1\}^{m-i}\). Then \(\mathcal{P}_0\) is the uniform distribution on \(\{0,1\}^m\) and \(\mathcal{P}_m\) is the output distribution \(\{G(S) \mid S\leftarrow\{0,1\}^n\}\). Hence
\[\left|\Pr[A(\mathcal{P}_m)=1]-\Pr[A(\mathcal{P}_0)=1]\right| \ge \epsilon.\]
By the triangle inequality, there exists \(0\le i < m\) with
\[\left|\Pr[A(\mathcal{P}_{i+1})=1]-\Pr[A(\mathcal{P}_i)=1]\right| \ge \frac{\epsilon}{m}.\]
We can use \(A\) to build a next-bit predictor for \(R_{i+1}\): given \(G(S)_{1,\dots,i}\), guess a random bit \(b\), pad with \(m-i-1\) uniform bits, run \(A\), and output \(b\) if \(A\) accepts (and \(\bar{b}\) otherwise).
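The hybrid distributions themselves can be sketched in code; the generator below is a toy stand-in (a repetition generator with illustrative parameters), but the sampler is exactly the definition above:

```python
import random

n, m = 8, 16

def G(seed_bits):
    """Toy stand-in generator: any deterministic stretch works here."""
    return seed_bits + seed_bits

def hybrid_sample(i):
    """Draw from P_i: first i bits from G(S), last m - i bits uniform."""
    s = [random.randrange(2) for _ in range(n)]
    return G(s)[:i] + [random.randrange(2) for _ in range(m - i)]

random.seed(3)
x0 = hybrid_sample(0)     # P_0: uniform on {0,1}^m
xm = hybrid_sample(m)     # P_m: distributed as G(S)
assert xm[:n] == xm[n:]   # the P_m sample exhibits G's structure
print(len(x0), len(xm))
```

Adjacent hybrids \(\mathcal{P}_i\) and \(\mathcal{P}_{i+1}\) differ only in how bit \(i+1\) is chosen, which is what lets a distinguishing gap at step \(i\) be converted into a predictor for that one bit.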
The "hybrid argument" is quite useful: in general, suppose experiment (game) \(E_0\) is distinguishable from experiment \(E_m\). Then build intermediate experiments \(E_1,\dots,E_{m-1}\); it follows that there exists \(0\le i < m\) such that \(E_i\) is distinguishable from \(E_{i+1}\), which can be used to solve a challenge problem.
Hardcore bits can be used to construct PRNGs using a method due to Blum and Micali.
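A toy sketch of the Blum–Micali construction, instantiated with modular exponentiation \(f(x) = g^x \bmod p\) as the candidate one-way function and the most-significant-bit predicate \(B(x) = [x > (p-1)/2]\) as the hardcore bit. The parameters below are purely illustrative (far too small, and \(g\) is only a candidate generator); they are not secure.

```python
# Blum-Micali sketch: repeatedly output the hardcore bit of the current
# state, then apply the one-way function to advance the state.
p, g = 2_147_483_647, 7            # toy Mersenne prime 2^31 - 1; illustrative only

def blum_micali(seed, k):
    x, out = seed, []
    for _ in range(k):
        out.append(1 if x > (p - 1) // 2 else 0)  # hardcore bit B(x)
        x = pow(g, x, p)                          # x <- g^x mod p
    return out

bits = blum_micali(123_456_789, 32)
print(bits)
```

If \(f\) is one-way and \(B\) is hardcore for it, the next-bit test argument above shows the output stream is pseudorandom; with these toy parameters the discrete log is of course easy.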