Computing With Continued Fractions

Ultimately, regardless of the nature of the numbers involved, we must represent them with a finite number of bits. Once we leave the comfort of \(\mathbb{Z}\), we have several options:

  • Fixed-point arithmetic: Plain integers, just scaled by a fixed factor.

  • Rationals: Well-suited to some domains, and, at a cost, our results are always exact. Sadly insufficient for many problems: even \(\sqrt{2}\) cannot be represented.

  • Floating-point arithmetic: A common choice. Easy to understand, but error analysis is annoying. Also enslaved by base 2: even 1/3 cannot be represented exactly.

  • In some applications we can, for example, represent \(a + b\pi\) as \((a, b)\), which is exact and may suit the problem at hand. But let’s assume at some stage we need to mix oil and water.

How about continued fraction representation? Some features:

  • Easy error analysis.

  • All rationals exactly represented.

  • With a little state, we can iteratively generate more precision. For example, \(\sqrt{2} = [1;2,2,2,...]\) outputs 1 the first time and then 2 every subsequent time we ask for more precision (see the sketch after this list). With floating-point, we would have to redo the entire computation at higher precision to find the next significant bit of \(\sqrt{2}\).

  • For a wide variety of rational, algebraic and transcendental numbers, this is as close as we can get to an exact representation. With the above scheme, at all times we have results to some precision, and we have stored enough state to increase that precision arbitrarily without “redoing from start”.

  • Thread-friendly: operations are like Unix pipes that only need some of the input terms to generate the next output term.
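
For concreteness, here is a minimal Python sketch of the “more precision on demand” idea (the helper names sqrt2_terms and convergents are my own, not from any library): a generator yields continued fraction terms one at a time, and we fold them into convergents without restarting the computation.

    import itertools

    def sqrt2_terms():
        """Yield the continued fraction terms of sqrt(2) = [1; 2, 2, 2, ...]."""
        yield 1
        while True:
            yield 2

    def convergents(terms):
        """Fold terms into convergents via p_i = a_i p_{i-1} + p_{i-2} (same for q)."""
        p0, p1 = 0, 1  # p_{-1}, p_0
        q0, q1 = 1, 0  # q_{-1}, q_0
        for a in terms:
            p0, p1 = p1, a * p1 + p0
            q0, q1 = q1, a * q1 + q0
            yield p1, q1

    for p, q in itertools.islice(convergents(sqrt2_terms()), 5):
        print(f"{p}/{q} = {p / q}")  # 1/1, 3/2, 7/5, 17/12, 41/29 -> 1.4142...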

Gosper also describes a related option: continued logarithms, which I may cover later.

Algorithms for Continued Fractions

Representing numbers is half the problem. We must also devise efficient algorithms for computing on them.

Inverting a continued fraction is trivial: if \(a_1 = 0\), remove this term; otherwise, insert \(0\) at the beginning of the sequence.
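
In code (a sketch, representing a finite continued fraction as a Python list of terms):

    def invert(cf):
        """Reciprocal of a continued fraction given as a list of terms."""
        # [0; a2, a3, ...] -> [a2; a3, ...]: drop the leading zero.
        # [a1; a2, ...] with a1 != 0 -> [0; a1, a2, ...]: prepend a zero.
        return cf[1:] if cf[0] == 0 else [0] + cf

    assert invert([0, 2, 3]) == [2, 3]  # 1/(3/7) = 7/3 = [2; 3]
    assert invert([2, 3]) == [0, 2, 3]  # 1/(7/3) = 3/7 = [0; 2, 3]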

Comparison of two continued fractions \([a_1; a_2, ...]\) and \([b_1; b_2, ...]\) is performed by comparing \(a_i, b_i\) where \(i\) is the smallest index with \(a_i \ne b_i\). Flip the result if \(i\) is even: each level of reciprocation reverses the ordering, so increasing a term at even depth decreases the value. If one sequence ends before the other, treat the missing term as \(\infty\).
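
A Python sketch (assuming both inputs are in canonical form, i.e. the last term of a finite expansion exceeds 1, so equal numbers have identical term lists):

    from itertools import zip_longest
    from math import inf

    def cf_cmp(a, b):
        """Return -1, 0, or 1 as a <, ==, > b; missing terms count as infinity."""
        for i, (x, y) in enumerate(zip_longest(a, b, fillvalue=inf), start=1):
            if x != y:
                sign = -1 if i % 2 == 0 else 1  # flip at even (1-indexed) positions
                return sign if x > y else -sign
        return 0

    assert cf_cmp([1, 2], [1, 3]) > 0  # 3/2 > 4/3
    assert cf_cmp([1], [1, 2]) < 0     # 1 < 3/2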

Basic binary operations on continued fractions are tougher, but we find a way. For now, let us restrict ourselves to operations between a continued fraction and an integer, such as adding or multiplying by an integer. These are special cases of:

Homographic Functions

A homographic function, or Möbius transformation, is a function of the form

\[ f(x) = \frac{a x + b}{c x + d} \]

We’ve seen the “magic table” method for computing convergents. With a small tweak, we can compute \(f(\alpha)\) for a homographic function with integer coefficients and any \(\alpha = [a_1;a_2,...]\).

Normally we set \(p_{-1} = 0, p_0 = 1\) and \(q_{-1} = 1, q_0 = 0\). If we instead start the \(p\) row with \(b, a\), induction shows the recurrence yields \(a p_i + b q_i\) in place of \(p_i\); similarly, starting the \(q\) row with \(d, c\) yields \(c p_i + d q_i\). Thus if we initialize the two rows of the table with \(b, a\) and \(d, c\) rather than \(0, 1\) and \(1, 0\), we compute the sequence of approximations \( \frac{a (p_i / q_i) + b}{c (p_i / q_i) + d} \) to \( \frac{a \alpha + b}{c \alpha + d} \), where \(\alpha = [a_1; a_2, ...]\).

For example, suppose we wish to compute \(\frac{2\sqrt{2}+3}{5\sqrt{2}+1}\). Then we start our table as follows:

\[ \begin{matrix} & & 1 & 2 & 2 & 2 & ... \\ 3 & 2 & \\ 1 & 5 & \end{matrix} \]

and compute as usual:

\[ \begin{matrix} & & 1 & 2 & 2 & ... \\ 3 & 2 & 5 & 12 & 29 & ... \\ 1 & 5 & 6 & 17 & 40 & ... \end{matrix} \]

giving the sequence \(5/6, 12/17, 29/40, ...\), which converges to \(\frac{2\sqrt{2}+3}{5\sqrt{2}+1} = \frac{17 + 13\sqrt{2}}{49} \approx 0.7221\).
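
In code, the tweak is a one-line change to the table's initial columns. A minimal Python sketch (function names are my own) reproduces the example above, with exact fractions courtesy of the standard fractions module:

    from fractions import Fraction
    from itertools import islice

    def homographic(a, b, c, d, terms):
        """Yield approximations to (a*x + b)/(c*x + d), where x = [t1; t2, ...]."""
        n0, n1 = b, a  # numerator row, initialized with b, a
        d0, d1 = d, c  # denominator row, initialized with d, c
        for t in terms:
            n0, n1 = n1, t * n1 + n0
            d0, d1 = d1, t * d1 + d0
            yield Fraction(n1, d1)

    def sqrt2_terms():  # sqrt(2) = [1; 2, 2, 2, ...]
        yield 1
        while True:
            yield 2

    # (2*sqrt(2) + 3)/(5*sqrt(2) + 1):
    print(list(islice(homographic(2, 3, 5, 1, sqrt2_terms()), 3)))
    # [Fraction(5, 6), Fraction(12, 17), Fraction(29, 40)]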

Homographic functions are easily inverted: if

\[ y = \frac{a x + b}{c x + d} \]

then

\[ x = \frac{-d y + b}{c y - a} \]
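
To check, clear the denominator and solve for \(x\):

\[ y(c x + d) = a x + b \implies x(c y - a) = b - d y \implies x = \frac{-d y + b}{c y - a}. \]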

Ben Lynn blynn@cs.stanford.edu