Computing With Continued Fractions
Ultimately, regardless of the nature of the numbers involved, we must represent them with a finite number of bits. Once we leave the comfort of $\mathbb{Z}$, we have several options:

Fixed-point arithmetic: Plain integers, only scaled.

Rationals: Well-suited for some domains, and for a cost, our results are always exact. Sadly insufficient for many problems: even $\sqrt{2}$ cannot be represented.

Floating-point arithmetic: A common choice. Easy to understand, but error analysis is annoying. Also shackled to base 2: even 1/3 cannot be represented exactly.

In some applications we can, for example, represent $a + b\pi$ as $(a, b)$, which is exact and may suit the problem at hand. But let’s assume at some stage we need to mix oil and water.
How about continued fraction representation? Some features:

All rationals exactly represented.

With a little state, we can iteratively generate more precision. For example, $\sqrt{2} = [1;2,2,2,...]$ outputs 1 the first time and then 2 every subsequent time we ask for more precision. With floating-point, we would have to redo the entire computation at higher precision to find the next significant bit of $\sqrt{2}$.

For a wide variety of rational, algebraic and transcendental numbers, this is as close as we can get to an exact representation. With the above scheme, at all times, we have given results to some precision and have stored enough state to arbitrarily increase the precision of our results without “redoing from start”.

Thread-friendly: operations are like Unix pipes that only need some of the input terms to generate the next output term.
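To make the incremental-precision point concrete, here is a small Python sketch (the names `sqrt2_terms` and `convergents` are my own, not from any library): a generator yields terms of $\sqrt{2}$ on demand, and the standard convergent recurrence turns them into ever-better rational approximations without restarting.

```python
from fractions import Fraction
from itertools import islice

def sqrt2_terms():
    """Yield the continued fraction terms of sqrt(2) = [1; 2, 2, 2, ...]."""
    yield 1
    while True:
        yield 2

def convergents(terms):
    """Yield successive convergents p_i/q_i of a continued fraction,
    using the recurrence p_i = a_i*p_{i-1} + p_{i-2} (likewise for q)."""
    p_prev, p = 0, 1   # p_{-1}, p_0
    q_prev, q = 1, 0   # q_{-1}, q_0
    for a in terms:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        yield Fraction(p, q)

for c in islice(convergents(sqrt2_terms()), 5):
    print(c)  # 1, 3/2, 7/5, 17/12, 41/29
```

Each extra term asked of the generator refines the previous answer; nothing is recomputed from scratch.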
Gosper also describes a related option: continued logarithms, which I may describe later.
Algorithms for Continued Fractions
Representing numbers is half the problem. We must also devise efficient algorithms for computing on them.
Taking the reciprocal of a continued fraction is trivial: if $a_1 = 0$, then remove this term; otherwise, insert $0$ at the beginning of the sequence.
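A minimal sketch in Python, assuming the continued fraction is stored as a list of terms:

```python
def invert(cf):
    """Reciprocal of a continued fraction given as a list of terms:
    if the leading term is 0, drop it; otherwise prepend a 0."""
    return cf[1:] if cf[0] == 0 else [0] + cf

# 3/2 = [1; 2], so 2/3 = [0; 1, 2], and inverting again recovers [1; 2]:
print(invert([1, 2]))     # [0, 1, 2]
print(invert([0, 1, 2]))  # [1, 2]
```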
Comparison of two continued fractions $[a_1; a_2, ...]$ and $[b_1; b_2, ...]$ is performed by comparing $a_i$ and $b_i$, where $i$ is the smallest index with $a_i \ne b_i$. Flip the result if $i$ is even. If one sequence ends before the other, then treat the missing term as $\infty$.
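This comparison rule can be sketched as follows (a hypothetical helper, not from any library): pad the shorter sequence with infinity, find the first differing term, and flip the answer at even 1-indexed positions.

```python
from itertools import zip_longest

def cf_cmp(a_terms, b_terms):
    """Compare two continued fractions given as term iterables.
    Returns -1, 0, or 1. Missing terms count as infinity, and the
    comparison flips at even positions (1-indexed)."""
    INF = float('inf')
    for i, (a, b) in enumerate(
            zip_longest(a_terms, b_terms, fillvalue=INF), start=1):
        if a != b:
            result = -1 if a < b else 1
            return -result if i % 2 == 0 else result
    return 0  # all terms matched: the numbers are equal

# 7/5 = [1; 2, 2] vs 3/2 = [1; 2]: they differ at i = 3 (odd),
# where 2 < infinity, so 7/5 < 3/2.
print(cf_cmp([1, 2, 2], [1, 2]))  # -1
```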
Basic binary operations on continued fractions are tougher, but we find a way. For now, let us restrict ourselves to operations between a continued fraction and an integer, such as multiplication or addition by an integer. These are special cases of:
Homographic Functions
A homographic function, or Möbius transformation, is a function of the form
$$f(x) = \frac{ax + b}{cx + d}.$$
We’ve seen the “magic table” method for computing convergents. With a small tweak, we can compute $f(\alpha)$ for a homographic function with integer coefficients and any $\alpha = [a_1;a_2,...]$.
Normally we set $p_{-1} = 0, p_0 = 1$. If we start instead with $b, a$, induction shows the recurrence gives us $a p_i + b q_i$ in place of $p_i$. Thus if we initialize the two rows of the table with $b, a$ and $d, c$ rather than $0, 1$ and $1, 0$, we can compute the sequence of approximations $\frac{a (p_i / q_i) + b}{c (p_i / q_i) + d}$ to $\frac{a \alpha + b}{c \alpha + d}$, where $\alpha = [a_1; a_2, ...]$.
For example, suppose we wish to compute $\frac{2\sqrt{2}+3}{5\sqrt{2}+1}$, where $\sqrt{2} = [1;2,2,...]$. Then we start our table as follows:
$$\begin{array}{cc|cccc} & & 1 & 2 & 2 & \cdots \\ \hline 3 & 2 & & & & \\ 1 & 5 & & & & \end{array}$$
and compute as usual:
$$\begin{array}{cc|cccc} & & 1 & 2 & 2 & \cdots \\ \hline 3 & 2 & 5 & 12 & 29 & \cdots \\ 1 & 5 & 6 & 17 & 40 & \cdots \end{array}$$
giving the sequence $5/6, 12/17, 29/40, ...$.
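The tweaked table method can be sketched in Python (function names are my own): seeding the rows with $b, a$ and $d, c$ instead of $0, 1$ and $1, 0$ reproduces the sequence above.

```python
from fractions import Fraction
from itertools import islice

def homographic(a, b, c, d, terms):
    """Yield approximations to (a*x + b) / (c*x + d), where
    x = [a_1; a_2, ...] is given as an iterable of terms.
    Same recurrence as for convergents, but the two table rows
    start with (b, a) and (d, c) rather than (0, 1) and (1, 0)."""
    p_prev, p = b, a
    q_prev, q = d, c
    for t in terms:
        p_prev, p = p, t * p + p_prev
        q_prev, q = q, t * q + q_prev
        yield Fraction(p, q)

def sqrt2_terms():
    """sqrt(2) = [1; 2, 2, 2, ...]."""
    yield 1
    while True:
        yield 2

# (2*sqrt(2) + 3) / (5*sqrt(2) + 1):
print(list(islice(homographic(2, 3, 5, 1, sqrt2_terms()), 3)))
# [Fraction(5, 6), Fraction(12, 17), Fraction(29, 40)]
```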
Homographic functions are easily inverted: if
$$y = \frac{ax + b}{cx + d}$$
then
$$x = \frac{dy - b}{-cy + a}.$$
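As a quick sanity check of the inversion formula (an illustrative sketch, not from the original), exact rational arithmetic confirms that applying $f$ and then its inverse recovers the input:

```python
from fractions import Fraction

def f(x, a, b, c, d):
    return (a * x + b) / (c * x + d)

def f_inv(y, a, b, c, d):
    # Inverse of a homographic function: swap a and d, negate b and c.
    return (d * y - b) / (-c * y + a)

a, b, c, d = 2, 3, 5, 1
x = Fraction(7, 4)
y = f(x, a, b, c, d)
print(f_inv(y, a, b, c, d) == x)  # True
```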