Converting to Decimal

Suppose we want to output a decimal representation of \(\alpha = [a_1; a_2, ...]\). Then we start computing convergents via a table, with one change: whenever the last two convergents have the same integer part \(n\), we output \(n\), subtract it from both, and multiply both numerators by 10. We demonstrate this on \(\pi\), using the space-saving "/"-notation.

\[
\begin{array}{cc|cccccc}
    &     & 3   & 7    & 15 & 1 & 292 & \cdots \\
\hline
0/1 & 1/0 & 3/1 & 22/7 &    &   &     &
\end{array}
\]
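Each new table entry follows from the usual convergent recurrence \(p_k = a_k p_{k-1} + p_{k-2}\), \(q_k = a_k q_{k-1} + q_{k-2}\), applied to the two entries to its left; for example,

\[ \frac{22}{7} = \frac{7 \cdot 3 + 1}{7 \cdot 1 + 0}. \]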

The last two convergents both floor to 3, so we output 3, subtract it from both convergents to get \(0/1, 1/7\), and multiply the numerators by 10:

Output: \(3. ...\)

\[
\begin{array}{cc|cccccc}
    &     & 3   & 7    & 15 & 1 & 292 & \cdots \\
\hline
0/1 & 1/0 & 0/1 & 10/7 &    &   &     &
\end{array}
\]

We’ve deleted the old values for clarity, but from now on we shall preserve them, writing each column’s successive values beneath one another. Note that a single new convergent can yield several digits: once \(150/106\) appears, the pair \(10/7, 150/106\) gives the digit 1, and the rescaled pair \(30/7, 440/106\) immediately gives 4; the next convergent, \(180/113\), is then built from the current values \(20/7\) and \(160/106\), and the pair \(160/106, 180/113\) yields another 1. Continuing for a few more steps:

Output: \( 3.141...\)

\[
\begin{array}{cc|cccccc}
    &     & 3   & 7    & 15      & 1       & 292 & \cdots \\
\hline
0/1 & 1/0 & 0/1 & 10/7 & 150/106 & 180/113 &     & \\
    &     &     & 30/7 & 440/106 &         &     & \\
    &     &     & 20/7 & 160/106 &         &     &
\end{array}
\]
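This procedure is easy to mechanize. Below is a minimal Python sketch of the table method (the function name and driver values are our own, not from these notes): it keeps only the current values of the last two convergents and, whenever they share an integer part, emits it, subtracts it from both, and scales both numerators by 10.

```python
def cf_to_decimal(partial_quotients, max_terms):
    """Emit the integer part and decimal digits of the simple continued
    fraction [a1; a2, a3, ...], following the table method above."""
    out = []                    # first entry is the integer part
    p0, q0 = 0, 1               # seed convergent 0/1
    p1, q1 = 1, 0               # seed convergent 1/0
    for a in partial_quotients:
        # Standard convergent recurrence, applied to the columns' current values.
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        # A single new convergent may release several digits in a row.
        while q0 != 0 and p0 // q0 == p1 // q1:
            n = p1 // q1
            out.append(n)
            p0, p1 = 10 * (p0 - n * q0), 10 * (p1 - n * q1)
            if len(out) >= max_terms:
                return out
    return out

# pi = [3; 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, ...]
print(cf_to_decimal([3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1], 12))
# [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]
```

Correctness rests on the bracketing property of consecutive convergents: the operations above apply the same map \(x \mapsto 10(x - n)\) to every column, so the last two entries always bracket the rescaled remainder of \(\alpha\), and a digit is emitted only when it is forced (for irrational \(\alpha\) there is no boundary case to worry about).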

Conversion From Decimal

Converting a decimal expansion to a nonsimple continued fraction is immediate, as the example of \(\pi\) shows (first taking the digits one at a time, then three at a time):

\[ \begin{aligned} \pi &= 3 + \frac{1}{0 + \frac{10}{1 + \frac{1}{0 + \frac{10}{4 + ...}}}} \\ &= 3 + \frac{1}{0 + \frac{1000}{141 + \frac{1}{0 + \frac{1000}{592 + ...}}}} \end{aligned} \]
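This digit-by-digit construction is also easy to mechanize. The following Python sketch (the function names and sample digit string are our own, for illustration) builds the nonsimple fraction above from a terminating decimal and folds it back into an exact rational as a check.

```python
from fractions import Fraction

def decimal_to_nonsimple_cf(s):
    """Turn a terminating decimal string "d0.d1d2..." into the nonsimple
    continued fraction d0 + 1/(0 + 10/(d1 + 1/(0 + 10/(d2 + ...)))),
    returned as partial denominators [d0, 0, d1, 0, d2, ...]
    and partial numerators [1, 10, 1, 10, ...]."""
    int_part, frac_part = s.split(".")
    dens, nums = [int(int_part)], []
    for d in frac_part:
        nums += [1, 10]
        dens += [0, int(d)]
    return dens, nums

def fold(dens, nums):
    """Evaluate the finite nonsimple continued fraction from the inside out."""
    value = Fraction(dens[-1])  # assumes the final digit is nonzero
    for d, n in zip(reversed(dens[:-1]), reversed(nums)):
        value = d + Fraction(n) / value
    return value

dens, nums = decimal_to_nonsimple_cf("3.14159")
print(dens)              # [3, 0, 1, 0, 4, 0, 1, 0, 5, 0, 9]
print(fold(dens, nums))  # 314159/100000
```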

Ben Lynn blynn@cs.stanford.edu