CORDIC in C and Javascript

 How CORDIC works
How CORDIC trig works, courtesy of Wikipedia.

CORDIC is a family of fast algorithms for calculating transcendental functions using only table lookup, addition, and bit shifting. Here I take up Volder's original scheme from 1959 for calculating sines and cosines quickly (CORDIC stands for COordinate Rotation DIgital Computer). My original article from 1992, The CORDIC Method for Faster sin and cos Calculations, holds up reasonably well, except for the Borland graphics (nice pre-Windows, but long gone). Converting to Visual C requires only one change to the core routines, related to moving from 16-bit to 32-bit integers (see comment in code). I've bundled up the source files for anyone who wants to take them into a Visual C Win32 application, and here is a minimal C implementation. Wikipedia has a first-class article on this subject with lucid explanations and many pertinent references. Volder himself published an illuminating retrospective in 2000, explaining that the initial motivation was digitizing the navigation system of Convair's B-58 jet bomber (Volder worked for Convair).

Hilbert Proves that e Is Transcendental

 David Hilbert in 1912
David Hilbert, 1862-1943 (photo 1912)

It's easy to prove that \( e \) is irrational, but takes more work to prove that it's transcendental, meaning not the root of any equation:

\[ \begin{equation}{a_0 + a_1 x + \cdots + a_{n-1} x^{n-1} + a_n x^n = 0, \hspace{16pt} a_i \in \mathbb{Z}.}\tag{1} \end{equation} \]

Put otherwise, it is never the case that \( a_n e^n + a_{n-1} e^{n-1} + \cdots + a_1 e + a_0 = 0 \; \) for any choice of integers \( a_0, a_1, \cdots, a_{n-1}, a_n \). We already have the result for \( n = 0 \) since \( e \) is not an integer, and for \( n = 1 \) since \( e \) is not the root of any equation \( bx - a = 0 \) for integers \( a \) and \( b \) (simply restating that \( e \) is irrational). In 1840, Liouville proved that \( e \) is not the root of a quadratic equation with integer coefficients, but the final object is to prove that \( e \) never satisfies \( (1) \) for any choice of integers \( a_i \) and any \( n = 0, 1, 2, 3, 4, \ldots \).

Liouville Proves That e Is Not the Root of a Quadratic Equation

 Liouville on e

More properly, \( e \) is not the root of a quadratic equation with integer coefficients, which is the same as saying rational coefficients, because denominators can be cleared. This is sometimes stated as \( e \) is not a quadratic irrationality. Liouville proved the theorem in his journal in 1840.[1] It was a step towards proving that \( e \) is transcendental, meaning that it is not the root of any polynomial equation with integer coefficients. Apparently Liouville regarded it as such[2], but couldn't push through the general proof. Hermite was the first to prove that \( e \) is transcendental in 1873.[3] Hilbert simplified the proof[4], among others. But let's get back to the quadratic case and Liouville's proof. Here it is in its entirety, translated into English:

Fourier's Proof that e Is Irrational

 de Stainville's Melanges

Janot de Stainville ascribes this proof to Joseph Fourier.[1] Start with:

\[ \begin{align*}
e &= \sum_{n=0}^\infty{1 \over n!}\\
&= 1 + {1 \over 1!} + {1 \over 2!} + {1 \over 3!} + {1 \over 4!} + \cdots\\
&\lt 1 + 1 + {1 \over 2} + {1 \over 2^2} + {1 \over 2^3} + \cdots\\
&= 3.
\end{align*} \]

The reason for the inequality is that each factorial is greater than a product of as many twos as there are factors in the factorial, discounting the one; for example, \( 4! = 1 \cdot 2 \cdot 3 \cdot 4 \gt 1 \cdot 2 \cdot 2 \cdot 2 = 2^3. \) The final step results from summing the geometric series. It follows that \( 2 \lt e \lt 3 \) and in particular, that \( e \) is not a whole number.
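A quick numeric check of the series, not part of the original argument: summing even a dozen terms of \( \sum 1/n! \) in C already pins the value between \( 2 \) and \( 3 \).

```c
#include <assert.h>
#include <math.h>

/* e_partial_sum: 1/0! + 1/1! + ... + 1/nmax!, the truncated series for e. */
static double e_partial_sum(int nmax)
{
    double sum = 0.0, term = 1.0;        /* term holds 1/n! */
    for (int n = 0; n <= nmax; n++) {
        sum += term;
        term /= (double)(n + 1);         /* 1/n! -> 1/(n+1)! */
    }
    return sum;
}
```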

John Hidinger of the 8th Wisconsin Volunteer Infantry

 John Hidinger 1864
John Hidinger (Sep 11, 1839 — May 7, 1883). Photo 1864.

Sometimes mom talked about her grandfather's being in the Civil War and how he was captured and spent time in Confederate prisons. It was fascinating, especially considering that she was born and raised in Saskatchewan and was a naturalized American citizen. Her American roots are deep, going back to the founding of New Haven, Connecticut in the 1630s, but all that turned up later. Her grandfather John Hidinger was from Iowa, though he was born in Saxony and came to the United States as a boy (the name was probably Heidinger in the old country). Imagine my surprise, rooting through the archives of the Wisconsin Historical Society, to find that he served with the 8th Wisconsin Volunteer Infantry, the Eagle Regiment, so denoted because of their mascot, the eagle Old Abe, a sensation who occupied the Wisconsin state capitol for many years after the war, and whose replica overlooks the Wisconsin Assembly to this day. It tickled me as an anti-war protester on the University of Wisconsin campus circa 1970, because the right-wingers in the legislature would always call us out-of-town agitators. More like coming home, I often thought when passing the Civil War cannon at Camp Randall where great-grandfather Hidinger trained, now a little park on the UW campus.

Eratosthenes Measures the Earth

 Eratosthenes Measures the Earth
Eratosthenes Measures the Earth

Like Aristotle[1] before him and in keeping with virtually all educated opinion in ancient Greece[2], Eratosthenes (c. 276 BC - c. 194 BC) assumed the earth was spherical. He set out to measure its size using a sound method that has stood the test of time. His results were good (about \( 5.4\% \) too high) and no one did better for over a thousand years. His near-correct result was ignored well into the modern era, including by Columbus. Snellius and Picard finally nailed down the earth's circumference in the seventeenth century, using Eratosthenes' experimental design and taking advantage of the advances in mathematics and instrumentation that had accrued over the intervening \( 1900 \) years.
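The arithmetic of the method fits in a line of C. The figures in the test below (a \( 7.2^\circ \) difference in the sun's angle and \( 5000 \) stadia between Syene and Alexandria) are the commonly reported classical values, not taken from this article.

```c
#include <assert.h>
#include <math.h>

/* Eratosthenes' method: if two sites on the same meridian lie `arc`
 * apart and the noon sun's angle differs by `angle_deg` degrees between
 * them, that arc is angle_deg/360 of the full circle, so the whole
 * circumference is arc * 360 / angle_deg. */
static double circumference(double arc, double angle_deg)
{
    return arc * 360.0 / angle_deg;
}
```

With the classical inputs this gives the famous \( 250{,}000 \) stadia.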

Chinese Remainder Theorem Calculator

 Chinese Remainder Theorem example

A system of three congruences is shown on the right, but start with the simpler system:
\[ \begin{align*}
x &\equiv 1 \hspace{-.6em} {\pmod{2}}\\
x &\equiv 2 \hspace{-.6em} {\pmod{3}}.
\end{align*} \]
Values congruent \( \text{mod} \; 6 \) are certainly congruent \( \text{mod} \; 2 \) and \( \text{mod} \; 3, \) so in looking for an \( x \) solving both congruences simultaneously, it suffices to consider congruence classes \( \text{mod} \; 6 \) and in particular their smallest positive residues, namely \( 0, 1, 2, 3, 4, 5. \) We're seeking an odd number among those \( 6 \) since \( x \equiv 1{\pmod{2}}, \) one that is also congruent to \( 2 \; \text{mod} \; 3. \) \( x = 1 \) won't do, since \( 1 \equiv 1{\pmod{3}}, \) and neither will \( x = 3, \) since \( 3 \equiv 0{\pmod{3}}. \) \( x = 5 \) is the solution, since it satisfies both congruences, and it is the only solution \( \text{mod} \; 6. \)
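That search over the residues \( \text{mod} \; 6 \) mechanizes directly. Here's a brute-force sketch in C (assuming coprime moduli, which guarantees a unique solution \( \text{mod} \; m_1 m_2 \)); the article's calculator is a separate, fancier affair.

```c
#include <assert.h>

/* crt2: smallest non-negative x with x = r1 (mod m1) and x = r2 (mod m2),
 * found by checking each residue 0 .. m1*m2 - 1 in turn.
 * Assumes m1 and m2 are coprime; returns -1 if no solution is found. */
static long crt2(long r1, long m1, long r2, long m2)
{
    for (long x = 0; x < m1 * m2; x++)
        if (x % m1 == r1 % m1 && x % m2 == r2 % m2)
            return x;
    return -1;
}
```

For the example above, `crt2(1, 2, 2, 3)` returns \( 5 \).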

Euler Proves Fermat's Theorem on the Sum of Two Squares

 Novi Commentarii Front for 1758-1759

The theorem in question is:

If \( p \) is an odd prime with \( p \equiv 1 \; (\text{mod} \; 4), \) then \( p \) is the sum of two squares. \( \hspace{16pt} (1) \)

The "only if" direction of the full characterization is easy, because for all natural numbers \( n, \; n^2 \equiv 0, 1 \; (\text{mod } 4), \) so \( n^2 + m^2 \equiv 0, 1, 2 \; (\text{mod} \; 4) \) and a sum of two squares cannot be congruent to \( 3 \; (\text{mod} \; 4). \) Obviously \( 2 = 1^2 + 1^2 \) as well. The Brahmagupta-Fibonacci identity assures that a product of sums of two squares is itself a sum of two squares:

\[ \begin{equation}{(a^{2}+b^{2})(c^{2}+d^{2})=(ac+bd)^{2}+(ad-bc)^{2}.}\tag{2} \end{equation} \]
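Identity \( (2) \) is easy to confirm mechanically; here's a one-liner in C that checks it for particular values (it holds identically, of course, as expanding both sides shows).

```c
#include <assert.h>

/* bf_identity_holds: check (a^2+b^2)(c^2+d^2) = (ac+bd)^2 + (ad-bc)^2
 * for one set of integer values; returns 1 if the two sides agree. */
static int bf_identity_holds(long a, long b, long c, long d)
{
    long lhs = (a * a + b * b) * (c * c + d * d);
    long rhs = (a * c + b * d) * (a * c + b * d)
             + (a * d - b * c) * (a * d - b * c);
    return lhs == rhs;
}
```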

Fermat Sum of Two Squares Calculator

 Sums of two squares
Integers under \( 40 \) that are the sum of two squares. \( \color{red}{25} \) is the first that is the sum of two squares in two ways.

\( 5 = 1^2 + 2^2 \) is the sum of two squares, \( 3 \) is not. Dealing with whole numbers only, including \( 0, \) it's a bit of a riddle coming up with the criterion distinguishing the two situations. Based on empirical investigations, mathematicians in the \( 17^\text{th} \) century found the key. According to Leonard Dickson[1]:

A. Girard (Dec 9, 1632) had already made a determination of the numbers expressible as a sum of two integral squares: every square, every prime \( 4n + 1, \) a product formed of such numbers, and the double of one of the foregoing.

The part about primes \( p \equiv 1 \; (\text{mod} \; 4) \) is central, because a product of two numbers each of which is the sum of two squares is itself the sum of two squares. Since \( 5 = 1^2 + 2^2 \) and \( 13 = 2^2 + 3^2, \) for example, \( 65 = 5 \cdot 13 \) is also the sum of two squares: \( 65 = 4^2 + 7^2. \) In fact there is a second representation: \( 65 = 1^2 + 8^2, \) and the number of representations is of interest too (this exact example is from Diophantus).

The Calculus of Finite Differences

 Differences of the cubes
Progressive differences of the first few cubes.

Write down the first few cubes, then put their differences \( \Delta \) in the second column, the differences of those differences \( \Delta^2 \) in the third column, and so on. Remarkably, \( \Delta^3 = 6 \), and that is true for any contiguous sequence of cubes (obviously \( \Delta^4 = 0 \)). Do that with the fourth powers and you find that \( \Delta^4 = 24, \) and in general for contiguous \( n^{th} \) powers, \( \Delta^n = n!. \) The key to unlocking this mystery is the Calculus of Finite Differences, out of vogue now apparently, but with a hallowed history going back to Newton and before and studied in depth by George Boole in 1860.[1] His book can still be read with profit, as can C. H. Richardson's little text from 1954.[2]
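A few lines of C reproduce the table. This sketch differences the sequence in place, one column of the table per pass:

```c
#include <assert.h>

/* diff_once: replace a[0..len-2] with the successive differences
 * a[i+1] - a[i]; the last slot is no longer meaningful. */
static void diff_once(long a[], int len)
{
    for (int i = 0; i < len - 1; i++)
        a[i] = a[i + 1] - a[i];
}

/* nth_difference: apply diff_once k times, so a[0..len-k-1] ends up
 * holding the k-th differences of the original sequence. */
static void nth_difference(long a[], int len, int k)
{
    for (int j = 0; j < k; j++)
        diff_once(a, len - j);
}
```

Running it on the first eight cubes and differencing three times leaves a row of sixes, just as in the table.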

Euler used \( \Delta^n x^n = n! \) in 1755 to prove the two squares theorem. Boole and those following him employed the term "calculus" advisedly, many theorems in the finite case matching similar ones in the familiar infinitesimal calculus. That stands to reason, considering all the two have in common; it's just that now \( \Delta x = 1. \)
