Mathematics

Euler Proves Fermat's Theorem on the Sum of Two Squares

Novi Commentarii front matter for 1758-1759.

The theorem in question is:

An odd prime \( p \) is the sum of two squares if and only if \( p \equiv 1 \; (\text{mod} \; 4). \) \( (1) \)

The "only if" direction is easy: for every natural number \( n, \) \( n^2 \equiv 0 \) or \( 1 \; (\text{mod} \; 4), \) so \( n^2 + m^2 \equiv 0, 1, \) or \( 2 \; (\text{mod} \; 4), \) and a sum of two squares can never be congruent to \( 3 \; (\text{mod} \; 4). \) (The one even prime is no exception: \( 2 = 1^2 + 1^2. \)) The Brahmagupta-Fibonacci identity ensures that a product of sums of two squares is itself a sum of two squares:

\[ \begin{equation}{(a^{2}+b^{2})(c^{2}+d^{2})=(ac+bd)^{2}+(ad-bc)^{2}.}\tag{2} \end{equation} \]
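Identity (2) is easy to spot-check by machine. Here is a minimal Python sketch (the function name compose is my own, not anything of Euler's) that verifies the identity over a small range and combines two representations into one of the product:

```python
from itertools import product

def compose(r1, r2):
    """Combine a^2 + b^2 and c^2 + d^2 into a representation of the product,
    via identity (2): (ac + bd)^2 + (ad - bc)^2."""
    (a, b), (c, d) = r1, r2
    return (abs(a * c + b * d), abs(a * d - b * c))

# Spot-check identity (2) over a small range of values.
for a, b, c, d in product(range(1, 8), repeat=4):
    x, y = compose((a, b), (c, d))
    assert (a * a + b * b) * (c * c + d * d) == x * x + y * y

print(compose((1, 2), (2, 3)))  # 5 = 1^2 + 2^2 and 13 = 2^2 + 3^2 give (8, 1): 65 = 8^2 + 1^2
```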

Fermat Sum of Two Squares Calculator

Integers under \( 40 \) that are the sum of two squares. \( \color{red}{25} \) is the first that is the sum of two squares in two ways.

\( 5 = 1^2 + 2^2 \) is the sum of two squares; \( 3 \) is not. Dealing with whole numbers only, including \( 0, \) it's a bit of a riddle coming up with the criterion distinguishing the two situations. Based on empirical investigations, mathematicians in the \( 17^\text{th} \) century found the key. According to Leonard Dickson[1]:

A. Girard (Dec 9, 1632) had already made a determination of the numbers expressible as a sum of two integral squares: every square, every prime \( 4n + 1, \) a product formed of such numbers, and the double of one of the foregoing.

The part about primes \( p \equiv 1 \; (\text{mod} \; 4) \) is central, because a product of two numbers each of which is the sum of two squares is itself the sum of two squares. Since \( 5 = 1^2 + 2^2 \) and \( 13 = 2^2 + 3^2, \) for example, \( 65 = 5 \cdot 13 \) is also the sum of two squares: \( 65 = 4^2 + 7^2. \) In fact there is a second representation: \( 65 = 1^2 + 8^2, \) and the number of representations is of interest too (this exact example is from Diophantus).
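A brute-force search makes the counting concrete. The little sketch below (my own naming; zero is allowed as a square, as above) lists every representation of \( n \) as a sum of two squares:

```python
from math import isqrt

def two_square_representations(n):
    """All pairs (a, b) with 0 <= a <= b and a^2 + b^2 == n."""
    reps = []
    for a in range(isqrt(n // 2) + 1):
        b_squared = n - a * a
        b = isqrt(b_squared)
        if b * b == b_squared:
            reps.append((a, b))
    return reps

print(two_square_representations(65))  # [(1, 8), (4, 7)] -- Diophantus's two representations
print(two_square_representations(25))  # [(0, 5), (3, 4)] -- first integer representable two ways
print(two_square_representations(3))   # [] -- 3 is not a sum of two squares
```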

The Calculus of Finite Differences

Progressive differences of the first few cubes.

Write down the first few cubes, then put their differences \( \Delta \) in the second column, the differences of those differences \( \Delta^2 \) in the third column, and so on. Remarkably, \( \Delta^3 = 6 \), and that is true for any contiguous sequence of cubes (obviously \( \Delta^4 = 0 \)). Do that with the fourth powers and you find that \( \Delta^4 = 24, \) and in general for contiguous \( n^{th} \) powers, \( \Delta^n = n!. \) The key to unlocking this mystery is the Calculus of Finite Differences, out of vogue now apparently, but with a hallowed history going back to Newton and before and studied in depth by George Boole in 1860.[1] His book can still be read with profit, as can C. H. Richardson's little text from 1954.[2]
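A few lines of Python (the function name is mine) reproduce the difference table and confirm that the cubes bottom out at the constant \( 6 = 3!: \)

```python
def forward_differences(values):
    """Rows of successive forward differences: row k is Delta^k of the sequence."""
    rows = [list(values)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

cubes = [n ** 3 for n in range(7)]
for k, row in enumerate(forward_differences(cubes)):
    print(f"Delta^{k}: {row}")
# Delta^3 is the constant row [6, 6, 6, 6] (= 3!) and Delta^4 onward is all zeros.
```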

Euler used \( \Delta^n x^n = n! \) in 1755 to prove the two squares theorem. Boole and those following him employed the term "calculus" advisedly: many theorems in the finite case match similar ones in the familiar infinitesimal calculus. That stands to reason, considering how much the two have in common; it's just that now \( \Delta x = 1. \)

Invariant Factor and Elementary Divisor Calculator

All Abelian groups of order 72.

The Fundamental Theorem of Finite Abelian Groups decisively characterizes the finite Abelian groups of a given order. Its remote origins go back to Gauss in the Disquisitiones Arithmeticae of 1801, and it was nailed down by Schering (1869) and by Frobenius and Stickelberger (1879)[1]:

Fundamental Theorem of Finite Abelian Groups

Let \( G \) be a finite Abelian Group of order \( n. \) Then: \[ \begin{equation}{G \cong \mathbb{Z}_{n_1} \times \mathbb{Z}_{n_2} \times \cdots \times \mathbb{Z}_{n_s},} \tag{1} \end{equation} \] where \( s \) and the \( n_i \) are the unique integers satisfying \( s \geq 1, n_i \geq 2 \) for all \( i, \) and \( n_{i+1} \; | \; n_i \) for \( 1 \leq i \leq s - 1. \) And also: \[ \begin{equation}{G \cong \mathbb{Z}_{p^{\beta_1}} \times \cdots \times \mathbb{Z}_{p^{\beta_t}} \times \cdots \times \mathbb{Z}_{q^{\gamma_1}} \times \cdots \times \mathbb{Z}_{q^{\gamma_u}},} \tag{2} \end{equation} \] for \( p \) and \( q \) and all the other primes dividing \( n, \) again in a unique way, where \( \sum \beta_i \) is the exponent of the greatest power of \( p \) dividing \( n, \) \( \sum \gamma_i \) is the exponent of the greatest power of \( q \) dividing \( n, \) and so on for all the other primes dividing \( n. \)
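The theorem reduces the count to partitions of the prime exponents, which is easy to automate. Here is a hedged Python sketch (all names are my own, and this is not the calculator itself) that lists the elementary divisor decompositions (2) for any order, \( 72 \) included:

```python
from itertools import product

def factorize(n):
    """Prime factorization {p: exponent} by trial division (fine for small n)."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def partitions(k, largest=None):
    """All partitions of k into non-increasing positive parts."""
    if k == 0:
        yield []
        return
    largest = k if largest is None else largest
    for part in range(min(k, largest), 0, -1):
        for rest in partitions(k - part, part):
            yield [part] + rest

def abelian_groups(n):
    """Elementary divisor decompositions of the abelian groups of order n."""
    choices = [[(p, lam) for lam in partitions(e)] for p, e in factorize(n).items()]
    for combo in product(*choices):
        yield " x ".join(f"Z_{p ** a}" for p, lam in combo for a in lam)

for g in abelian_groups(72):
    print(g)   # six groups: Z_8 x Z_9, Z_8 x Z_3 x Z_3, ..., Z_2 x Z_2 x Z_2 x Z_3 x Z_3
```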

Gabriel Cramer on Cramer's Rule

Cramer states Cramer's Rule — click image for original.
De l'évanouissement des inconnues
(On the Vanishing of Unknowns)
Appendix to Introduction à l'analyse des lignes courbes algébriques (1750)
(Introduction to the Analysis of Algebraic Curves)
by Gabriel Cramer

When a problem contains several unknowns whose relationships are so complicated that one is obliged to form several equations, then, to discover the values of the unknowns, one makes all of them vanish except one, which, combined only with known quantities, gives (if the problem is determinate) a final equation whose resolution reveals this first unknown, and then, by its means, all the others.
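In modern dress, Cramer's recipe for making the unknowns vanish is the ratio-of-determinants rule \( x_i = \det A_i / \det A, \) where \( A_i \) is the coefficient matrix with column \( i \) replaced by the right-hand side. A minimal Python sketch (my own helper names; cofactor expansion, so only sensible for small systems):

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cramer_solve(a, b):
    """Solve the square system a x = b (det(a) != 0) by Cramer's Rule:
    x_i = det(a with column i replaced by b) / det(a)."""
    d = det(a)
    return [det([row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(a)]) / d
            for i in range(len(a))]

# Example: 2x + y = 5, x + 3y = 10  has solution  x = 1, y = 3.
print(cramer_solve([[2, 1], [1, 3]], [5, 10]))
```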

Basic Ring Theory Exam

Sections from Dummit & Foote being tested on the midterm:
  • §7.5 — 7.6
  • §8.1 — 8.3
  • §9.1 — 9.5
  • §10.1 — 10.3

Here are the midterm and final exams for Math 542, Modern Algebra, at the University of Wisconsin-Madison in the spring semester of 2015-2016, Professor Paul Terwilliger officiating. It is an undergraduate class, junior or senior level, for (mostly) math majors. The class takes up basic ring theory, following on Math 541, which is mostly group theory. The text, followed pretty closely, is Abstract Algebra, 3rd ed., by David S. Dummit and Richard M. Foote (Wiley, 2004) — D&F — widely used, it appears. It certainly is comprehensive, with many examples and a great set of exercises, and is an impressive work in its own right: well-organized, demanding, and thorough. Typo-free as well; I haven't found a single one.

Robert E. Lee Moore -- Topologist and Racist

Robert E. Lee Moore (1882-1974)

It's a statement when someone names their child after Robert E. Lee, a man who did his best to destroy the United States in order to preserve slavery. Robert E. Lee was lionized more in death than in life, a paragon of the Lost Cause, the glorious if doomed rebellion of a brave people who wanted nothing but to be left alone, crushed by the soulless and brutal industrial juggernaut (Sherman's march to the Sea!). It's the big lie, propagated for 150 years to defend the indefensible. What a wretched history of oppression, assiduously rebuilt over the generations by people like Moore, Sr. and his illustrious and vicious son Robert E. Lee Moore. The Compromise of 1877, peonage, disenfranchisement, lynching, Plessy v. Ferguson, Jim Crow, the Dunning School's false narrative of Reconstruction. Read the old classics by W. E. B. Du Bois, Eric Foner, and C. Vann Woodward (himself a son of the South), among others, if you still doubt the long-standing construction and reconstruction of anti-black racism in this country down through the generations since 1865.

Riesz Proves the Riesz Representation Theorem

Frigyes Riesz.

The Riesz Representation Theorem is a foundation stone of 20th century functional analysis. Since generalized almost beyond recognition, the theorem was originally proved by Frigyes (Frédéric) Riesz in 1909 for \( C[0,1] \), the continuous real-valued functions on \( [0,1] \):

If \( \mathcal{A} \) is a bounded linear functional on \( C[0,1] \), then there is a function \( \alpha \) of bounded variation on \( [0,1] \) such that for all \( f \in C[0,1] \): \[ \mathcal{A}[f(x)] = {\int_0^1 f(x)d\alpha(x).} \hskip{60pt} (1) \]
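To make (1) concrete (the example and the code are mine, not Riesz's): the evaluation functional \( \mathcal{A}[f(x)] = f(1/2) \) is represented by the bounded-variation function \( \alpha \) jumping from \( 0 \) to \( 1 \) at \( x = 1/2, \) and a Riemann-Stieltjes sum against that \( \alpha \) recovers \( f(1/2) \) numerically:

```python
import math

def stieltjes_sum(f, alpha, n=1000):
    """Riemann-Stieltjes sum of f against alpha over [0, 1] on a uniform partition."""
    total = 0.0
    for k in range(1, n + 1):
        x0, x1 = (k - 1) / n, k / n
        total += f((x0 + x1) / 2) * (alpha(x1) - alpha(x0))
    return total

alpha = lambda x: 1.0 if x >= 0.5 else 0.0   # unit jump at 1/2; of bounded variation
print(stieltjes_sum(math.cos, alpha))        # ~ cos(1/2), i.e. A[f] = f(1/2) for f = cos
```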

In this article, I propose to retrace Riesz's original proof in Sur les opérations fonctionnelles linéaires[1] in 1909, augmenting with his discussion in Sur certains systèmes singuliers d'équations intégrales[2] in 1911 where appropriate.

Riesz's Sur certains systèmes singuliers d'équations intégrales

F. Riesz proves the Riesz Representation Theorem (1911) — click image for original.
Sur certains systèmes singuliers d'équations intégrales
(On some noteworthy systems of integral equations)
by Frédéric Riesz, à Budapest.
Annales scientifiques de l'É.N.S., 3rd series, volume 28 (1911), pp. 33-62.

In what follows, the functions of bounded variation will play a leading role. We know the importance of this class of functions defined by M. Jordan, whose most remarkable properties become almost obvious from a single statement: every real function of bounded variation is the difference of two bounded, never-decreasing functions.

Riesz's Sur les opérations fonctionnelles linéaires

F. Riesz announces the Riesz Representation Theorem (1909) — click image for original.
Sur les opérations fonctionnelles linéaires
(On linear functional operations)
by Frédéric Riesz
Comptes rendus hebdomadaires des séances de l'Académie des sciences, 149 (1909), pp. 974–977.

To define what is meant by a linear operation, it is necessary to specify the domain of the functional. We consider the totality \( \Omega \) of all continuous real functions between two fixed numbers, for example between \( 0 \) and \( 1 \); for this class, the notion of limit is defined by uniform convergence. The functional operation \( \text{A}[f(x)] \), which associates to each element of \( \Omega \) a corresponding real number, will be called continuous if, whenever \( f(x) \) is the limit of the \( f_i(x) \), \( \text{A}(f_i) \) tends to \( \text{A}(f) \). Such a distributive and continuous operation is said to be linear. It is easy to show that such an operation is bounded, that is to say, there is a constant \( M_A \) such that for every element \( f(x) \) we have

\[ \begin{equation}{|\text{A}[f(x)]| \leq M_A \cdot \max |f(x)|.} \tag{1} \end{equation} \]
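For a concrete instance (mine, not from Riesz's note): the integration functional \( \text{A}[f(x)] = \int_0^1 f(x)\,dx \) is distributive and continuous, and (1) holds for it with \( M_A = 1, \) since

\[ |\text{A}[f(x)]| = \left| \int_0^1 f(x)\,dx \right| \leq \int_0^1 |f(x)|\,dx \leq \max |f(x)|. \]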
