History of Calculus: Newton v Leibniz

I was listening to a podcast about the bitter rivalry between Isaac Newton and Gottfried Leibniz the other day. Although I’ve introduced calculus for the first time to a number of students over the years, I’ve never really stopped to consider the historical significance and controversies of this innovation.

In fact, what we teach as calculus today was not fully understood for many years. Although their approaches now seem closely related, Newton and Leibniz were trying to address different problems (and the concept of a function was not yet in use at the time).

Teachers and students are very familiar with questions of the form: if the equation of a curve is y=x^3-2x+1, find the gradient at the point (2,5). Although not posed in this form, the problem of finding an exact gradient (or drawing an exact tangent) to a curve at a particular point was a famous mathematical problem of the time and something the Ancient Greeks had been interested in. Newton claimed to have been the first to find a method for calculating the exact gradient.

Newton claimed he first developed his ideas about differential calculus in 1666, through fluxions and fluents, which were not considered the same as Leibniz’s work. A “fluent” was a quantity varying with time (we would now call it a function of time); for example, the displacement of a body in motion is a fluent. Newton’s work on differential calculus was concerned with finding the “fluxion” of a fluent: the rate of change of the fluent at a particular point. For example, the fluxion of a displacement fluent gives the velocity. Newton’s dot notation is still in use today, \dot{x}=\tfrac{dx}{dt} and \ddot{x}=\tfrac{d^2x}{dt^2} (as well as lesser-used notation: \dot{\ddot{x}}, anybody?)

Leibniz was the first to publish work on calculus, in 1684. He considered variables x and y running through sequences of values infinitely close to each other. His work on infinitesimals and much of his notation are familiar to us today. Leibniz recognised the need for an operator and, in his work on integration, introduced the elongated ‘S’ (for a sum) as well as notation for infinitesimals such as ‘dx’.

Leibniz called his work on integrals “calculus summatorius”. He first used the elongated ‘S’ for integration on 29 October 1675. As you can see, much of his work and notation is recognisably close to what we use today.

Photo from Stephen Wolfram’s blog

The use of infinitesimals caused problems, and many at the time were not happy with their use. Newton would have argued along the lines of

(x+o)^n=x^n + nox^{n-1}+\frac{n(n-1)}{2}o^2 x^{n-2}+...

Then, when comparing the change in x^n with the change o in x, divide through by o and let the terms still containing o vanish.
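
Written out in modern notation, a sketch of the argument (rather than Newton’s own presentation) is:

\displaystyle \frac{(x+o)^n-x^n}{o}=nx^{n-1}+\frac{n(n-1)}{2}\,o\,x^{n-2}+...

and discarding the terms that still contain o leaves the familiar result that the gradient of y=x^n is nx^{n-1}.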

Whilst recognising that this work led to correct results, Bishop George Berkeley was unimpressed by the lack of rigour in the use of infinitesimals.

“They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them ghosts of departed quantities?”

George Berkeley (1734) in Section XXXV of “The Analyst: A Discourse Addressed to an Infidel Mathematician”

It would be another 100 years before the work of Cauchy, Weierstrass and Riemann formally overcame these concerns by redefining calculus in terms of limits.

Theory behind calculus (Pt. II)

In my previous post I extolled the virtue of adding more of the underlying theory of calculus to the reformed Maths A Level Subject Requirements:

  • “Understand and use (…) differentiation from first principles”
  • “Understand and use integration as the limit of a sum” and “estimating the approximate area under a curve and limits that it must lie between”

Last time I looked at differentiation from first principles. Now comes the turn of integration.

Riemann integrals and Riemann sums

Previously part of Further Maths A Level for some exam boards, a loose understanding of Riemann sums, through approximating areas with rectangles, is now required of all A Level Maths students.

Screenshot of a dynamic GeoGebra applet illustrating Riemann sums

If, like the example above, the function is always increasing (or always decreasing), then you can easily construct rectangles of equal width whose heights are taken at one end of each strip, giving a lower bound for the area. Taking the heights at the other end of each strip instead gives an upper bound for the area.1

For an increasing function, using rectangles of width h (where h divides b-a exactly), the bounds are:

\displaystyle \int_a^b f(x) \, dx > h \times \left[f(a)+f(a+h)+f(a+2h)+...+f(b-2h)+f(b-h)\right] and

\displaystyle \int_a^b f(x) \, dx < h \times \left[f(a+h)+f(a+2h)+...+f(b-h)+f(b)\right]

The Riemann integral can then be defined as the common limit L, because both h \times \left[f(a)+f(a+h)+f(a+2h)+...+f(b-2h)+f(b-h)\right]\to L and h \times \left[f(a+h)+f(a+2h)+...+f(b-h)+f(b)\right] \to L as h\to 0.

Sometimes it is easy to evaluate these limits because the sums reduce to polynomials in \tfrac{1}{n}. For n rectangles of equal width, the example below gives the definite integral of y=x^2 from 0 to 1.

\displaystyle \frac{1}{n}\left[0^2+\left(\frac{1}{n}\right)^2+\left(\frac{2}{n}\right)^2+...+\left(\frac{n-2}{n}\right)^2+\left(\frac{n-1}{n}\right)^2\right]<\int_0^1 x^2 dx<\frac{1}{n}\left[\left(\frac{1}{n}\right)^2+\left(\frac{2}{n}\right)^2+...+\left(\frac{n-1}{n}\right)^2+1^2\right]

\displaystyle \frac{1}{n}\sum_{r=0}^{n-1} \left(\frac{r}{n}\right)^2<\int_0^1 x^2 dx<\frac{1}{n}\sum_{r=1}^{n} \left(\frac{r}{n}\right)^2

\displaystyle \frac{1}{n^3}\frac{1}{6}n(n-1)(2n-1)<\int_0^1 x^2 dx<\frac{1}{n^3}\frac{1}{6}n(n+1)(2n+1)

\displaystyle \frac{1}{6}\left(1-\frac{1}{n}\right)\left(2-\frac{1}{n}\right)<\int_0^1 x^2 dx<\frac{1}{6}\left(1+\frac{1}{n}\right)\left(2+\frac{1}{n}\right)

so, as both bounds tend to \displaystyle \frac{1}{3} when \displaystyle n \to \infty, we have \displaystyle \int_0^1 x^2 \, dx = \frac{1}{3}
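
A quick numerical check of this squeeze (a minimal Python sketch of my own, not part of the original argument; the function and interval are just the example above):

    # Left- and right-endpoint sums for an increasing function on [a, b],
    # using n rectangles of equal width h = (b - a) / n.
    def riemann_bounds(f, a, b, n):
        h = (b - a) / n
        lower = h * sum(f(a + i * h) for i in range(n))         # f(a), ..., f(b - h)
        upper = h * sum(f(a + i * h) for i in range(1, n + 1))  # f(a + h), ..., f(b)
        return lower, upper

    for n in (10, 100, 1000):
        lower, upper = riemann_bounds(lambda x: x ** 2, 0, 1, n)
        print(f"n = {n:4d}: {lower:.6f} < integral < {upper:.6f}")
    # Both bounds close in on 1/3 as n grows.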

One could do similar things with other polynomials; however, to obtain an exact answer one must know how to evaluate \displaystyle \sum_{r=1}^{n} r^k in order to take the limit. This approach also may not work for other functions that can be integrated analytically but whose sums \displaystyle \sum_{r=1}^{n} f\left(\frac{r}{n}\right) have no closed form. For such integrals we may only be able to evaluate the limit numerically.
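
For example, using the standard fact that \displaystyle \sum_{r=1}^{n} r^k is a polynomial in n of degree k+1 with leading term \frac{n^{k+1}}{k+1}, the same squeeze gives, for any positive integer k,

\displaystyle \frac{1}{n^{k+1}}\sum_{r=1}^{n} r^k \to \frac{1}{k+1} as n \to \infty, so \displaystyle \int_0^1 x^k \, dx = \frac{1}{k+1}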

Fermat’s method of integration

Pierre de Fermat used a different construction: an infinite set of rectangles whose widths are in geometric progression, which allowed him to compute the summation, or ‘integral’, of y=x^k exactly for any k\in {\mathbb Q}^{+}.

Screenshot of a GeoGebra applet illustrating Fermat’s method of integration
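
In modern notation, a sketch of the computation for the area under y=x^k between 0 and a runs as follows (taking k to be a positive integer here; Fermat handled general positive rational k with a further substitution). Choose 0<r<1 and place the rectangle edges at a, ar, ar^2, ... so that the rectangle on [ar^{i+1}, ar^i] has height (ar^i)^k and width ar^i(1-r). The total area of the rectangles is then

\displaystyle \sum_{i=0}^{\infty} (ar^i)^k \, ar^i(1-r) = a^{k+1}(1-r)\sum_{i=0}^{\infty} r^{i(k+1)} = \frac{a^{k+1}(1-r)}{1-r^{k+1}} = \frac{a^{k+1}}{1+r+r^2+...+r^k}

Letting r \to 1, so that the rectangles become ever thinner, gives \displaystyle \int_0^a x^k \, dx = \frac{a^{k+1}}{k+1}.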

Fermat was the first to consider the area underneath a curve as an infinite series. Archimedes had found the area inside a parabola geometrically in the 3rd Century BC. Bonaventura Cavalieri made strides towards modern integral calculus in the mid 17th Century with his quadrature formula evaluating \int x^k \, dx for k=1,2,\dots,9. Evangelista Torricelli extended this work, and John Wallis then used mathematical induction in his work on series and rational powers to find \int x^k \, dx for k\in {\mathbb Q}^{+}. Fermat later did this using an infinite series. However, it was Newton and Leibniz who related integration to differentiation, which started the field of calculus as we know it.

Footnote

    1. If a function switches between increasing and decreasing then we just need to choose the rectangle heights so that one set of rectangles lies entirely below the curve, forming a lower bound, and the other set lies entirely above the curve, forming an upper bound. Within each strip these heights are usually described as the ‘inf’ (infimum) and ‘sup’ (supremum) of the function on that interval.