What I aim to do in this post is explain why I think the de-politicisation of mathematics has been a bad thing. I have written before about how, prior to the Second World War, prominent mathematicians were active in public life, a phenomenon unheard of today. It might be helpful to clarify what I mean by ‘political’: from the Greek πολιτικός (politikos), meaning “of, for, or relating to citizens”, it does not imply ideological. For me, the first indication that the de-politicisation of mathematics is a bad thing comes from where I, instinctively, see its root cause: in the traumas of the collapse of the attempt to establish the logical foundations of mathematics and of the First World War. I think it is difficult for good to flow from tragedy. The ultimate indication that the de-politicisation was damaging is that I think it contributed to the Financial Crisis of 2007–2009 by creating a myth of the infallibility of mathematics.
I start with David Hilbert, who was born in 1862 in Königsberg, the Prussian city of Immanuel Kant and Euler’s bridges, and who completed his doctorate there in 1885. Ten years later he was appointed professor of mathematics at the University of Göttingen, the centre of German mathematics, and in 1899 he published Grundlagen der Geometrie (‘Foundations of Geometry’), which placed Euclidean geometry on a basis of 21 axioms. At the time a number of people, including the British mathematician and philosopher Bertrand Russell, were working on establishing the ‘pure truth’ of mathematics by placing it on a firm logical basis, and Hilbert’s work on laying the foundations of geometry was part of this broader effort to put mathematics into a clear, consistent framework.
In 1900, at the second International Congress of Mathematicians, held in Paris, Hilbert presented an optimistic vision of how mathematics would develop and set out 10 (later extended to 23) problems to be solved by mathematicians. In particular he made the claim that “In mathematics there is no ignorabimus” [6, p 445] (‘ignorabimus’ means ‘we shall not know’ and sets a boundary on knowledge).
Problems with the logical foundations of mathematics emerged in the first decade of the twentieth century with Russell’s Paradox, and Hilbert’s response was to create the ‘Hilbert Program’ in 1920: the search for a finite set of consistent axioms at the root of (existing) mathematics, an Elements or Grundlagen der Geometrie for all of mathematics. In a paper he presented in 1917, Axiomatisches Denken (‘Axiomatic Thinking’), he had argued that at the heart of the many fields mathematics is concerned with lie a few axioms:
If we consider a particular theory more closely, we always see that a few distinguished propositions of the field of knowledge underlie the construction of the framework of concepts, and these propositions then suffice by themselves for the construction, in accordance with logical principles, of the entire framework. … The procedure of the axiomatic method, as it is expressed here, amounts to a deepening of the foundations of the individual domains of knowledge — a deepening that is necessary for every edifice that one wishes to expand and build higher while preserving its stability. … If the theory of a field of knowledge—that is, the framework of concepts that represents it—is to serve its purpose of orienting and ordering, then it must satisfy two requirements above all: first it should give us an overview of the independence and dependence of the propositions of the theory; second, it should give us a guarantee of the consistency of all the propositions of the theory. In particular, the axioms of each theory are to be examined from these two points of view. [3, pp 1108–1109]
Hilbert was arguing that the axiomatic method was needed at that time for the same reason that Cauchy, who had imposed rigour on mathematics following the French Revolution, had been needed a hundred years or so earlier: mathematics had developed so quickly in the sixty years after Riemann that it needed to stop and take stock. Hilbert’s ‘formalism’ provides a mechanism for generating sound mathematics,
mathematical proofs are thus seen as a vehicle for making truth flow from axioms to theorems via logical deductions as sanctioned by rules of logic [11, p 292]
I get the sense that in the years after his nation's defeat Hilbert was struggling to make sense of a rapidly changing world. Although he focused on the mathematics, the changes in mathematics had, by and large, occurred before 1908, and I think his search for order in mathematics was a projection of his search for order in society. Isn't cod psychology marvellous?
The process involved in axiomatisation turns mathematics, in Hilbert's own words, into “a game played according to certain simple rules with meaningless marks on paper”. To more intuitive mathematicians, like Poincaré, it turned mathematics into a machine, sucking the inspiration out of it: “the assumptions were put in at one end, while the theorems came out at the other, like the legendary Chicago machine where the pigs go in alive and come out transformed into hams and sausages”.
What is more, the language of the formalists, ‘symbolic logic’, becomes so rarefied that only mathematicians can understand it, and this raises a philosophical question, whether there exists a better language in which to express mathematics, and a psychological one, why we believe in the language. Poincaré’s intuitive approach to mathematics, that it is exact and true as a consequence of the human intellect, was taken up by the Dutch mathematician Bertus Brouwer. Brouwer saw the roots of intuitionism in the rational response to the collapse of Kant’s ‘neo-Platonic’ idea that some concepts, such as the axioms of mathematics, exist independently of experience, an idea that fell apart in the 1850s when it became apparent that Euclidean geometry was not the be-all and end-all of geometry [3, pp 1171–1172]. The ‘neo-intuitionists’
can never feel assured of the exactness of a mathematical theory by such guarantees as the proof of its being non-contradictory, the possibility of defining its concepts by a finite number of words, or the practical certainty that it will never lead to a misunderstanding in human relations. [1, p 86]
A consequence of this was a rejection of the Law of Excluded Middle. Laplace had said that a mathematical statement must be written in a way that meant it could be proved to be true or false, the ‘Law of the Excluded Middle’, where the middle ground, ambiguity, was out of bounds. Technically, a statement is either true or its negation is true: either ‘this cat is red’ or ‘this cat is not red’ is true. The foundations of mathematics had been rocked late in the nineteenth century by Cantor’s introduction of infinite sets, which are critical because a continuum is an infinite set. Brouwer argued that a statement like “there is a sequence of 100 9’s in the decimal expansion of pi” (which, being an irrational number, has an infinite number of digits) cannot be proved to be false: it can be shown to be true if the sequence is found, but failing to find it in any finite search proves nothing, since checking the full expansion would take an infinitely long time. If in mathematics we cannot rely on the Law of Excluded Middle, and we cannot rely on the truth or falsity of mathematical statements that apply to continuous phenomena, then how can we rely on any scientific statement being true or false? Hilbert’s reaction was “Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists”. And Hilbert was right: we can admire the honesty of the Intuitionists in the same way we can admire the simplicity of the Amish, but we wouldn’t like to live like them.
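To make the disagreement concrete, here is a minimal formal sketch in my own notation (not Brouwer’s or Hilbert’s). The Law of Excluded Middle asserts that for every proposition $P$,
$$P \lor \neg P,$$
and classically this licenses indirect proof, $\neg\neg P \implies P$. Take $P$ to be ‘the decimal expansion of $\pi$ contains a run of one hundred consecutive 9s’. The intuitionist declines to assert $P \lor \neg P$ because neither disjunct comes with a method of verification: $P$ could be verified by exhibiting such a run, but no finite search of the infinite expansion can ever verify $\neg P$.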
In the end, it was a Platonist, Kurt Gödel, who believed in God and that mathematics exists independently of human thought, who showed that Hilbert’s Program could not be completed, first in a lecture in Hilbert’s home town of Königsberg and then in a formal paper published in 1931, ‘On Formally Undecidable Propositions of Principia Mathematica and Related Systems’.
France’s most prolific mathematician of the twentieth century was Nicolas Bourbaki. France did not protect its intellectuals during the First World War, on the principle that all citizens are equal, and as a consequence it lost a generation of mathematicians; by the mid-1930s most of the surviving lecturers were about to retire. In 1934, dissatisfied with the quality of the textbooks being used by this ‘old guard’ [14, pp 104–105], Bourbaki decided to take matters in hand and do for post-Cantor, post-Riemann mathematics what Euclid’s The Elements or Fibonacci’s Liber Abaci had done centuries earlier. He would produce a definitive series of textbooks for modern mathematics, starting with Set Theory, followed by Algebra, Topology, Functions, Vector Spaces and finishing with Integration. This had to be done because the innovations of the late nineteenth century had been so profound and, just as Hilbert had realised, mathematics needed to be placed on a stable and coherent framework; if it was not, it would lose its status as ‘the art of certain knowledge’.
But Bourbaki worked in the aftermath of Gödel, and so Hilbert’s axiomatic approach seemed vain. Bourbaki’s approach was to start from extremely abstract generalisations, and only when these had been discussed in detail would special cases, the real-world applications of mathematics, be introduced. As Roy Weintraub has written,
Bourbaki came to uphold the primacy of the pure over the applied, the rigorous over the intuitive, the essential over the frivolous [14, p 102]
Bourbaki was the twentieth-century successor to Plato and Kant in that the focus was on the generalisations, the Idealised Forms of mathematics. The influence of Bourbaki would reach its peak in the fifties and sixties, not just in his native France but, significantly, in the United States.
It is difficult to give any biographical details of Bourbaki for the simple reason that he did not exist. Nicolas Bourbaki is the collective pseudonym for a group of French mathematicians, including Henri Cartan, Claude Chevalley, Jean Delsarte, Jean Dieudonné, Szolem Mandelbrojt (uncle of Benoît Mandelbrot), René de Possel and André Weil (brother of the Socialist-Christian philosopher Simone), who were associated with the École Normale Supérieure. Generally coming from the educated upper-middle class, their fathers university lecturers rather than schoolteachers, they were almost caricatures of French intellectuals (apart from de Possel, who left the group early). They came up with their plan to rejuvenate mathematics at the Café Capoulade, on the corner of the Boulevard Saint-Michel and the Rue Soufflot in Paris’ Latin Quarter. The plan was to operate as a closed ‘secret society’ and produce textbooks employing very precise language and strict formats [14, p 105]. The process of producing the texts was collaborative, and therefore slow and cumbersome. An individual would write a chapter, which was ‘read’ to the whole group, usually at a summer ‘congress’. The group would then tear apart the draft, and the process was repeated until the chapter was unanimously approved. The first book appeared in 1939, with the twenty-one volumes of Part I, “The Fundamental Structures of Analysis”, being completed in the late 1950s. By this time mathematics was outpacing Bourbaki, who took 8–12 years to produce each book, and through the 1960s the group imploded.
What I find interesting is that the people who made up the core of Bourbaki came from families who owed their position to the French state, while the movement grew and thrived during a time of incredible political turmoil in France. Between 1920 and the German occupation in 1940 France had over 30 different governments, the longest being Daladier’s 1938 administration, which lasted almost two years. Then there was the traumatic occupation, followed by another twenty-seven governments between 1945 and the establishment of the Fifth Republic in 1958 in the aftermath of a ‘coup’ that recalled de Gaulle from political exile. The First World War was cataclysmic for France and the country took forty years to start a full recovery. As a perfidious Englishman I must admit to admiration for what the country has achieved in recovering from the mess it was in in 1962 (when it withdrew from Algeria and focused on Metropolitan reconstruction).
Ian Stewart, whose first book describing mathematics to non-mathematicians, Concepts of Modern Mathematics, was an exposition of Bourbaki mathematics (as explained in the Preface to the Dover Publications edition), notes that the Bourbaki approach was doomed to failure:
It was a great technique, but it had its limitations—the main one being that it tended to ignore unusual special cases, odd little results about just one example. It was a bit like a general theory of curves that, because it considered circles to be just another special case of much more complicated things, hadn’t appreciated the importance of π. [13, p 497]
and,
because of this obsession with the abstract over the practical,
by the end of the sixties, mathematics and physics departments were no longer on speaking terms. [13, p 496]
In 1992 the Nobel Prize-winning theoretical physicist Murray Gell-Mann explained what had happened:
[Bourbaki teaches] a kind of neo-Kantian philosophy in which the laws of nature are nothing but Kantian “categories” used by the human mind to grasp reality …that the structures and objects of mathematics have a reality, that they exist in a sense, somewhere beyond space and time. [5, p 7]
This said, Gell-Mann paints a more optimistic picture:
abstract mathematics reached out in so many directions and became so seemingly abstruse that it appeared to have left physics far behind, so that among all the new structures being explored by mathematicians, the fraction that would even be of any interest to science would be so small as not to make it worth the time of a scientist to study them.
But all that has changed in the last decade or two. It has turned out that the apparent divergence of pure mathematics from science was partly an illusion produced by obscurantist, ultra-rigorous language used by mathematicians, especially those of a Bourbaki persuasion, and their reluctance to write up non–trivial examples in explicit detail. When demystified, large chunks of modern mathematics turn out to be connected with physics and other sciences, and these chunks are mostly in or near the most prestigious parts of mathematics, such as differential topology, where geometry, algebra and analysis come together. Pure mathematics and science are finally being reunited and mercifully, the Bourbaki plague is dying out. [5, p 7]
What I see in both the Hilbert and the Bourbaki approaches to mathematics, as well as in the attitudes of mathematicians who emerged in post-Stalinist Soviet science, is a desire to escape the turbulent political realities that surrounded them. In response to the turmoil around them, mathematicians seem to have wished to create their own Castalia, a place free of politics or economics where cerebral mathematicians could focus on playing their 'game', as described in Hermann Hesse’s The Glass Bead Game.
Why this is significant is encapsulated in parts of both the US Financial Crisis Inquiry Commission report and the report of the British Parliamentary Commission on Banking Standards ([4, p 44], [10, para 60, vol 2]): today economic authority is based on mathematics. Financial economics produced sophisticated mathematical theorems related to pricing and risk management in the derivative markets, and simply by existing as mathematics they were legitimate. There was no room for debate or discussion because mathematics, based on Hilbert’s formal deduction and Bourbaki’s idealised abstractions, and written in obscure notation, was infallible. It did not seem to matter that there were discussions and concerns within mathematics; economics accepted the authority of the theorems and their models simply because they were mathematical.
I have not yet come across what I feel is a credible reason why economics has become so enamoured with formalist mathematics. Lawson [7, Ch 10] argues it is because mathematics confers authority, but gives no explanation as to why mathematics should have this power. Lawson challenges the power, but one senses that he feels mathematics exists independently of human thought, and that this mathematics is irrelevant to social phenomena; he does not seem to think that mathematics could be a product of economic intuition, not just physical intuition. Weintraub [14] offers a narrative of how mathematical ideas crossed over into economics, without giving what I think is a compelling argument as to why mathematical formalism became so significant. Mirowski argues that ‘Cyborg science’ did not spontaneously emerge but was “constructed by a new breed of science managers” [9, p 15], and it was these managers who promoted the mathematisation of economics. While the emergence of ‘Cyborg science’ as a dominant theme of post-war science may well have been constructed, there is something spontaneous in Wiener, Turing and Kolmogorov, the leading twentieth-century mathematicians of the US, UK and USSR, all independently having a youthful interest in biology, becoming mathematicians, making contributions in probability and going on to work in computation.
My own belief is that the critical process was the interaction between (particularly American) economists and mathematicians working on problems of Operations Research in the Second World War. At the outbreak of the war in 1939 the vast majority of soldiers and politicians would not have thought mathematicians had much to offer the war effort; the attitude among the military is still often that “war is a human activity that cannot be reduced to mathematical formulae” [12, p 3]. However, operational researchers had laid the foundations for Britain’s survival in the dark days of 1940–1941, Turing and his code-breakers had enabled the Allies to keep one step ahead of the Nazis, and Allied scientists had ensured that the scarce resources of men and arms were effectively allocated to achieving different objectives. Alan Bullock argues that blitzkrieg was the only military tactic available to the Nazis, since they had neither the capability nor the capacity to manage more complex operations [2, pp 588–594]. By the end of the war it could be argued that victory had been won as much through the efforts of awkward engineers as square-jawed commandos, and General Eisenhower, the Supreme Commander of Allied Forces in Europe and later Chief of Staff of the U.S. Army, was calling for more scientists to support the military [12, p 64].
It is hardly surprising that in the post-war years economists embraced mathematics. Pre-war generals would have made the same sort of objections to mathematics that economists had. However, after the war the success of Operations Research could be compared to the failure of economists in the lead-up to, and in the aftermath of, the Great Depression that had dominated the decade before the war. But possibly more significant than this theory is the fact that many post-war economists had worked alongside mathematicians on military and government policy problems during the war. Samuelson, who was instrumental in introducing stochastic calculus into economics, had worked in Wiener’s lab at MIT addressing gunnery fire-control problems during the war [8, pp 63–64].
Personally I feel prominent economists became overawed by the successes of mathematics, through, for example, observing mathematicians’ abilities to transform apparently random sequences of letters into meaningful messages, something that must have seemed magical and resonant with the economic problem of interpreting data. The problem is that codes are generated deterministically, but the same cannot be said of economic data. I believe it was a synthesis of the post-First World War traumas of mathematics and the post-Second World War optimism and confidence of economics that created the explosion of mathematical economics in the 1950s and 1960s.
Today mathematical finance is possibly the most abstract branch of applied mathematics: while mathematical physics is complex, it is still connected to sensible phenomena and amenable to intuition, and this state seems to be atypical of the relationship between mathematics and economics, which is concerned with more abstract phenomena. The situation is not irrecoverable, but, as I have said before, it requires a much tighter integration of non-mathematical economists and un-economic mathematicians. I look with envy at my colleagues carrying out research in biology using the same mathematical technology I use but, as one said recently, their papers do not need to prove a theorem, and clear results are admired, not technical brilliance.
References
[1] L. E. J. Brouwer. Intuitionism and formalism. Bulletin of the American Mathematical Society, 20(2):81–96, 1913.
[3] W. B. Ewald. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, volume II. Oxford University Press, 1996.
[4] FCIC. The Financial Crisis Inquiry Report. Technical report, The National Commission on the Causes of the Financial and Economic Crisis in the United States, 2011.
[5] M. Gell-Mann. Nature conformable to herself. Bulletin of the Santa Fe Institute, 7(1):7–8, 1992.
[6] D. Hilbert. Mathematical problems. Bulletin of the American Mathematical Society, 8(10):437–479, 1902.
[9] P. Mirowski. Machine dreams: Economic agents as cyborgs. History of Political Economy, 29(1):13–40, 1998.
[10] PCBS. Changing Banking for Good. Technical report, The Parliamentary Commission on Banking Standards, 2013.
[11] Y. Rav. A critique of a formalist-mechanist version of the justification of arguments in mathematicians’ proof practices. Philosophia Mathematica, 15(3):291–320, 2007.
[12] C. R. Schrader. History of Operations Research in the United States Army, Volume I: 1942–1962. U.S. Government Printing Office, 2006.
[13] I. Stewart. Bye-Bye Bourbaki: Paradigm shifts in mathematics. The Mathematical Gazette, 79(486):496–498, 1995.
[14] E. R. Weintraub. How Economics Became a Mathematical Science. Duke University Press, 2002.