Friday, 4 April 2014

The Legitimacy of High Frequency Trading

Mark Thoma brought my attention to a post by Dean Baker, High Speed Trading and Slow-Witted Economic Policy.  High Frequency Trading, or more generically Computer Based Trading, is proving problematic because it is a general term covering a variety of different techniques, some of which appear uncontroversial, while others appear very dubious.

For example, a technique I would consider legitimate derives from Robert Almgren and Neil Chriss' work on optimal order execution: how do you structure a large trade so that it has minimal negative price impact and low transaction costs?  There are firms that now specialise in performing these trades on behalf of institutions and I don't think there is an issue with how they innovate in order to generate profits.
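
To make the contrast with order stuffing concrete, here is a minimal sketch of the kind of liquidation schedule the Almgren-Chriss framework produces, assuming linear temporary impact and the standard closed-form trajectory for a risk-averse trader; the function names and parameter values are purely illustrative.

```python
import math

def almgren_chriss_schedule(total_shares, n_periods, sigma, eta, risk_aversion):
    """Remaining-holdings trajectory for a risk-averse liquidation.

    A minimal sketch of the Almgren-Chriss closed-form solution with
    linear temporary impact; all parameter values are illustrative.
    """
    kappa = math.sqrt(risk_aversion * sigma ** 2 / eta)   # 'urgency' of the liquidation
    T = float(n_periods)
    holdings = [total_shares * math.sinh(kappa * (T - t)) / math.sinh(kappa * T)
                for t in range(n_periods + 1)]
    # the trades placed each period are the differences in holdings
    trades = [holdings[t] - holdings[t + 1] for t in range(n_periods)]
    return holdings, trades

holdings, trades = almgren_chriss_schedule(100_000, 10, sigma=0.02, eta=1e-6, risk_aversion=1e-6)
```

The point is simply that the profit here comes from executing a client's genuine order more carefully, not from flooding the book with orders one intends to withdraw.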

The technique that is most widely regarded as illegitimate is order, or quote, stuffing.  The technique involves placing orders and then, within a tenth of a second or less, cancelling them if they have not been executed.  I suspect this is the process Baker refers to that enables HFTs to 'front run' the market.  Baker regards the process as illegitimate with the argument that
The issue here is that people are earning large amounts of money by using sophisticated computers to beat the market. This is effectively a form of insider trading. Pure insider trading, for example trading based on the CEO giving advance knowledge of better than expected profits, is illegal. The reason is that it rewards people for doing nothing productive at the expense of honest investors.
 On the other hand, there are people who make large amounts of money by doing good research to get ahead of the market. ... The gains to the economy may not in all cases be equal to the private gains to these traders, but at least they are providing some service.
By contrast, the front-running high speed trader, like the inside trader, is providing no information to the market. They are causing the price of stocks to adjust milliseconds more quickly than would otherwise be the case. It is implausible that this can provide any benefit to the economy. This is simply siphoning off money at the expense of other actors in the market.
The problem I have with Baker's argument is that I do not think it is robust.  It starts by suggesting a link between insider trading and HFT.  I don't think this holds up.  When a trade is placed on an exchange, it becomes public information.  The HFTs are making their profits by responding more quickly to the information, not because they are working on private information.  Baker distinguishes one sort of 'research', traditional economic research, from another, novel research on computer networks and algorithms, and implies that traditional research has a legitimacy in market exchange that computer research does not.

Statements like "simply siphoning off money at the expense of other actors in the market" make me a bit uneasy because they create distinctions between 'legitimate' and 'illegitimate' activity without offering a clear basis for the distinction.  For me, the distinction Baker makes seems to rest on the intellectual background of the agents: economics or computer science.  I worry that the foundation of Baker's criticism is an affinity with institutional investors and a distaste for small-scale entrepreneurs.

Baker's solution of "A modest tax on financial transactions [that] would make this sort of rapid trading unprofitable" is, if my basic economic understanding is correct, a standard way incumbents create barriers to new entrants.  Wall Street, according to Jonathan Levy's Freaks of Fortune, has at least a hundred-year tradition of lobbying legislatures to protect its interests, and I think we should be wary of whether Wall Street's interests are aligned with those of the broader public.

The problem is somewhat more serious in the UK.  In 2012 the UK's Government Office for Science reviewed Computer Based Trading technologies and decided that, while acknowledging that order stuffing was dubious, they would not suggest inhibiting it.  The rationale was that the marketplace was a competitive arena and that traders would congregate at exchanges that enabled competition; i.e. for the UK to retain its position as a financial centre the UK government should not legislate on the issue.

The substantive question is whether I can come up with a more robust argument than Baker's, and I offer an argument at the bottom of this piece.

I have been critical of the Foresight report.  However, I have also been conscious that I could not coherently justify my objections to practices such as order stuffing.  This concern was related to my uneasiness about identifying the concept of reciprocity as being embedded in contemporary financial mathematics.  I come from a fairly orthodox background, and connecting mathematics and ethics has been a problem for me since I first identified a link around 2010.

For me, the intellectual resolution of the problem of linking mathematics and ethics comes from pragmatic philosophy.  Pragmatism is especially relevant to finance because it addresses the thorny issue of truth when we cannot rely on objectivity, neutrality and determinism, and because it acknowledges the role of ethics in science.  Specifically, by rejecting the ideology of the fact/value dichotomy, I claim that the principle of 'no arbitrage' in pricing contingent claims is infused with the moral concept of fairness.  This is all well and good, but the claim can be treated as a heuristic (as the Dutch Book argument is) or as a fact.  Based on the empirical evidence of the Ultimatum Game, I claim it is a fact that reciprocity is embedded in financial mathematics.  This raises the question of why reciprocity is important.
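
The sense in which 'no arbitrage' can be read as fairness is easy to illustrate with a toy one-period market.  The sketch below prices a contingent claim by replication; the two outcomes, the zero interest rate and the numbers are assumptions for illustration, not part of the argument above, and the point is only that any quote away from the replication cost lets one counterparty profit risklessly at the other's expense.

```python
def no_arbitrage_price(s0, s_up, s_down, payoff_up, payoff_down):
    """Price a contingent claim by replication in a one-period binomial market.

    A minimal illustration (risk-free rate taken as zero): a quote away from
    this price lets one side lock in a riskless profit at the other's expense,
    which is the sense in which 'no arbitrage' enforces a reciprocal exchange.
    """
    delta = (payoff_up - payoff_down) / (s_up - s_down)   # shares held to replicate the claim
    cash = payoff_down - delta * s_down                    # riskless cash position
    return delta * s0 + cash

# a claim paying 1 if the asset rises from 100 to 120, nothing if it falls to 80
print(no_arbitrage_price(100, 120, 80, 1, 0))   # -> 0.5
```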

As well as justifying the connection between ethics and mathematics, pragmatism provides an explanatory hypothesis.  One problem I grappled with was why the link between reciprocity and finance became obscured between the eighteenth and twenty-first centuries.  The explanation comes in the theories developed in Adorno and Horkheimer's Dialectic of Enlightenment or Polanyi's The Great Transformation, both published in 1944.  The Dialectic claims that the Enlightenment led to the objectification of nature and its mathematisation, which in turn leads to 'instrumental mindsets' that seek to optimally achieve predetermined ends in the context of an underlying need to control external events.  Jürgen Habermas responded to the Dialectic in Structural Transformation of the Public Sphere, where he argues that during the seventeenth and eighteenth centuries public spaces emerged, the public sphere, which facilitated rational discussion that sought the truth in support of the public good.  In the nineteenth century mass circulation mechanisms came to dominate the public sphere and these were controlled by private interests.  As a consequence, the public became consumers of news and information rather than creators of a consensus through engagement with information.  Having undertaken this analysis of the contemporary state of affairs, Habermas sought to describe how the ideal of the Enlightenment public sphere could be enacted in the more complex contemporary (pre-internet) society, and his Theory of Communicative Action was the result.

Central to Communicative Action is a rejection of the dominant philosophical paradigm, the 'philosophy of consciousness'.  This paradigm is rooted in Cartesian dualism, the separation of mind and body, subject and object; it is characterised by Foundationalism, the view that philosophy is required in order to demonstrate the validity of science and that the validity of science is based on empiricism; and it holds certain views specific to the social sciences, such as that society is based on individuals (atoms) interacting, so that society is posterior to individuals, and that society (a material, extending the physical metaphor) can be studied as a unitary whole, not as an aggregate of individuals.

The dominant paradigm sees language as being made up of statements that are either true or false, and complex statements are valid if they can be deduced from true primitive statements.  This approach is exemplified in the standard mathematical technique of axiom-theorem-proof.  Habermas replaces this paradigm with one that rests on a Pragmatic theory of meaning that shifts the focus from what language says (bears truth) to what it does.  Specifically, Habermas sees the function of language as being to enable different people to come to a shared understanding and achieve a consensus; this is what he defines as discourse.  Because discourse is based on making a claim, the claim being challenged and then justified, discourse needs to be governed by rules, or norms.  The most basic rules are logical and semantic; on top of these are norms governing procedure, such as sincerity and accountability; and finally there are norms to ensure that discourse is not subject to coercion or skewed by inequality.

I have come to the conclusion that markets are centres of communicative action enabled by the language of mathematics.  In this framework reciprocity is a norm of communication, but it is not the only norm.  Habermas emphasises the importance of sincerity in communication in general, and the implication is that it is required in markets.

It is on this basis that I believe we can identify order stuffing as illegitimate: it is insincere.  The difference between optimal order execution strategies, which earn their computer scientist experts money, and order stuffing is that an HFT engaged in order stuffing is not "sincere": it issues orders it intends to cancel immediately.  The antidote is not to impose an additional cost on transactions, which would not affect institutional investors but might hinder legitimate speculation and innovation, but to regulate the timing of order cancellations: order stuffing would not be possible if orders had to remain on the book for a few minutes.
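
Such a rule is simple to state.  Below is a minimal, illustrative sketch of a minimum resting-time check an exchange could apply before accepting a cancellation; the one-second constant and the names are assumptions for illustration only (the suggestion above is minutes rather than seconds).

```python
from dataclasses import dataclass

MIN_RESTING_SECONDS = 1.0   # illustrative; the post above suggests minutes

@dataclass
class Order:
    order_id: str
    placed_at: float   # exchange timestamp, in seconds

def may_cancel(order: Order, now: float) -> bool:
    """An order may only be cancelled once it has rested on the book.

    A sketch of the rule proposed above: quote stuffing relies on cancelling
    within fractions of a second, which this check forbids.
    """
    return (now - order.placed_at) >= MIN_RESTING_SECONDS
```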

Monday, 31 March 2014

Why do we take physicists seriously?

My undergraduate degree was in physics, at Imperial College, which is/was the centre of logical positivist applied physics: when students went to Oxford to do a PhD in physics we thought they were "dropping out".  Today most of my non-local academic interactions are with sociologists and theologians; the younger me would think I had dropped out, fallen through the floor and turned into a homeless drunk ranting incoherently and beyond help.  Things change.

I have recently had a couple of lunches with Paolo Quattrone, a finance professor, and Michael Northcott, a theologian.  The focus of our conversations has been around representing value (values).  My involvement is centred on my interest in the nature of mathematics.  Specifically, is financial mathematics a "truth-bearer" or is it a mechanism of discourse?  The dominant philosophical paradigm sees language as being made up of statements that are either true or false, and complex statements are valid if they can be deduced from true primitive statements. This approach is exemplified in the standard mathematical technique of axiom-theorem-proof. Habermas, in the Theory of Communicative Action, replaces this paradigm with one that rests on a Pragmatic theory of meaning that shifts the focus from what language says (true or false) to what it does. Specifically, Habermas sees the function of language as being to enable different people to come to a shared understanding and achieve a consensus; this is what he defines as discourse. Because discourse is based on making a claim, the claim being challenged and then justified, discourse needs to be governed by rules, or norms. The most basic rules are logical and semantic; on top of these are norms governing procedure, such as sincerity and accountability; and finally there are norms to ensure that discourse is not subject to coercion or skewed by inequality.  I have come to believe that reciprocity is important in financial mathematics because it is a norm that enables market discourse which seeks the truth (consensus) on value, rather than it determining what is true.

Paolo is interested in how company accounts are used.  My understanding of his position is that contemporary accounts are presented as a representation of truth, but their genesis was as focuses of reflection: you accounted for your actions. This was exemplified when I recently started reading Neal Stephenson's Baroque Cycle (a trilogy I strongly recommend - it covers some of the same themes as this blog but with more pirates and sex, what more could you want?) where Stephenson describes Isaac Newton writing down, accounting for, his sins one night.  When I first read this section it passed me by, but Paolo has enlightened me as to the depth of Stephenson's story.

When we were having lunch, Paolo and Michael were discussing the fact that while today accounts are "annual", the original accounts were an "open book": they never "closed" the account.  Michael is interested in this because it represents a theological conception of time that impacts on ethics.  Specifically, modern business practice, built on science, rests on the distinction between now and then, here and there, a distinction that does not exist for believers in a transcendental deity.

I was immediately interested in Michael's position, which motivates his interest in environmental issues, because it is very different from mine.  The topic that most interested me at school was entropy and how we know time moves forward: because things become disordered.  As a teenager I justified my messy bedroom to my mother as a consequence of the indubitable laws of physics.  Interest is charged because the lender is uncertain if the borrower will repay in the future: time exists in finance because there is uncertainty.  Believers in a transcendental deity see uncertainty as a subjective problem: the deity does not experience it.  I think this could explain why mathematical probability was developed by Augustinians (Calvinists/Jansenists) such as Pascal, Huygens, Bernoulli, Montmort, de Moivre and Bayes, and not by Anglicans (Newton) or Lutherans (Leibniz).  Augustinians believed that, like time and space, there was/is an absolute measure of chance.

While modern physics accommodates uncertainty, it does so in a statistical context.  For example, Poincaré's recurrence theorem states that any bounded system (i.e. a bounded universe) will eventually return to its original state, and because the laws of physics are deterministic, it will repeat its evolution: while a gas looks random, it can be considered a (statistically) deterministic system.  A simple solution to this paradox is if we introduce 'radical uncertainty' and consider the universe as a non-ergodic system.  The fact that no one thinks this is the obvious solution probably indicates it would open a much nastier can of worms for physics.

Noah Smith has argued that 
Physics intuition is all about symmetry, and about finding elegant (i.e. easy) ways to solve tough-seeming systems. In econ, that rarely matters at all; the intuition is all about imagining human behavior.
This is all true and also explains why physics intuition is often unhelpful in an economic setting.  When physicists talk about symmetry they are talking about something being invariant under transformation, i.e. there is something unchanging in nature that can be fixed upon and there is something being conserved (Noether's Theorem).  The issues I have with economics adopting physicists' intuition will evaporate when someone can identify, and justify, what it is in economics that is invariant: what quantity is being conserved.  The reason economists focus on human nature is that human nature is inconstant.

Time is important in theology, finance and physics.  I don't understand, or even know, the details of what the current consensus on time in physics is, but my intuition is that time in physics does not generally have a direction, and while there is a thermodynamic arrow of time (entropy; modulo recurrence), at the quantum level, dominated by uncertainty, time is symmetric.

Where I have a real problem with physics is in the area of multiverses, particularly the many-worlds interpretation of quantum mechanics.  As far as I can tell, the many-worlds interpretation exists because physicists don't like the idea that when a wave function collapses it collapses simultaneously across the universe, so that information is transmitted at a speed faster than that of light.  We mock medieval scholastics for having (apparently) argued about how many angels could dance on the head of a pin, yet we take physicists and their many worlds, employed to address a technical issue internal to physics, seriously.  My issue is that they appear to resolve their problem, of having to deal with a probability distribution, by replacing a difficult issue related to time and radical uncertainty with an "ensemble" interpretation.  My frustration with Ole Peters is that physicists believe in the ensemble approach to uncertainty (that there exists an infinite collection of paths and the universe is on one of these paths) and then suddenly realise it is a bit meaningless in economics.
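
The gap between the two views of uncertainty can be made concrete with the standard multiplicative coin-toss example often used in this debate.  The sketch below uses made-up multipliers and is not Peters' own model; it simply shows how the ensemble average can grow while almost every individual trajectory decays.

```python
import random

UP, DOWN = 1.5, 0.6          # wealth multipliers for heads / tails (illustrative)

# Ensemble view: the expected multiplier per round is (1.5 + 0.6) / 2 = 1.05,
# so the average over many parallel trajectories grows 5% a round.
ensemble_growth = (UP + DOWN) / 2

# Time view: a single trajectory experiences the geometric mean per round,
# sqrt(1.5 * 0.6) ~ 0.95, so almost every individual path shrinks.
time_growth = (UP * DOWN) ** 0.5

def one_path(rounds=1_000):
    """Simulate a single trajectory; with high probability it decays towards zero."""
    w = 1.0
    for _ in range(rounds):
        w *= UP if random.random() < 0.5 else DOWN
    return w

print(ensemble_growth, time_growth, one_path())
```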

In the aftermath of the discovery of evidence for gravitational waves, the theologian Giles Fraser has argued that science is becoming like religion: it argues that asking what came before the Big Bang is a non-question, just as monotheists argue that the question of who created God is a non-question.  A response from the physicist Jon Butterworth is that physicists deal with the nature of the universe while theologians address the meaning: a fact/value dichotomy.  While Butterworth acknowledges that there is an interplay between fact and value, I think the interplay is far more important than Butterworth implies.  My case in point, time, clearly demonstrates this.  The problem Michael, theoretical physicists and I have with time is that, for me at least, there is no clear demarcation between the nature and meaning of time.  In fact the nature and meaning of time could well be ambiguous, even within physics, and scientific integrity demands we take this possibility seriously.

Thursday, 27 March 2014

The Republic of Science

I do all my teaching in the Spring and am finding I have no time to blog or tweet.  This has given me some insight into my old tutorial colleague Felicity Mellor's thesis about scientists keeping quiet, since my social media silence has enabled me to fix some ideas.

When I returned to Twitter, apart from being amazed at the continued quantity and quality of Arthur Charpentier's tweets, I read an article in Nature by Roger Pielke Jnr celebrating 75 years of J. D. Bernal's The Social Function of Science.  I had never heard of the author or the book, but what caught my attention, following my spring hiatus, was the following passage
Bernal's great intellectual adversary was Michael Polanyi, a Hungarian chemist who was opposed to Soviet ideals. Polanyi's classic 1962 journal article 'The Republic of Science: Its Political and Economic Theory' (Minerva 1, 54–73; 1962) posits that individual scientists pursuing truth led to the most efficient social outcomes. The parallels with Adam Smith's “invisible hand” guiding capitalist economies could not have been accidental.
 Pielke creates a demarcation between socialist and capitalist stances on science, and I sense he uses Bernal to characterise socialist science with Polanyi characterising capitalist science.  Pielke's penultimate paragraph is
Although Bernal lost the intellectual battle over cold-war politics, his ideas on the social function of science have triumphed on nearly every count. The larger and more significant effect of The Social Function of Science has been to anticipate and help the ideal of 'pure science' to reach mythical status, ushering in an era of science focused on societal needs, today characterized as 'grand challenges' by scientists and politicians.
Passing over the fact that, as a mathematician, I do not associate 'pure science' (pure mathematics) with societal needs, I think Pielke's analysis is simplistic.

Michael Polanyi was the brother of the economic historian Karl Polanyi.  Karl's reputation was built on his book The Great Transformation, which described how the modern (British) capitalist system emerged in the nineteenth century.  Karl was not opposed to socialism: The Great Transformation was based on lectures given for the Workers' Educational Association, and his marriage to a card-carrying communist prevented him from living in the US when he was at Columbia.  I am interested in Karl's work because he considers the decline of reciprocity in exchange in The Great Transformation, a theme of interest to me.  The association of Adam Smith with 'capitalism' is modern; Smith was interested in the process of commercial exchange, which he studied in the context of Aristotelian ethics: reciprocity/fairness.  Smith distinguished commercial exchange from altruism on the basis that it was fair exchange; as Aristotle observed, "there is no giving in exchange", but just as there is no charity, there is no theft in commercial exchange.

Our contemporary understanding of Smith is clouded by the competitive metaphors that emerged in the early nineteenth century.  When Michael Polanyi speaks of the "free cooperation of individual scientists" I would suggest that he considers exchange in the context of the doux-commerce thesis that dominated the Enlightenment.  A 1704 technical text on commerce argues "Commerce attaches [men] to one another through mutual utility"; while in The Rights of Man (1792) Thomas Paine writes "commerce is a pacific system, operating to cordialise mankind".  In the intervening years Montesquieu, Hume, Condorcet and Adam Smith all agreed that commerce was a powerful civilising agent, promoting honesty, industriousness, probity, punctuality and frugality, in contrast to the excesses of the absolute monarchies of the preceding centuries.  When Michael speaks of The Republic of Science I would argue that he is thinking of res publica, 'the public affair'.  So for me, Polanyi is concerned with the role of science in the life of the polis.

A contemporary of the Polanyis and Bernal was Franz Borkenau, another Marxist but unlike Bernal a Marxist who was opposed to authoritarianism.  Borkenau is important to me because he first identified the origins of western science in the scholastic analysis of exchange in The Transition from the Feudal to the Bourgeois World View (1934).  I think Borkenau is relevant to this discussion because he encapsulates the point that the demarcation is not between socialist and capitalist stances on science but between democratic and authoritarian stances on science.  Unfortunately, Bernal appears to sit on the 'authoritarian' side of the fence.

This is significant because my spring hiatus has been dominated by reading Cheryl Misak's Truth, Politics, Morality: Pragmatism and deliberation, which was recommended to me by Matthew Festenstein in response to his reading my paper on reciprocity (revised in light of this reading).  Misak's theme is "Why must we value cooperation and equality" in politics, and she argues that we can only be sure of the validity of our beliefs by putting them up for criticism and offering reasons to justify them.  This is an epistemic argument that she claims justifies democracy: if in politics we seek the best policy we must allow our decisions to be challenged and be in a position to defend the decisions without resorting to authority; this requires that we are cooperative.  In my mind, there is a correspondence between Polanyi's and Misak's cases, and it lies in opposition to Bernal's (apparent) endorsement of authoritarian science.

I am no expert, but I doubt the claim that The Social Function of Science has ushered in an era of science focused on societal needs, today characterized as 'grand challenges' by scientists and politicians.  The basis of my doubt is my intuition that Vannevar Bush and other mid-twentieth century US science policy makers were guided by the Pragmatism of American democrats like Peirce, James and Dewey, and not by the ideas of a British communist.  I have two criticisms of Pielke's article.  First, in associating contemporary science with socialism he is asking for trouble, when he could more meaningfully associate it with democracy.  Second, I am a bit queasy about Pielke's "Pure Scientist", who confines themselves to presenting the current state of knowledge.  As a Pragmatist, following Susan Haack, I cannot accept that scientists can be so 'pure' and objective, and to suggest they can is disingenuous.  I do not think that Pielke would be happy with me suggesting that his model implies a Pure Scientist is more authoritative than others, as Brian Cox advocates, but I also think the road to hell is paved with good intentions.

Anyway, Felicity, Roger and I are all speaking at the Circling the square: Research, politics, media and impact conference in May, so maybe we can discuss all this over a Burton Ale?

Wednesday, 8 January 2014

Can financial economics learn from Dancing with the Stars?

Carola Binder (@cconces) brought my attention to a paper by Paul Rubin that was his presidential address to the Southern Economic Association.  What caught my attention was that the paper is concerned with fear of markets (he coins the term 'emporiophobia', fear of merchants or markets), which Prof Rubin associates with the emphasis economics places on competition rather than cooperation.  These themes resonate with me, since I believe that central to the Fundamental Theorem of Asset Pricing, the key theory of Financial Mathematics, is the concept of reciprocity, and this reflects a profound relationship between mathematics, markets and ethics.  In particular, I hold the somewhat perverse view that it is not markets that degrade public morality, but society that degrades commercial morality.  In my mind, Rubin's paper holds implicit support for my views.

Rubin observes that economics adopts the competition metaphor from sport, citing Deirdre McCloskey, who is important for me because she introduced me to contemporary Virtue Ethics.  This struck me as significant because competition is not the reason I encourage my five-year-old son, who is showing all the sporting feebleness of his father, to participate in sport.

An aspect that Virtue Ethics and Pragmatism share is that there are 'goods internal to practices'; here 'goods' does not mean 'commodities', but things that are 'good'.  MacIntyre explains (p 188) that if a child is encouraged to play chess by offering them sweets if they win, they will see chess as an avenue to sweets and fail to recognise the 'goods internal' to the practice of chess (developing a particular analytic skill).  As a result they will see no fault in cheating in order to receive sweets.  The ethical problem is that the child is focusing on goods external (sweets) to the practice of chess.

When Prof Rubin states
Sports are specifically competitive; the entire purpose of the activity is to determine the winner.
he is, in my opinion, wrong.  For me there are more interesting internal goods in sports.  I encourage my son to play sport because it helps him socialise and understand teamwork; it helps his physical development; and as he gets older the pursuit of excellence in sport develops his ability to achieve objectives, something that is at the heart of Strictly Come Dancing (Dancing with the Stars).  Could the economist Paul Rubin explain why hundreds of thousands of sports fans spend money supporting teams that are unlikely to win anything, if the essence of sport is about competition and winning?  In fact we are disgusted when the 'external good' of winning comes to dominate sport: Lance Armstrong.  The financial corollary, and on this point I believe Prof Rubin and I are in agreement, is that the internal good of markets is in creating ties that bind (enabling exchange); the external good is making money.

What I think Prof Rubin highlights is MacIntyre's 'disquieting suggestion' that modern moral philosophy has become so separated from ethics that it is incapable of making moral judgments.  Rubin attempts to defend markets by claiming the error originates in adopting the sporting metaphor.  I would argue the error is in failing to appreciate the goods internal to practices, which arises because there is an emphasis on the consequences of actions rather than their character.  Both Virtue Ethics and Pragmatism identify current problems as emerging during the Enlightenment: Pragmatism objects to Cartesian Representationalism while MacIntyre sees broader issues emerging out of Individualism and Autonomy.  These observations challenge what it is to be a contemporary scientist.

While I agree with most of MacIntyre's argument, we would probably not agree on economic issues, and in fact I diverge from many who advocate applying Virtue Ethics to transform contemporary financial practice; the point of contention is the significance of scarcity.

The earliest Greek text that addresses economic questions was Works and Days, written by the poet Hesiod around 750 BCE, pre-dating the scientist-philosopher Thales by some one hundred years.  The poem starts with the myth of Pandora, relating the strife of mankind to the introduction of technology, and goes on to identify the role that scarcity has in determining human behaviour -- the gods make life a struggle for mankind, and so we have to work hard.  The archaeological record for the Aegean at this time suggests that climate change led to a long-running famine and there was migration from the main city-states into new areas for farming.  Hesiod's father had been a farmer and merchant in north-western Anatolia, but the recession had forced him to move to Boeotia in modern Greece.  The contemporaneous Biblical story of the Fall is similar: after eating from the tree of knowledge humans are expelled from the Garden of Eden.  In the British Isles we see evidence of extensive trade networks in the large quantities of axe-heads that seem to have been used in commerce, in contrast with the forts that characterise the Iron Age, centres of control over resources.

Following the sack of Rome by the Goths in 410 CE, Augustine of Hippo wrote The City of God against the Pagans.  The book
 Censures the pagans, who attributed the calamities of the world, and especially the recent sack of Rome by the Goths, to the Christian religion, and its prohibition of the worship of the gods
The pagans were arguing that the destruction of Rome, a disastrous event that impoverished the city, was a consequence of Christianity.  Augustine countered, and laid the foundations of the Catholic Church, by offering a Platonic argument that the corporeal world was an unnatural state for the Christians, who would eventually return to The City of God.  To counter the Pelagian and Donatist heresies Augustine argued against free will, and while he agreed with the Stoics about the uselessness of divination and astrology in predicting the future, he disagreed with the pagan view that God had no foreknowledge
 God knows all things before they come to pass, and that we do by our free will whatsoever we know and feel to be done by us only because we will it . ...  all wills also are subject [to God], since they have no power except what He has bestowed upon them.
In his book On free choice of the will Augustine employs mathematical analogies to convince his audience that there are transcendental truths
For those inquirers to whom God has given the ability ... [can see that] the order and truth of numbers has nothing to do with the senses of the body, but that it does exist, complete and immutable, and can be seen in common by everyone who uses reason
Augustine is incorporating the immutability and indubitability of mathematics into Catholic dogma.

Between 1000 and 1300 CE Europe underwent a period of economic expansion that saw the population double while the amount of coin in circulation increased six-fold.  Driving the expansion was a rise in temperature that led to greater agricultural productivity, population growth and, after the late twelfth century, an expansion of commerce centred on the Mediterranean and an urban explosion.
Moses ben Maimon (Maimonides) argued in his Guide for the Perplexed, written in 1190 and an influence on Aquinas, that God's punishment after the Fall of Man was not so much scarcity as uncertainty.  In the Garden of Eden humans had perfect knowledge, which was lost with the Fall, and it is the loss of this knowledge which is at the root of suffering - if we know what will happen we can manage scarcity.  The thirteenth century saw the ultimate flowering of European thought that laid the foundations for the scientific revolution of the seventeenth century.

The great expansion of Latin Christianity went into reverse at the start of the fourteenth century.  Between 1315 and 1322 abnormally cold winters were separated by abnormally wet or dry summers; harvests failed, leading to the Great Famine in which about 10% of the population died in some urban areas.  These famines were followed in 1347 by the Black Death, and by 1350 the plague had spread throughout Europe, causing the death of, in places, a third of the population, hitting the poor and urban communities particularly hard.  The plague was not a single event, but reappeared, and the successive calamities of the 1320s, 1350s and then 1360s would have traumatised society.  The influence of the Church suffered as clerics were disproportionately infected by plague, since they cared for the sick, and it proved unable to protect people from the disease.  The Scholastics, using either faith or reason, were unable to explain what was happening around them, and in the words of the historian William Bouwsma, they failed to "give to life a measure of reliability and thus reduce, even if it cannot altogether abolish, life's ultimate and terrifying uncertainties".

After Latin Christianity lost its empire in the Eastern Mediterranean the Crusades were replaced by dynastic conflicts.  While England and France had strong monarchies, fighting each other in the Hundred Years War between 1337 and 1453, the Holy Roman Empire and Italy became fragmented.  Merchant cities on the Atlantic and Baltic coasts exerted their independence from German princes, who themselves had vast estates and were capable of challenging the Emperor.  The cities of northern Italy became dominated by signori, while the Papacy and the Kingdom of Naples, which had driven European culture in the twelfth century, became pawns of the French and Spanish/Hapsburg monarchies.

The agricultural collapse in northern Europe and disruptive wars led to a collapse in commerce; particularly hard hit were the Flemish towns, which had emerged as centres of trade following Philip IV the Fair's suppression of the Champagne Fairs between 1305 and 1309.  The primary concern of the autocrats was the maintenance of their power, power which was based on armies manned by mercenaries that needed to be paid: Pecunia nervus belli, 'money is the sinew of war'.  The subtleties of Scholastic economic analysis would be replaced by demand for scarce gold and silver.

By the start of the sixteenth century the scarcity associated with the previous centuries was replaced by uncertainty.  The anxiety of the fourteenth and fifteenth centuries resulted in Calvin's revival of strict Augustinian theology, which proved popular among mercantile communities that were growing as wealth and power moved from the Mediterranean to the Atlantic following the discovery of the Americas.  Confessional wars erupted in France, the Netherlands, Central Europe and the British Isles.  The Dutch Wars, initially stimulated by high Hapsburg taxation, resulted in the rebels paying a higher tax burden, but significant developments in public finance enabled the Dutch to successfully fund their rebellion.  Key in this process was Simon Stevin, the abaco-trained mathematician who established the Dutch Mathematical School that inspired Descartes.  The Thirty Years War, characterised by bellum se ipse alet, 'let war pay for itself', resulted in widespread devastation, but the 'Gothic atrocity narrative' that emerged during Romanticism was re-evaluated in the twentieth century, and contemporary historians argue that "Sudden changes in fortune became a defining characteristic of the conflict".

In England 1665 saw Plague; 1666 the Great Fire; 1672 the Great Stop of the Exchequer; 1673 an Act of Common Council that looked to put an end to "usurious contracts, false Chevelance, and other crafty deceits"; an unsuccessful revolution in 1685 and a successful one in 1688; there was a stock-market boom in the early 1690s, with around 40% of the trades between 1692 and 1695 being in stock options; 1694 saw The Million Adventure Lottery and the creation of the Bank of England; 1696 brought the re-coining, precipitating what has been described as "the gravest economic crisis of the century", which was partially resolved by Isaac Newton; and 1697 saw the first legislation "To Restrain the number and ill Practice of Brokers and stock-jobbers" being passed.  The turbulence of these thirty-two years saw the creation of the City of London that is familiar today and laid the foundations of Britain's empires over the next three hundred years.

The frequentist approach to probability began to dominate the ethical treatment of probability following the claimed defeat, or taming, of chance by mathematics with the publication of Montmort's Analytical Essay on Games of Chance of 1708 and de Moivre's The Measurement of Chance of 1711, developed in The Doctrine of Chances of 1718.  All these texts were developed more in the context of gaming than in the analysis of commercial contracts, which had been the focus of the work of Pascal, Huygens and Bernoulli.  The Doctrine was the more influential, introducing the Central Limit Theorem, that the sum of many independent random variables will be approximately Normally distributed, a result that can be used to argue that asset prices should be log-Normally distributed, as in the Black-Scholes-Merton model.  In addition, by 1735 it was being argued that the achievements of the seventeenth-century probabilists from Huygens to de Moivre superseded the classical approach to probability, which divided events into the certain, probable and unpredictable, in that it measured probability and changed the status of the 'unpredictable'.
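
The step from de Moivre's theorem to log-Normal prices is that a price is the product of many small relative moves, so its logarithm is a sum of independent log-returns to which the Central Limit Theorem applies.  A minimal simulation of that reasoning, with illustrative parameter values and deliberately non-Normal individual moves, is sketched below.

```python
import math
import random

def terminal_price(s0=100.0, n_steps=252):
    """Compound many small, independent, non-Normal relative moves.

    log(S_T) = log(S_0) + a sum of independent log-returns, so by the Central
    Limit Theorem log(S_T) is approximately Normal and S_T approximately
    log-Normal -- the step towards Black-Scholes-Merton style price models.
    Step size and horizon are illustrative.
    """
    log_price = math.log(s0)
    for _ in range(n_steps):
        log_price += random.choice([0.01, -0.01])   # a small up or down log-return
    return math.exp(log_price)

# the histogram of these terminal prices is approximately log-Normal
prices = [terminal_price() for _ in range(10_000)]
```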

It is remarkable that the development of mathematical probability was undertaken almost exclusively by Augustinians: Pascal was a Jansenist; Huygens, Bernoulli and de Moivre were Calvinists; Montmort had been trained to be an Augustinian but renounced orders to marry.  This observation is compounded by the facts that Newton was an Arian/Anglican and Leibniz a Lutheran, and neither did significant work in probability, while Fermat was a Catholic living in the mixed Calvinist/Catholic city of Toulouse.  As Augustinians the probabilists all believed in God's pre-destination and omniscience: they denied the existence of randomness, and events were unpredictable only because man could not understand God's intentions.  Over the course of the next hundred years the implicit determinism of the Augustinians became a standard feature of Western science, being codified by Laplace in the 1820s.

From the mid-eighteenth century Romanticism, emphasising an individual genius' emotional reaction to nature, emerged to eclipse the Classical rationalism of the previous century.  By the end of the century, Thomas Malthus captured the anxiety of the English rural middle class following the Terror of the French Revolution in An Essay on the Principle of Population, which focused on scarcity.

At the height of Romanticism, in 1836, John Stuart Mill defined political economy as being
 concerned with [man] solely as a being who desires to possess wealth, and who is capable of judging the comparative efficacy of means for obtaining that end.
and defended Malthus in Principles of Political Economy of 1848, written at a time when Europe was struck by the Cholera pandemic of 1829-1851 and the famines of 1845-1851.  Alfred Marshall synthesised Mill's approach to economics with Darwinian metaphors of competition to lay the foundations of neo-classical economics.  Marshall's 1890 definition of economics would be pared down by Lionel Robbins in 1932 as "the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses".

In the years before the First World War economic policy, particularly the interest rates that governed credit, was dominated by the gold standard, while after the Second World War it was Bretton Woods.  The collapse of Bretton Woods in August 1972 had an immediate impact on interest rates.  In the 27 years between 1945 and autumn 1972 the Bank of England changed its lending rate 43 times; in the 27 years after 1972, it changed it 223 times.  A key economic factor had gone from being fairly stable to being a random process.  Similarly, as the US dollar fell in value, price-setting mechanisms in commodities, notably oil, collapsed.  It was in this environment that financial derivatives made a re-appearance after having been dormant for half a century.

The first academic paper by the mathematician Kolmogorov was in economic history; when he submitted it he was told that "You have supplied one proof of your thesis, and in the mathematics that you study this would perhaps suffice, but we historians prefer to have at least ten proofs."  The purpose of the preceding narrative is not to prove a historical fact but to demonstrate that there is an ebb and flow between society being concerned with scarcity and being concerned with uncertainty, a distinction made explicit by ben Maimon.

I suggest that when faced with scarcity, society responds by focusing on individuality and acquisition, while when society is challenged by uncertainty it turns to communality, seeking to diversify risks.  Aristotelian Virtue Ethics have flourished during periods of growth and expansion and have declined during periods of scarcity.  This explains the rise of utility maximisation, which deals with scarcity in a Romantic context of the individual genius struggling against nature; but I propose that since the 1960s society has been focused more on uncertainty than scarcity and is struggling to shift the economic paradigm in response to the changed environment.

This opens up a debate as to whether we face uncertainty or scarcity at the moment, but I see the obsession with scarcity, and the associated fear of change, as a degenerate state.  People enjoy Strictly/Dancing with the Stars because it tells a tale of heroic and optimistic pursuit of excellence, and I think this is our natural state.

Tuesday, 31 December 2013

New Year's Resolutions

What have I learnt in 2013 and what do I hope to achieve in 2014?

This time last year I was suffering from persistent pain just under my ribs on the right hand side that was initially diagnosed as gallstones on 2 January; the presence of stones was confirmed by an ultrasound later in the month.  I then went through two more months of pain because the NHS consultant did not believe I had gallstones: statistically I was too young and not fat enough to be troubled by stones, and the prior probability was not being updated with the information that I had pain centred on my gall-bladder.  Eventually I was admitted to hospital with jaundice and an infected gall-bladder.  A few months later my father-in-law was diagnosed with depression; in September, following mild paralysis, they realised his mood change was a consequence of a brain tumour.  My father-in-law died on 26 December.  I learnt, as much of the British public have come to realise, that the NHS is not as perfect as we like to think.

While in hospital, on morphine, it struck me that I should look into Pragmatism.  Donald MacKenzie had mentioned Pragmatism in 2012 and I was aware Poincaré was linked to the philosophy, but I did not really understand it - why should I, as a mathematician?  I spent the 8 weeks recovering, and then most of June-July, reading up on the topic.  Pragmatism enabled me, as an atheist, to reconcile Virtue Ethics with Financial Economics and I drafted my paper Reciprocity as the Foundation of Financial Economics.  This was a significant silver lining to the cloud of being in pain for three months.

In August I met Brett Scott (@Suitpossum) who, like me, was speaking at the Edinburgh Fringe's Cabaret of Dangerous Ideas.  It was good to meet Brett, whom I admire greatly.  He was able to make explicit (for me) that there was an issue around mathematics obscuring, rather than enlightening, finance, and since we met this has become an increasingly important issue for me.  I also came to believe that, while Brett and I agree on many issues, our difference is that he has a fundamental concern with scarcity while I have a fundamental concern with uncertainty.  Later in the month I was interviewed by David Fergusson at the Cabaret of Dangerous Ideas.  It was helpful for me that David saw some merit in the work I was doing, and we might collaborate on the relationship between science and religion in the future.  David recommended that I read After Virtue, which I am still in the process of completing, but I have read the "students' guide".  Following on from my meeting with David I met Paolo Quattrone and Michael Northcott and discussed issues relating to re-orienting finance.  Michael, as an Episcopalian, made some comments about how time is irrelevant in the Christian context but dominates finance, and this seemed to link to Brett's views rooted in deep ecology.  I have thought a lot in the past about the relationship between randomness and time.

As well as these face-to-face interactions, Arthur Charpentier (@freakonometrics) is my modal "Favorite" on Twitter while Noah Smith (@Noahpinion) has prompted many of my blog posts.  Thanks go to Jon Harris (@jonone100) and Dave Marsay for useful comments on my blog and thanks to Mark Thoma (@MarkThoma / economistsview.typepad.com) for disseminating my posts.

In the latter quarter of the year, with the REF mayhem out of the way, I began to look forward to where my research should focus.  In April the IMA conference on Mathematics in Finance had taken place, with me as the (ill) Chair of the organising committee.  The Bank of England had provided input into the organisation of the meeting and had highlighted the need for mathematicians to shift their perspective away from stochastic calculus and re-focus from micro- to macro-economic issues, as in Size and complexity in model financial systems, and to address the concerns that would be identified in para. 89 of v. II of Changing Banking for Good, that mathematics aids 'insincerity' in financial practice.  A bit later, around November 2012, Kenneth Lloyd, a software engineer, responded to my piece Ethics and Finance: The Role of Mathematics and asked whether I had ever considered modelling financial networks based on reciprocity and on profit maximisation, and working out which would be "better".  Finally, I have been following the emergent phenomena of peer-to-peer lending and crowdfunding, being an investor this year in Harlaw Hydro, a crowdfunded community energy project.

As a result of these interactions I will be looking to follow up on Kenneth's suggestion and investigate whether financial systems are more or less resilient and effective if based on different commercial cultures, e.g.: profit/loss sharing (Islamic musharakah); loan interest based purely on the objective risk borne by the lender (the Scholastic usury prohibition); interest rates determined by the opportunity cost (market based); or interest that aims to maximise returns to the lender.  I intend to model financial systems as graphs and study how the different commercial cultures affect the evolution of the graph topology, and then how money/credit is transmitted on different graph topologies.  Effectiveness will be measured by the efficiency in enabling lending, and resilience by the ability of a financial network to withstand shocks generated by losses.  This research question addresses an issue in moral philosophy - what is the relationship between ethics and the structure of the polis - and is motivated by themes in Pragmatic philosophy, in particular the hypothesis that reciprocity emerges as a social norm to enable resilient and effective communities where exchange is important.  The different commercial cultures can be seen as representing points on a spectrum of commercial attitudes, and in cultures where there are low interest rates the hypothesis is that there will be greater homophily between the agents in the network, and this will create more resilient and effective financial networks.  The research will aim to inform regulators of any intrinsic merits of emerging financial mechanisms, particularly crowdfunding and peer-to-peer lending, especially as there is concern that the regulators will stifle the democratisation of finance.  A toy sketch of the kind of shock-propagation experiment I have in mind follows below.
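
The sketch below is a minimal, illustrative version of such an experiment: a random directed lending graph and a naive loss-propagation rule, with resilience read off as the size of the default cascade.  The graph model, the default rule and every parameter value here are assumptions made for illustration, not the planned methodology.

```python
import random

def random_lending_network(n_banks=50, p_link=0.1, exposure=1.0, capital=2.5):
    """Build a random directed lending graph: loans[i][j] is the amount i has lent to j."""
    loans = {i: {} for i in range(n_banks)}
    for i in range(n_banks):
        for j in range(n_banks):
            if i != j and random.random() < p_link:
                loans[i][j] = exposure
    capitals = {i: capital for i in range(n_banks)}
    return loans, capitals

def cascade(loans, capitals, first_default):
    """Propagate losses: when a borrower defaults its lenders write off the loan;
    a lender whose cumulative write-offs exceed its capital defaults in turn."""
    defaulted = {first_default}
    losses = {i: 0.0 for i in capitals}
    frontier = [first_default]
    while frontier:
        nxt = []
        for failed in frontier:
            for lender in loans:
                if failed not in loans[lender] or lender in defaulted:
                    continue
                losses[lender] += loans[lender][failed]
                if losses[lender] > capitals[lender]:
                    defaulted.add(lender)
                    nxt.append(lender)
        frontier = nxt
    return len(defaulted)

loans, capitals = random_lending_network()
print(cascade(loans, capitals, first_default=0))   # resilience ~ size of the default cascade
```

Different commercial cultures would enter through the rules that generate the links and set exposures and capital, with effectiveness and resilience then compared across the resulting topologies.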

If anyone is interested in being involved in this project, please do get in touch.

Happy New Year!

Thursday, 19 December 2013

Is finance guided by good science or convincing magic?

Noah Smith posted a piece, "Freshwater vs. Saltwater" divides macro, but not finance.  As a mathematician I did not really understand the argument (a nice explanation is here) but there was a comment from Stephen Williamson that really caught my attention
Another thought. You've persisted with the view that when the science is crappy - whether because of bad data or some kind of bad equilibrium I guess - there is disagreement. ... What's at stake in finance? The flow of resources to finance people comes from Wall Street. All the Wall Street people care about is making money, so good science gets rewarded. I'm not saying that macroeconomic science is bad, only that there are plenty of opportunities for policymakers to be sold schlock macro pseudo-science.
 What I aim to do in this post is offer an explanation for the 'divide' in economics from the perspective of moral philosophy and on this basis argue that finance is not guided by science but by magic.

As a Scottish Marxist in the 1950s, Alasdair MacIntyre wanted to establish if Stalinism could be reasonably criticised without undermining Marxism.  In 1981 MacIntyre published his conclusion: modern moral philosophy was incapable of reasonably criticising anything because it was dominated by arguments based on willpower.  MacIntyre argues (my naive interpretation) that society should focus on where it wants to get to, and then work out how to get there; this is the Aristotelian approach that was abandoned in the nineteenth century.  The dominant philosophical approach is that science establishes principles and society is a deductive consequence of those principles.  The Nietzschean approach is to dominate the discussion of the principles, through willpower, and so determine the direction that society travels in.

For example, all (British) mainstream political parties believe in 'fairness', but we never discuss what a fair society actually looks like.  The debate focuses on the conflicting principles that fairness is founded on equal allocation of resources or that it is based on equal opportunity; these apparently similar principles lead to very different political positions.  The contemporary economic debate seems to be governed by the conflicting principles of whether you are pro-austerity or pro-deficit, not a discussion of what type of society people want and how, whether through tight or loose monetary policy, this society will be achieved.

I think there is a symbiotic relationship between the Nietzschean approach and academics.  If the focus of the debate switches to the democratic deliberation of what type of society people want, scientists become the servants of society.  In the current system, scientists become the guides of society.  MacIntyre's moral philosophy emasculates academics and it is not surprising many regard it as relativistic mumbo-jumbo.

Relativism is a problem if differing points of view are seen as exclusive options.  The solution to relativism is not to shrug your shoulders and accept that the sacrifice of children is acceptable in a cultural context; it is to engage in deliberation that attempts to understand the reasons for the action and to reflect on where there are weaknesses in your own cultural norms.  The disagreement in macro-economics is an indication that economists see some benefit in the process of public deliberation.

The current political paradigm includes the principle that policy should be 'evidence based', reliant on 'science', without much thought to how 'science' is constructed.  The problem with the economic debate is that the discussion is focusing on the scientific questions, not the democratic questions.

Most physical scientists will have given up a few paragraphs ago, because scientists are committed to the belief that they are guided by 'nature'.  What these types of scientists generally refuse to acknowledge is that they choose which questions to answer.  In The Value of Science Poincaré argued against the idea, popular today, of "science for science's sake"; rather, the scientist should concern themselves with identifying the hidden connections between apparent facts in the service of society (mathematics is there to do this when experimentation is not possible).  A good scientist is someone who asks the important questions.

The advantage that physical scientists have over social scientists is that they have a significant degree of autonomy over what they study.  People believe that cosmology and particle physics are important because there are a lot of (historically) well-funded cosmologists and particle physicists telling them that these are important, and they close the discussion down by appealing to the almost divine authority of 'Nature'.  Brian Cox seems to really believe that he is guided by Nature in determining that it is more important to look for the Higgs Boson than to work out obtuse questions, such as "what is money".  The result is that the chattering classes have a better comprehension of quantum mechanics than of the relatively straightforward financial system, and the effect is that people are perplexed by financial crises and unable to formulate coherent responses to them.  Cox has said the money spent on finding the Higgs Boson (around $13 billion, the accounts are not clear; a cynic might say that at this cost they were bound to prove the maths) was well spent in comparison to the £38 billion banking support - the point is that finding the particle does not contribute to mitigating financial disasters, and the UK's share of the LHC funding, in retrospect, could have been better spent.

In 1902 Marcel Mauss and Henri Hubert wrote in A General Theory of Magic
 The magician is a person who, through his gifts, his experience or through revelation, understands nature and natures... Owing to the fact that those magicians came to concern themselves with contagion, harmonies, oppositions, they stumbled across the idea of causality, which is no longer mystical even when it involves properties which are no way experimental
The two distinguish magic and science by observing that magic is based on belief in a set of rituals.  A person will only consult a magician if they have faith in the actions that the magician will perform.  Science is not based on belief in its theorems, the equivalent of magic's rituals, but on a belief in the process by which science is created.  This is a subtle point, but the effect is that magic is necessarily static: a contemporary astrologer would have more authority if they claimed to be an expert in ancient knowledge.  Similarly, most religions claim to encapsulate what is permanent in a changing world.  On the other hand, science is necessarily dynamic: we trust modern science's explanations of cosmology more than those of the Babylonians.

The implication of this distinction is that either mathematics exists independently of human thought and mathematicians discover theorems - Platonism, or 'Mathematical Realism', under which mathematics is immutable, as Augustine claimed - or mathematics is created by living, breathing mathematicians in response to the world around them.  The advantage of Platonism is that it provides scientists with a stable framework in which they can work, and it is regarded by many scientists, such as the physicist Roger Penrose, as an invaluable tool.  On the other hand, the implication of Anti-Platonism is that mathematics is dependent on society's attitudes, and its claims to certainty are as strong as the claims to certainty of the social sciences.

While magic and science are distinguished by static or dynamic belief, Mauss and Hubert distinguish magic and religion by hidden and open belief:
Where religious rites are performed openly, in full public view, magical rites are carried out in secret... and even if the magician has to work in public he makes an attempt to dissemble: his gestures become furtive and his words indistinct.
The suggestion is that, for science to be reputable and maintain a divide with magic, it needs to be carried out, like religion, in the open. As soon as either science or religion takes place out of the public arena, they risk degenerating into magic. Today, many scientists, in particular social scientists, regard scientific knowledge as 'shared belief', not necessarily 'justified belief'; science is less about 'truth' and more about 'consensus', as in an Italian definition of science:
the speculative, agreed-upon inquiry which recognizes and distinguishes, defines and interprets reality and its various aspects and parts, on the basis of theoretical principles, models and methods rigorously cohering
Science is speculative, not certain, and agreed-upon, not secret.  It is on this basis that society can begin to understand the value of science.

So, in response to Stephen Williamson's implication (it is an implication, he does not make the statement outright) that finance is more scientific, I have two comments. Firstly, I agree that the focus of contemporary finance is on "making money" and this provides a clear objective for the discipline to work towards. The question I would pose is: is "making money" a good internal to the practice of finance? I would argue that the good internal to finance is the effective distribution of money to fund economic activities. The monomania of Wall Street needs to be challenged and I wonder whether re-orienting finance to focus on (my view of) its internal goods would result in such a 'scientific' finance.

The second issue is highlighted in the UK Parliament's report on banking. The document makes few references to the role of mathematics in finance, but where it does it is damning:
89. The Basel II international capital requirements regime allowed banks granted "advanced status" by the regulator to use internal mathematical models to calculate the risk weightings of assets on their balance sheets. Andy Haldane described this as being equivalent to allowing banks to mark their own examination papers. A fog of complexity enabled banks to con regulators about their risk exposures:
[...] unnecessary complexity is a recipe for […] ripping off […], in the pulling of the wool over the eyes of the regulators about how much risk is actually on the balance sheet, through complex models.
The science, the mathematics, is not being used to enlighten finance but to obscure its practices. Recently the report on J.P. Morgan's London Whale revealed how, by tweaking their model, the bank could reduce their apparent exposure from around $40 billion to $20 billion. The Whale report highlights how finance is actually more committed to 'rituals' around risk management than to the 'science' of risk management, and this seems to be facilitated by mathematics.
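As an illustration of how such 'tweaking' can work in principle (a hypothetical sketch, not a description of J.P. Morgan's actual model or numbers), the reported risk of two positions aggregated by the standard variance-covariance rule depends heavily on the correlation the modeller chooses to assume:

    import numpy as np

    # Hypothetical standalone 99% VaRs for two books, in $bn (invented numbers).
    var_a, var_b = 25.0, 20.0

    def aggregate_var(rho):
        """Variance-covariance aggregation of two VaRs with assumed correlation rho."""
        return np.sqrt(var_a**2 + var_b**2 + 2 * rho * var_a * var_b)

    for rho in (0.9, 0.0, -0.5):
        print(f"assumed correlation {rho:+.1f}: aggregate VaR ~ ${aggregate_var(rho):.0f}bn")

    # With rho = 0.9 the aggregate is about $44bn; with rho = -0.5 it is about
    # $23bn -- the positions are unchanged, only the modelling assumption moved.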

I think there are a variety of factors involved in this obfuscation, not least the culture of associating mathematics with hidden truths: the mathematician has a magical key to financial reality. There is also a metaphorical issue. At the start of the seventeenth century Francis Bacon is associated with the metaphor of a masculine Science probing and taming the feminine Nature. Towards the end of the seventeenth century the metaphor of finance as 'Lady Credit' emerges, and I think there has been a similar sense that a masculine Science can tame the fickle and unruly Lady Credit. I think both relationships could improve by becoming less dysfunctional. Ultimately, and untypically, I associate the failures of contemporary finance not with its own unruliness but with interference by a deterministic scientific ethos. Economics might appear incoherent, but it is finance's coherence, in the wrong direction, that causes the more real problems.

Tuesday, 10 December 2013

Language barriers in understanding risk and uncertainty

Arthur Charpentier and Dave Giles reminded me of the disagreement over the terms 'random variable' and 'chance variable', and brought to my mind the misunderstandings that can occur when economists use the word "risk". Following Frank Knight, economists will often use the word "risk" to represent a 'known' probability, whereas an "uncertainty" is an 'unknown' probability. Knight gives a business example:
the bursting of bottles does not introduce an uncertainty or hazard into the business of producing champagne; since in the operations of any producer a practically constant and known proportion of the bottles burst, it does not especially matter even whether the proportion is large or small. The loss becomes a fixed cost in the industry and is passed on to the consumer, like the outlays for labour or materials or any other.
I have never found Knight's distinction particularly helpful because  the word "risk" is commonly used to represent the possibility of a loss:
 OED 1. (Exposure to) the possibility of loss, injury, or other adverse or unwelcome circumstance; a chance or situation involving such a possibility. 
with the first example appearing in 1621.  The word originates in post-classical Latin, either from the classical Latin  resecare, "something that cuts" i.e. rocks, or from the Arabic rizq which has a number of meanings: ‘provision, lot, portion allotted by God to each man’, ‘livelihood, sustenance’, hence ‘boon, blessing (given by God)’, ‘property, wealth’, ‘income, wages’, and finally ‘fortune, luck, destiny, chance’.

I prefer to use "chance" in place of Knight's "risk"
OED 1 The falling out or happening of events; the way in which things fall out; fortune; case.
from the Latin cadere, 'to fall', highlighting the relationship between "chance" and the roll of the dice.  The association between "chance" and probability was established by de Moivre with his Doctrine of Chances.  While it is unwise to contradict the Oxford English Dictionary (OED) I tend to think chance is derived from the Dutch word kans.

Ian Hacking discusses the problems Huygens, who like all Dutch mathematicians of the time wrote firstly in Dutch and then translated into Latin for an international audience, had in translating the word kans, which according to Google Translate can mean either 'chance' or 'opportunity'. The obvious Latin equivalent for kans as chance would have been sors, 'lot'. Huygens, or possibly his editor Schooten, chose expectatio, highlighting the association between chance and opportunity, whereas Knight had associated it with risk. An alternative to expectatio that Huygens considered was spes, the Latin word for the Christian virtue 'Hope', related to the Roman goddess of hope. The French still employ the word espérance for mathematical expectation while the English use expectation (the Dutch use verwachting, which has a number of translations depending on the context: 'hope, promise, expectation, forecast, prognosis').

Knight's use of "risk" was not really an innovation; according to the OED it originates with De Morgan:
OED2b. The error of an observation or result considered without regard to sign; the probability of an error; the mean weighted loss incurred by a decision taken or estimate made in the face of uncertainty; spec. = mean-square error 
De Morgan defined it in 1832
 This is what Laplace calls l'erreur moyenne à craindre en plus, and the corresponding error en moins is of the same magnitude with a different sign. We shall call it the risk of the observation, the sign of the error not being considered.
 My interpretation is that De Morgan's thinking is closely related to Quetelet's - a deviation from the norm is a risk - see my previous post on this.

The word "random" seems very inappropriate, apparently originating in Middle French randon, meaning 'speed' or 'hast'.  We have
OED A1a. Impetuosity, great speed, force, or violence (in riding, running, striking, etc.); chiefly in with (also in) great random .  (earliest occurrence 1325)
OED A2a. Gunnery. The range of a piece of ordnance, esp. the long or full range obtained by elevating the muzzle of the piece.  (earliest occurrence 1560)
OED A 3. A haphazard or aimless course.  (earliest occurrence 1565)
and the first use as an adverb and adjective
OED B1 At random, randomly. (earliest occurrence 1619)
OED C1a. Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard.  (earliest occurrence 1655)
The first use in mathematics was in 1884:
Applying the Calculus of Probabilities..to the question of whether the distribution of the fixed stars can be regarded as the result of a random sprinkling.
A mathematician might use stochastic as an adjective in preference to random or chance. The word derives from the Greek 'to aim at a mark, guess' according to the OED; my Greek PhD supervisor said it related to shooting arrows. Its first appearance in English was in 1662:
But yet there wanted not some beams of light to guide men in the exercise of their Stocastick faculty.
and then in 1688 in Cudworth's Treatise on Free Will:
There is need and use of this stochastical judging and opining concerning truth and falsehood in human life.
What do I conclude? There is no certainty in what we are talking about when using words like risk, chance and random, and it is not surprising we have difficulty dealing with the concept.




Tuesday, 3 December 2013

The rational man, the average man and the replacement of deliberation by will


A few weeks ago I changed my broadband supplier. Things were a bit ropey the first weekend and my son had problems watching Lego Star Wars videos, and he generously shared his frustration with me. The following week I had a customer service call, and when I gave the service 3 out of 10, the person on the line said “That’s great”. When I queried why 3 out of 10 was great, the answer I got could be interpreted as 3/10 facilitated a Normal distribution of satisfaction. ‘Big data’, Bayesian inference and such like are big themes in the contemporary Zeitgeist and I sometimes think that there is an attitude that if a distribution is not Normal, it is pathological, even in the case of customer satisfaction. This is an issue to me, as someone who believes that, in science at least, dependence is far more important than the independence that creates the link between Normality and the Central Limit Theorem.
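As a small illustration of why that dependence matters (a sketch with made-up numbers, not my broadband provider's data): if the observations within a sample are independent, the sample mean settles down as the Central Limit Theorem promises; if they share a common factor, it does not.

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 1000, 5000

    # Independent case: means of n independent, skewed (exponential) draws.
    indep_means = rng.exponential(size=(trials, n)).mean(axis=1)

    # Dependent case: every observation in a sample shares a common shock,
    # so the observations within a sample are strongly correlated.
    common = rng.exponential(size=(trials, 1))          # one shared shock per sample
    idiosyncratic = rng.exponential(size=(trials, n))
    dep_means = (0.5 * common + 0.5 * idiosyncratic).mean(axis=1)

    print("independent: std of sample mean =", indep_means.std())  # shrinks like 1/sqrt(n)
    print("dependent:   std of sample mean =", dep_means.std())    # stays large

The first standard deviation is tiny and the histogram of the means is close to Normal; the second barely shrinks however large n is, which is why Normality cannot simply be assumed once independence goes.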

This piece is about how the Romantic’s ‘average man’, the personification of the Law of Large Numbers and the Central Limit Theorem, replaced the Enlightenment’s ‘rational man’ and some thoughts on the consequences.  The post meanders from the Petersburg game, through Enlightenment education, to Laplacian determinism, social physics, biometrics, MacIntyre's Virtue Ethics and Austrian economics.
Jean Le Rond d’Alembert is famous in mathematics for solving the problem of a vibrating violin string and solving the fundamental differential ‘wave equation’ in the process. He was abandoned as a baby by his mother outside the Parisian church of St Jean Baptiste le Rond in 1717 and adopted by a relatively poor family. It turned out that d’Alembert’s natural father was a chevalier and a distinguished army officer, Louis-Camus Destouches, while his mother was a famous writer and socialite, Claudine Guérin de Tencin, Baroness of Saint-Martin de Ré. D’Alembert’s adoptive family was provided with money by his natural parents for Jean to study and he became a lawyer when he was twenty-one. He taught himself mathematics, being admitted to the Académie Royal des Sciences in 1741 and developing a reputation as one of Europe’s leading mathematicians by his mid-thirties.

D’Alembert also became a well known figure in Parisian society, living, ‘unconventionally’ with a famous salon-owner, Julie de Lespinasse, and working with Denis Diderot on the French Enlightenment’s Encyclopédie, which paved the way for the French Revolution. D’Alembert was sympathetic to the Jansenists, the sect Pascal was associated with, and played a role in the expulsion of the Jesuits from France in the early 1760s. Despite becoming the Secretary of the Académie Royal des Sciences, the most influential scientist in France, d’Alembert, on account of his atheism, was buried in an unmarked grave when he died in 1783.

D’Alembert lived at the height of the debates around the Petersburg game and took a rather extreme view about probability: since perfect symmetry is impossible, probability can never be objective. Because science was supposed to be objective during the Enlightenment [7, p 11], the apparent subjectivity of probability led d’Alembert to be sceptical about the whole field [4]. In fact he was possibly the first person to criticise probability for ignoring psychology, when he commented that a paper by Daniel Bernoulli advocating smallpox inoculation, by calculating the gain in life expectancy, ignored that ‘reasonable men’ might well trade the long-term risk of smallpox for the short-term risk associated with inoculation [8, p 18].

Despite this scepticism d’Alembert did provide some insight into the Petersburg game. For him, the apparent paradox arose because the game could continue for ever, for an infinite number of coin tosses. It was absurd to believe the game could offer an infinite payoff, at some point time and money would run out, and d’Alembert suggested that the game should end after the person, or the ‘casino’, putting up a stake, was bankrupted.

This line of thought was developed by the Marquis de Condorcet. Condorcet was born legitimately into the nobility, in 1743, and so was twenty-six years younger than the less blessed d’Alembert. In his early twenties, after a good education, he wrote a treatise on integration, and was elected to the Académie Royal des Sciences in 1769. After publishing another work on integration he met Louis XVI’s finance minister, Turgot, and following Newton’s footsteps, was appointed Inspector General of the Paris (French) Mint in 1774. In spite of being a member of the Ancien Régime, Condorcet had liberal views, supporting women’s rights and opposing the Church and slavery, and when the Revolution started he was elected to the Revolutionary government. However, this was not a good position to be in when the Terror began, and Condorcet went into hiding in October 1793. Fearing his political opponents were on to him he fled Paris in March 1794, but was almost immediately captured and imprisoned, dying in unexplained circumstances at the end of March, four months before the end of the Terror.

Condorcet is important in linking mathematics to the social sciences, possibly through the influence of his boss, Turgot. During the height of Louis XIV’s reign the dominant economic theory was mercantilism, which can be summed up as the belief that wealth equated to gold. In the years after the Mississippi Bubble, around the time of the Seven Years War, a new theory emerged in France in which wealth was determined not by coin but by what a country produces, in particular its agricultural production. These ideas were developed in the mid-eighteenth century as physiocracy (’rule by nature’), particularly by Turgot and Quesnay, who achieved fame as the royal physician. Quesnay, in his ‘Economic Table’, saw the economy as a system whereby the surplus of agricultural production flows through society, enriching it.

Physiocracy was popular with the aristocrats of the Ancien Régime, because it argued that all wealth originated from the land, and so the landowning class was central to the economy, with merchants being mere facilitators of the process. While the Scotsman Adam Smith is often cited as the first modern economist, he was in fact developing, in a none the less revolutionary way, the ideas of the French physiocrats [1, p 61], [13, p 165]. One of Smith’s important contributions was that it was not land, but labour, that was at the root of wealth.

Despite working at the Mint, Condorcet did not produce anything of significance in economics, though his most important work, ‘Essay on the Application of Analysis to the Probability of Majority Decisions’, was in social science. In the essay he shows that in a voting system it is possible to have a majority preferring option A over B, another majority preferring option B over C, while yet another majority prefers C over A; no option is dominant, which is known as Condorcet’s paradox. Another influential work was ‘Historical Picture of the Progress of the Human Mind’, written while in hiding and arguing that expanding knowledge, in both physical and social sciences, would lead to a more just, equitable and prosperous world.
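A minimal sketch of the paradox, with three made-up voters, is easy to check by brute force:

    from itertools import combinations

    # Three illustrative voters, each listing options from most to least preferred.
    ballots = [("A", "B", "C"),
               ("B", "C", "A"),
               ("C", "A", "B")]

    def prefers(ballot, x, y):
        """True if this voter ranks x above y."""
        return ballot.index(x) < ballot.index(y)

    for x, y in combinations("ABC", 2):
        x_votes = sum(prefers(b, x, y) for b in ballots)
        winner, loser = (x, y) if x_votes > len(ballots) / 2 else (y, x)
        print(f"a majority prefers {winner} over {loser}")

    # The three majorities cycle (A beats B, B beats C, C beats A), so no
    # option is dominant: Condorcet's paradox.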

In relation to the Petersburg game, Condorcet starts off by making a trivial, but none the less important, observation. According to Huygens, if you play a game where you win 10 francs on the toss of a head and lose 10 francs on the toss of a tail, the mathematical expectation is that you will win (lose) nothing. But the reality is that you would win 10, or lose 10, francs. Condorcet realised that the mathematical expectation gave the price of the Petersburg game over the long run, in fact a very long run that would accommodate an infinite number of games, each game having the potential to last an infinite number of tosses.

Having made this observation Condorcet put more structure on the problem: say the number of tosses was limited by the potential size of the winning pot. According to the philosopher and science historian Gérard Jorland, Condorcet then solved the problem by thinking “of the game as a trade off between certainty and uncertainty” and established that its value was a function of the maximum number of coin tosses possible [10, p 169] - the value of the game was only infinite if there could be an infinite number of tosses.
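A quick calculation shows why the bound matters (assuming the usual convention that the prize doubles with every toss, so a first head on the k-th toss pays 2^k): truncating the game at n tosses gives

\[ \mathbb{E}[\text{payoff}] \;=\; \sum_{k=1}^{n} \frac{1}{2^{k}}\, 2^{k} \;=\; n, \]

a finite value that grows with the maximum number of tosses the banker's wealth can support, and which only becomes infinite when n is allowed to be infinite.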

According to Jorland, the Petersburg problem “would have most likely faded away” had Daniel Bernoulli’s treatment of it not been endorsed by the man sometimes referred to as the ‘Newton of France’, Pierre-Simon Laplace. Laplace was born, in 1749, into a comfortable household in Normandy; the family were farmers but his father was also a small-scale cider merchant. He enrolled as a theology undergraduate at the University of Caen when he was 16, but left for Paris in 1768, without a degree but with a letter of introduction to d’Alembert. D’Alembert quickly recognised Laplace’s skills, and as well as taking responsibility for his mathematical studies secured him a teaching position at the École Militaire, where he taught the young Napoleon Bonaparte in 1785. Laplace’s early work was related to calculus, and by 1773 he had presented 13 papers to the Académie Royal des Sciences. Despite this productivity Laplace had failed twice, in 1771 and 1772, to be elected to the Académie, prompting d’Alembert to write to Lagrange, who was the mathematical director at the Berlin Academy of Science, asking whether it would be possible to get Laplace elected to the Berlin Academy and a job found for him in Germany. However, before Lagrange could reply, Condorcet, who was Secretary at the Académie, pulled some strings and Laplace was admitted to the centre of French science in 1773.

Laplace’s reputation is built on two pairs of mathematical texts: ‘Celestial Mechanics’ with ‘The System of the World’ (1796), and ‘Analytic Probability Theory’ (1812) with ‘Probability Theory’ (1819). The first book in each pair was a technical, mathematical description of the theory while the second book in each pair was a description for general audiences. ‘Celestial Mechanics’ is now regarded as the culmination of Newtonian physics, while in ‘Analytic Probability Theory’ Laplace closed the discussion on the Petersburg game. Laplace adopted Daniel Bernoulli’s approach, re-stating his three results as [10, pp 172–176]
  1. A mathematically fair game is always a losing game under ‘moral expectation’ (utility theory).
  2. There is always an advantage in dividing risks (diversification).
  3. There may be an advantage to insure.
Laplace resolved the paradox that ‘moral expectation’ differed from ‘mathematical expectation’ by showing that if games could be repeated infinitely many times, or risks divided into infinitesimally small packages, then ‘moral expectation’ equalled ‘mathematical expectation’.
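A worked example of Bernoulli's 'moral expectation' (again assuming the standard doubling prize) shows how the paradox dissolves under logarithmic utility:

\[ \mathbb{E}[\log(\text{payoff})] \;=\; \sum_{k=1}^{\infty} \frac{1}{2^{k}} \log\!\big(2^{k}\big) \;=\; \log 2 \sum_{k=1}^{\infty} \frac{k}{2^{k}} \;=\; 2\log 2 \;=\; \log 4, \]

so a player valuing money logarithmically would pay only about 4 ducats for the unbounded game, even though its mathematical expectation is infinite; Laplace's contribution was to show that repetition and subdivision bring the two expectations back together.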

Laplace’s mentor, Condorcet, believed that nature followed constant laws and that these could be deduced by observation: “The constant experience that facts conform to these principles is our sole reason for believing them” [5, quoted on p 191]. Laplace is closely associated with this idea, that of ‘causal determinism’, which is encapsulated in his ‘Philosophical Essay on Probabilities’ (1814),
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
and he goes on to say that
[and we owe] to the weakness of the human spirit [i.e. it is not as intelligent as the ‘intellect’] one of the most delicate and most ingenious theories of Mathematics, which is the science of chance or probability
For Laplace, the roll of a die is not random: given precise information of the position, orientation and velocity of the die when it left the cup, the result of the roll was perfectly predictable [11, p 65]. At the heart of Laplace’s determinism was knowledge, and probability was a measure of ignorance, not of ‘chance’. It is in this respect that he is close to James Bernoulli [8, p 11], but, as a product of the Enlightenment, Bernoulli’s God is replaced by ‘an intellect’, Laplace’s demon. The position of Laplace and Bernoulli differs significantly from that of the ‘Ancients’, who distinguished between the predictable (eclipses), the foreseeable (the weather) and the random (stumbling on a treasure).


A persistent problem with determinism is that it can lead to a collapse in moral responsibility. In the syllogistic tradition, starting with the premise that humans have no free will it is easy to come to the conclusion that anything goes. For example,
P1. Actions are either pre-determined or random.
P2. If an action is pre-determined, the entity who performed the action is not morally responsible.
P3. If an action is random, the entity who performed the action is not morally responsible.
C. No one is morally responsible for their actions.

This was not simply a philosophical issue: in the mid-nineteenth century there was real concern that allowing steam boiler makers, for example, to insure themselves against the deadly explosion of their products would “undermine the very virtues of foresight and responsibility”. Removing risk seemed to remove people’s sense of responsibility [5, p 188].

The issue is that, for the syllogistic method to come up with an answer that most people would be comfortable with, we need to include morality as a premise, rather than looking for it as a conclusion. Doing this, we can change the argument to
P1. People should be held morally responsible for their actions.
P2. If someone (e.g. a child) cannot foresee the consequences of their actions they cannot be held morally responsible for their actions.
C. Moral responsibility requires that there be foresight.

One of the consequences of the Enlightenment was a belief that in order to be ‘morally responsible’, people needed to have a degree of foresight, which could only be obtained through knowledge, or science. Today, this can be seen as the fundamental purpose of science: to enable people to take responsibility for their actions, whether related to the safety of industry or personal diet. This was reflected in Wilhelm von Humboldt’s view that education should turn ‘children into people’, individuals capable of participating in the polis/civitas, rather than ‘cobbler’s sons into cobblers’, in contrast to Francis Bacon’s utilitarian view that ‘knowledge is power’.

The development of probability in the eighteenth century had been motivated by the view that while absolute certainty was beyond human grasp, mathematics, on which the Scientific Revolution had been based, might be a way of discerning regularity out of uncertainty [5, pp xi–xvi]. In this vein, the late-eighteenth century mathematicians regarded probability as a way of turning rationality into an algorithm, which could then be distributed to everyone to help them be more responsible, to become l’homme éclairé, the clear-thinking, rational, Enlightenment ideal [5, pp 108–111].

The tangible product was Gauss’s (nineteenth-century) approach to dealing with astronomical errors, which proved so invaluable in the physical sciences that it was adopted in the social sciences, in the field of social physics. Social physics was invented by the Belgian astronomer Adolphe Quetelet, who applied Gauss’s theories to human behaviour in his 1835 work ‘On Man and the Development of his Faculties, or Essay on Social Physics’. The term ‘social physics’ had been coined by the French philosopher Auguste Comte who, as part of his overall philosophy of science, believed that humans would first develop an understanding of the ‘scientific method’ through the physical sciences, which they would then be able to apply to the harder and more important ‘social sciences’. When Comte realised that Quetelet had adopted his term ‘social physics’, Comte adopted the now more familiar term, sociology, for the science of society.

An explosion of data collection after 1820 enabled a number of people to observe that certain ‘social’ statistics, such as murder, suicide and marriage rates, were remarkably stable. Quetelet explained this in terms of Gaussian errors. L’homme moyen, ‘the average man’, was driven by ‘social forces’, such as egoism, social conventions, and so on, which resulted in penchants for marriage, murder and suicide, which were reflected as the means of social statistics. Deviations from the means were a consequence of the social equivalent of accidental or inconstant physical phenomena, such as friction or atmospheric variation [11, Section 5], [15, pp 108–110].

These theories were popular with the public. France, like the rest of Europe, had been in political turmoil between the fall of Napoleon Bonaparte and the creation of the Second Empire in 1852, following the 1848 Revolution (setting the prototype for the turmoil between the 1920s and 1970s). During the 1820s there was an explosion in the number of newspapers published in Paris, and these papers fed the middle classes a diet of social statistics that acted as a barometer to predict unrest [15, p 106]. The penchant for murder implied that murder was a consequence of society; the forces that created the penchant were responsible, and so the individual murderer could be seen as an ‘innocent’ victim of the ills of society.

Despite the public popularity of ‘social physics’, Quetelet’s l’homme moyen was not popular with many academics. Quetelet had based this theory on an initial observation that physical traits, such as heights, seemed to be Normally distributed. The problem was that, apart from the fact that heights are not Normally distributed (the incidence of giants and dwarfs in the real population exceeds the number expected from a Normal distribution of heights; Quetelet was confusing ‘looks like’ with ‘is’), murders and suicides are ‘rare’, so there can be little confidence in the statistics. Many experts of the time, including Comte [3, p 39], rejected Quetelet’s theories on the basis that they did not believe that ‘laws of society’ could be identified simply by examining statistics and observing correlations between data ([8, pp 47–48], [11, p 76], [15, p 112]), and even Quetelet, later in life, counselled against over-reliance on statistics [16].

Beyond these practical criticisms there were philosophical objections. L’homme moyen was a ‘statistical’ composite of all society, governed by Condorcet’s universal and constant laws; he was nothing like the Enlightenment’s l’homme éclairé, the person who applied rational thinking to guide their action, thinking that was guided by science and reason and not statistics. The decline of Quetelet’s theories in Europe coincides not just with the political stability of the Second Empire, but with a change in attitude. The poor were no longer unfortunate as a consequence of their appalling living conditions, but through their individual failings, like drunkenness or laziness. The second half of the nineteenth century was about ‘self-help’, not the causality of ‘social physics’ [15, p 113].

However, Quetelet’s quantitative methods would take hold in Britain. In 1850 Sir John Herschel, one of the key figures of the Age of Wonder, reviewed Quetelet’s works and concluded that the Law of Errors was fundamental to science [2, pp 184–185]. In 1857 Henry Thomas Buckle published the first part of a History of Civilisation in England, an explanation of the superiority of Europe, and England in particular, based on Quetelet’s social physics. Francis Galton combined the work of his half-cousin, Charles Darwin, with that of Quetelet to come up with a statistical model of Hereditary Genius in the 1870s, and in the process introduced the concepts of ‘reversion to the mean’ and statistical correlation. At the start of the twentieth century Galton’s statistical approach was championed by Karl Pearson, who said that the questions of evolution and genetics were “in the first place statistical, in the second place statistical and only in the third place biological” [8, p 147]; the aim of biologists following this approach was to “seek hidden causes or true values” behind the observed data processed with statistical tools [6, p 7].

In the late nineteenth century the approach of these, predominantly British, biometricians collided pretty much head on with that of the monk Gregor Mendel. In the 1860s Mendel looked at the mechanism of breeding hybrids and essentially developed a theory of how variation appears in living organisms by experimenting on individual pea plants in his garden, rather than referring to population statistics. Mendel was interested in how a microscopic effect, two pea plants producing a hybrid, manifests itself at the macroscopic level in statistical regularities; this is essentially a probabilistic, mathematical approach, going from the particular to the general [9, pp 54–56].

The debate in biology between the biometric and Mendelian approaches was one about how to improve society through the process of heredity. If solved correctly, the social engineers of the late nineteenth century believed they could breed out laziness and drunkenness through the ‘science’ of eugenics. Could the secrets of heredity be discovered by observing statistical correlations, or did the solution lie in identifying the biological law [8, pp 145–152]? The biometric and Mendelian approaches were eventually reconciled by the “statistically sophisticated Mendelian”, Ronald Aylmer Fisher [8, p 149], in his 1930 book The Genetical Theory of Natural Selection; Anders Hald has described Fisher as “a genius who almost single-handedly created the foundations for modern statistical science”.

Lots of people have suggested I read Alasdair MacIntyre’s After Virtue, which I recently attempted, but the book is ‘thick’ and I have resorted to Reading Alasdair MacIntyre’s After Virtue as a gentle introduction. MacIntyre’s thesis is that sometime in the eighteenth/nineteenth centuries Western philosophy lost its ability to address moral issues. Essentially, modern moral philosophy is a Nietzschean battle of wills, with opposing sides in a debate employing scientific authority and raw emotion to justify pre-determined political objectives (think climate science). (Lutz claims that) MacIntyre associates the origins of this failure with Ockham’s Nominalism and the influence of eighteenth-century Augustinian philosophers (Locke, Hume, Jansenists, etc.). This is immediately interesting to me as I think both themes are important, for different reasons.

What has particularly struck me is that Hume is presented as arguing that an individual can calculate what is in their best interests and hence choose a course of action to take, perhaps outside what is the moral norm. I need to explore this further, because it relates Hume’s ideas about individual autonomy to a belief in causal determinism: that the agent can rationally foresee the future. Also, Hume argued that reason was subservient to the passions, that is, there are animal behaviours that will inevitably over-rule reason. I see this theme featuring in eugenics, sociobiology and even in the collection of articles Moral Markets, and it is not a proven phenomenon.

Oskar Morgenstern, and the Austrian economists more generally, was concerned with this problem. When Morgenstern was twelve, in 1914, his family moved to Vienna and, in his own words, he was “deflected to social sciences by war [the First World War]; inflation and revolution in the streets, home difficulties but not by deep intellectual attraction” [14, p 128]. Morgenstern studied economics at Vienna, then dominated by Ludwig von Mises, gaining his doctorate in 1925. He then travelled to Cambridge and the United States, returning to Vienna for his habilitation; his 1928 thesis was entitled Wirtschaftsprognose (‘Economic Prediction’) and, in the Austrian tradition, rejected the use of mathematics in favour of a philosophical consideration of the difficulties of forecasting in economics when other agents are acting in the economy [12, p 51]. Following his habilitation, Morgenstern was appointed a lecturer at Vienna and then the director of the Vienna Institute of Business Cycle Research.

Unlike many of his economic colleagues, Morgenstern became involved in the Vienna Circle of mathematicians and philosophers, never as an active participant but as a bridge between them and economics [12, p 52]. In 1935 he presented a paper to the mathematicians associated with the Vienna Circle on the problem of perfect foresight. Morgenstern often referred to an episode in Conan Doyle’s story ‘The Final Problem’, which describes the ‘final’ intellectual battle between Sherlock Holmes and the fallen mathematician Professor Moriarty, which results in them both apparently falling to their deaths in the Reichenbach Falls. At the start of the adventure Holmes and Watson are trying to flee to the Continent, pursued by the murderous Moriarty. Watson and Holmes are sat on the train to the Channel ports:
[Watson]“As this is an express and the boat runs in connection with it, I should think we have shaken [Moriarty] off very effectively.”
“My dear Watson, you evidently did not realise my meaning when I said that this man may be taken as being quite on the same intellectual plane as myself. You do not imagine that if I were the pursuer I should allow myself to be baffled by so slight an obstacle. Why, then, should you think so lowly of him?”
For Morgenstern this captured the fundamental problem of economics. While Frank Knight had earlier realised that profit was impossible without unquantifiable uncertainty, Morgenstern came to think that perfect foresight was pointless in economics. If the world was full of Laplacian demons making rational decisions then everything would, in effect, grind to a halt, with the economy reaching its equilibrium where it would remain forever. Morgenstern writes
always there is exhibited an endless chain of reciprocally conjectural reactions and counter-reactions. This chain can never be broken by an act of knowledge but always through an arbitrary act — a resolution. [14, quoting Morgenstern on p 129]
The imp of the perverse would confound Laplace’s demon by doing something unexpected, irrational and inspired. It was this feature of the social science that makes it fundamentally different from the natural sciences, since physics is never perverse.
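A toy version of the Holmes-Moriarty pursuit (payoffs invented for illustration, following the spirit of Morgenstern's example rather than any source's numbers) makes the 'endless chain' concrete: whichever pure choice either of them settles on, the other has an incentive to switch, so reasoning alone never terminates and only an arbitrary, randomised act breaks the cycle.

    # Toy Holmes-Moriarty pursuit: Holmes chooses where to leave the train,
    # Moriarty chooses where to intercept.  Payoffs (to Holmes) are invented.
    payoff_to_holmes = {
        ("Dover", "Dover"): -1,            # caught at Dover
        ("Dover", "Canterbury"): 2,        # escapes to the Continent
        ("Canterbury", "Dover"): 1,        # escapes, but the journey is delayed
        ("Canterbury", "Canterbury"): -1,  # caught at Canterbury
    }
    stations = ("Dover", "Canterbury")

    # Check every pair of pure choices: would either player rather switch?
    for h in stations:
        for m in stations:
            holmes_switches = any(payoff_to_holmes[(h2, m)] > payoff_to_holmes[(h, m)]
                                  for h2 in stations)
            moriarty_switches = any(payoff_to_holmes[(h, m2)] < payoff_to_holmes[(h, m)]
                                    for m2 in stations)
            print(h, "vs", m, "- stable?", not (holmes_switches or moriarty_switches))
    # Every line prints 'stable? False': there is no pure-strategy resolution,
    # only a mixed (randomised) one.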

MacIntyre’s alternative to the bun-fighting that typifies modern moral philosophy is a return to an Aristotelian approach characterised by public deliberation, considering not only the means to predetermined ends but also what ends are in the ‘public good’. This, at first sight, appears to be a Pragmatic argument. However, I think there must be a difference because of MacIntyre’s apparent criticism of Ockham’s Nominalism. I think this might be important; specifically, I have always associated Ockham’s Nominalism with his ‘doubt’ in the human ability to understand God, i.e. it is related to uncertainty. My initial thought is that at the root of MacIntyre, as at the root of Aristotle, is Realism.

What interests me is that the idea that reciprocity is at the heart of Financial Mathematics is unorthodox today, but it would have been orthodox in the eighteenth century. Academic debate in the eighteenth century centred on (semi) public deliberation, at the Academies, salons or the monthly meetings of the Lunar Men. This approach disappears in the first half of the nineteenth century; individual scientists retreat to the lab and perform magical experiments for the public, or embark on journeys of exploration, enduring a process that moulds their genius: Alexander von Humboldt, Darwin, Lewis & Clark. Around the same time, Laplace's causal determinism manifests itself in social science as social physics, with people being 'governed' by the Normal distribution. Just as, today, businesses apply Bayesian statistics to make decisions for us, taking away the requirement to deliberate and the ability to act arbitrarily.


References

[1] M. Blaug. Economic Theory in Retrospect. Heinemann, second edition, 1968.
[2] S. G. Brush. The Kind of motion we call heat: A history of the kinetic theory of gases in the 19th century. North-Holland, 1976.
[3] I. B. Cohen. Revolutions, Scientific and Probabilistic. In L. Kruger, L. J. Daston, and M. Heidelberger, editors, The Probabilistic Revolution: Volume 1: Ideas in History. MIT Press, 1987.
[4] L. J. Daston. D’Alembert’s critique of probability theory. Historia Mathematica, 6:259–279, 1979.
[5] L. J. Daston. Classical Probability in the Enlightenment. Princeton University Press, 1998.
[6] G. Gigerenzer. The Probabilistic Revolution in Psychology - an Overview. In L. Kruger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 2: Ideas in the Sciences. MIT Press, 1987.
[7] G. Gigerenzer. Probabilistic Thinking and the Fight against Subjectivity. In L. Kruger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution:Volume 2: Ideas in the Sciences. MIT Press, 1987.
[8] G. Gigerenzer. The Empire of Chance: how probability changed science and everyday life. Cambridge University Press, 1989.
[9] R. M. Henig. A Monk and Two Peas: The Story of Gregor Mendel and the Discovery of Genetics. Phoenix, 2001.
[10] G. Jorland. The Saint Petersburg Paradox 1713 – 1937. In L. Kruger, L. J. Daston, M. Heidelberger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 1: Ideas in History. MIT Press, 1987.
[11] L. Kruger. The Slow Rise of Probabilism. In L. Kruger, L. J. Daston, and M. Heidelberger, editors, The Probabilistic Revolution: Volume 1: Ideas in History. MIT Press, 1987.
[12] R. J. Leonard. Creating a context for game theory. In E. R. Weintraub, editor, Toward a History of Game Theory, pages 29–76. Duke University Press, 1992.
[13] P. Mirowski. More Heat than Light: Economics as Social Physics, Physics as Nature’s Economics. Cambridge University Press, 1989.
[14] P. Mirowski. What were von Neumann and Morgenstern trying to accomplish? In E. R. Weintraub, editor, Toward a History of Game Theory, pages 113–150. Duke University Press, 1992.
[15] A. Oberschall. The Two Empirical Roots of Social Theory and the Probability Revolution. In L. Kruger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 2: Ideas in the Sciences. MIT Press, 1987.
[16] O. B. Sheynin. Lies, damned lies and statistics. NTM Zeitschrift für Geschichte der Wissenschaften, Technik und Medizin, 11(3):191–193, 2001.