Tuesday 3 December 2013

The rational man, the average man and the replacement of deliberation by will


A few weeks ago I changed my broadband supplier. Things were a bit ropey the first weekend and my son had problems watching Lego Star Wars videos, and he generously shared his frustration with me. The following week I had a customer service call, and when I gave the service 3 out of 10, the person on the line said “That’s great”. When I queried why 3 out of 10 was great, the answer I got could be interpreted as meaning that a score of 3/10 helped the satisfaction scores fit a Normal distribution. ‘Big data’, Bayesian inference and suchlike are big themes in the contemporary Zeitgeist, and I sometimes think there is an attitude that if a distribution is not Normal it is pathological, even in the case of customer satisfaction. This is an issue for me, as someone who believes that, in science at least, dependence is far more important than the independence that creates the link between Normality and the Central Limit Theorem.

This piece is about how the Romantics’ ‘average man’, the personification of the Law of Large Numbers and the Central Limit Theorem, replaced the Enlightenment’s ‘rational man’, and some thoughts on the consequences. The post meanders from the Petersburg game, through Enlightenment education, to Laplacian determinism, social physics, biometrics, MacIntyre’s Virtue Ethics and Austrian economics.

Jean Le Rond d’Alembert is famous in mathematics for solving the problem of the vibrating violin string, deriving the fundamental differential ‘wave equation’ in the process. He was abandoned as a baby by his mother outside the Parisian church of St Jean Baptiste le Rond in 1717 and adopted by a relatively poor family. It turned out that d’Alembert’s natural father was a chevalier and a distinguished army officer, Louis-Camus Destouches, while his mother was a famous writer and socialite, Claudine Guérin de Tencin, Baroness of Saint-Martin de Ré. D’Alembert’s adoptive family was provided with money by his natural parents for Jean to study, and he became a lawyer when he was twenty-one. He taught himself mathematics, being admitted to the Académie Royale des Sciences in 1741 and developing a reputation as one of Europe’s leading mathematicians by his mid-thirties.

D’Alembert also became a well-known figure in Parisian society, living ‘unconventionally’ with a famous salon-owner, Julie de Lespinasse, and working with Denis Diderot on the French Enlightenment’s Encyclopédie, which paved the way for the French Revolution. D’Alembert was sympathetic to the Jansenists, the sect Pascal was associated with, and played a role in the expulsion of the Jesuits from France in the early 1760s. Despite becoming the Secretary of the Académie Royale des Sciences, and so the most influential scientist in France, d’Alembert was buried in an unmarked grave, on account of his atheism, when he died in 1783.

D’Alembert lived at the height of the debates around the Petersburg game and took a rather extreme view about probability: since perfect symmetry is impossible, probability can never be objective. Because science was supposed to be objective during the Enlightenment [7, p 11], the apparent subjectivity of probability led d’Alembert to be sceptical about the whole field [4]. In fact he was possibly the first person to criticise probability for ignoring psychology, when he commented that a paper by Daniel Bernoulli advocating smallpox inoculation, by calculating the gain in life expectancy, ignored the fact that ‘reasonable men’ might well trade the long-term risk of smallpox for the short-term risk associated with inoculation [8, p 18].

Despite this scepticism, d’Alembert did provide some insight into the Petersburg game. For him, the apparent paradox arose because the game could continue forever, for an infinite number of coin tosses. It was absurd to believe the game could offer an infinite payoff: at some point time and money would run out, and d’Alembert suggested that the game should end when the person, or ‘casino’, putting up the stake was bankrupted.

This line of thought was developed by the Marquis de Condorcet. Condorcet was born legitimately into the nobility in 1743, and so was twenty-six years younger than the less blessed d’Alembert. In his early twenties, after a good education, he wrote a treatise on integration, and was elected to the Académie Royale des Sciences in 1769. After publishing another work on integration he met Louis XVI’s finance minister, Turgot, and, following in Newton’s footsteps, was appointed Inspector General of the Paris (French) Mint in 1774. In spite of being a member of the Ancien Régime, Condorcet had liberal views, supporting women’s rights and opposing the Church and slavery, and when the Revolution started he was elected to the Revolutionary government. However, this was not a good position to be in when the Terror began, and Condorcet went into hiding in October 1793. Fearing his political opponents were on to him, he fled Paris in March 1794, but was almost immediately captured and imprisoned, dying in unexplained circumstances at the end of March, four months before the end of the Terror.

Condorcet is important in linking mathematics to the social sciences, possibly through the influence of his boss, Turgot. During the height of Louis XIV’s reign the dominant economic theory was mercantilism, which can be summed up as the belief that wealth equated to gold. Around the time of the Mississippi Bubble and, later, the Seven Years War, a new theory emerged in France in which wealth was determined not by coin but by what a country produces, in particular its agricultural production. These ideas were developed in the mid-eighteenth century as physiocracy (‘rule by nature’), particularly by Turgot and Quesnay, who had achieved fame as the royal physician. Quesnay, in his ‘Economic Table’, saw the economy as a system whereby the surplus of agricultural production flows through society, enriching it.

Physiocracy was popular with the aristocrats of the Ancien Régime because it argued that all wealth originated from the land, and so the landowning class was central to the economy, with merchants being mere facilitators of the process. While the Scotsman Adam Smith is often cited as the first modern economist, he was in fact developing, in a nonetheless revolutionary way, the ideas of the French physiocrats [1, p 61], [13, p 165]. One of Smith’s important contributions was that it was not land, but labour, that was at the root of wealth.

Despite working at the Mint, Condorcet did not produce anything of significance in economics; his most important work, ‘Essay on the Application of Analysis to the Probability of Majority Decisions’, was in social science. In the essay he shows that in a voting system it is possible to have a majority preferring option A over B, another majority preferring option B over C, and another majority preferring C over A; no option is dominant, a result known as Condorcet’s paradox. Another influential work was ‘Historical Picture of the Progress of the Human Mind’, written while in hiding, which argued that expanding knowledge, in both the physical and social sciences, would lead to a more just, equitable and prosperous world.
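The cycle is easy to reproduce. The sketch below uses three hypothetical voters with made-up preference orderings (an illustration, not an example taken from Condorcet’s essay) and counts the pairwise majorities:

```python
from itertools import combinations

# Three hypothetical voters, each listing options from most to least preferred.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def prefers(x, y):
    """Number of ballots that rank option x above option y."""
    return sum(b.index(x) < b.index(y) for b in ballots)

for x, y in combinations("ABC", 2):
    winner = x if prefers(x, y) > prefers(y, x) else y
    print(f"{x} vs {y}: majority prefers {winner}")
# A beats B, B beats C, yet C beats A: the majorities cycle and no option dominates.
```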

In relation to the Petersburg game, Condorcet starts off by making a trivial, but nonetheless important, observation. According to Huygens, if you play a game where you will win 10 francs on the toss of a head and lose 10 francs on the toss of a tail, the mathematical expectation is that you will win (lose) nothing. But the reality is that you would win 10, or lose 10, francs. Condorcet realised that the mathematical expectation gave the price of the Petersburg game over the long run, in fact a very long run that would accommodate an infinite number of games, each game having the potential to last an infinite number of tosses.

Having made this observation, Condorcet put more structure on the problem: say the number of tosses was limited by the potential size of the winning pot. According to the philosopher and science historian Gérard Jorland, Condorcet then solved the problem by thinking “of the game as a trade off between certainty and uncertainty” and established that its value was a function of the maximum number of coin tosses possible [10, p 169]; the value of the game was only infinite if there could be an infinite number of tosses.
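A quick calculation makes the point. The sketch below assumes the standard formulation, in which the pot is 2^(k-1) coins if the first head appears on toss k, and (a further assumption of mine, not Condorcet’s) that the player collects the final pot if the cap is reached without a head:

```python
def truncated_value(max_tosses):
    """Mathematical expectation of the Petersburg game when play stops after
    max_tosses tosses at the latest.  Payoff is 2**(k-1) if the first head
    appears on toss k; if every permitted toss comes up tails we assume the
    player still collects the final pot of 2**(max_tosses - 1)."""
    value = sum(0.5 ** k * 2 ** (k - 1) for k in range(1, max_tosses + 1))
    value += 0.5 ** max_tosses * 2 ** (max_tosses - 1)
    return value

for n in (10, 20, 40):
    print(f"capped at {n:2d} tosses: fair price {truncated_value(n):.1f} coins")
# The price grows only linearly with the cap (roughly n/2 + 1/2), so it is
# infinite only if an infinite number of tosses is allowed.
```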

According to Jorland, the Petersburg problem “would have most likely faded away” had Daniel Bernoulli’s treatment of it not been endorsed by the man sometimes referred to as the ‘Newton of France’, Pierre-Simon Laplace. Laplace was born, in 1749, into a comfortable household in Normandy; the family were farmers, but his father was also a small-scale cider merchant. He enrolled as a theology undergraduate at the University of Caen when he was 16, but left for Paris in 1768, without a degree but with a letter of introduction to d’Alembert. D’Alembert quickly recognised Laplace’s skills, and as well as taking responsibility for his mathematical studies secured him a teaching position at the École Militaire, where he examined the young Napoleon Bonaparte in 1785. Laplace’s early work was related to calculus, and by 1773 he had presented 13 papers to the Académie Royale des Sciences. Despite this productivity, Laplace had failed twice, in 1771 and 1772, to be elected to the Académie, prompting d’Alembert to write to Lagrange, the mathematical director at the Berlin Academy of Science, asking whether it would be possible to get Laplace elected to the Berlin Academy and a job found for him in Germany. However, before Lagrange could reply, Condorcet, who was Secretary of the Académie, pulled some strings and Laplace was admitted to the centre of French science in 1773.

Laplace’s reputation is built on two pairs of mathematical texts: ‘Celestial Mechanics’ with ‘The System of the World’ (1796), and ‘Analytic Probability Theory’ (1812) with ‘Probability Theory’ (1819). The first book in each pair was a technical, mathematical description of the theory, while the second was a description for general audiences. ‘Celestial Mechanics’ is now regarded as the culmination of Newtonian physics, while in ‘Analytic Probability Theory’ Laplace closed the discussion on the Petersburg game. Laplace adopted Daniel Bernoulli’s approach, re-stating his three results as [10, pp 172–176]
  1. A mathematically fair game is always a losing game under ‘moral expectation’ (utility theory).
  2. There is always an advantage in dividing risks (diversification).
  3. There may be an advantage to insure.
Laplace solved the paradox that ‘moral expectation’ differed from ‘mathematical expectation’ by showing that if games could be repeated infinitely many times, or risks divided into infinitesimally small packages, then ‘moral expectation’ equalled ‘mathematical expectation’.
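To make ‘moral expectation’ concrete, here is a minimal sketch in Daniel Bernoulli’s log-utility sense; the payoff rule, truncation point and wealth levels are my assumptions for illustration, not Laplace’s own calculation:

```python
import math

def moral_price(wealth, max_tosses=200):
    """'Moral expectation' of the Petersburg game in the log-utility sense: the
    sure sum whose log-utility equals the expected log-utility of playing,
    given the player's existing wealth.  Payoff is 2**(k-1) on a first head at
    toss k; the series is truncated at max_tosses, whose tail is negligible."""
    expected_log = sum(
        0.5 ** k * math.log(wealth + 2 ** (k - 1)) for k in range(1, max_tosses + 1)
    )
    return math.exp(expected_log) - wealth

for w in (0, 100, 10_000):
    print(f"wealth {w:6d}: moral price of the game ~ {moral_price(w):.2f} coins")
# The mathematical expectation is infinite, but the 'moral' price is a handful
# of coins, rising only slowly with the player's wealth.
```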

Laplace’s mentor, Condorcet, believed that nature followed constant laws and that these could be deduced by observation: “The constant experience that facts conform to these principles is our sole reason for believing them” [5, quoted on p 191]. Laplace is closely associated with this idea, that of ‘causal determinism’, which is encapsulated in his ‘Philosophical Essay on Probabilities’ (1814),
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
and he goes on to say that
[and we owe] to the weakness of the human spirit [i.e. it is not as intelligent as the ‘intellect’] one of the most delicate and most ingenious theories of Mathematics, which is the science of chance or probability
For Laplace, the roll of a die is not random: given precise information on the position, orientation and velocity of the die as it leaves the cup, the result of the roll is perfectly predictable [11, p 65]. At the heart of Laplace’s determinism was knowledge, and probability was a measure of ignorance, not of ‘chance’. It is in this respect that he is close to James Bernoulli [8, p 11], but, as a product of the Enlightenment, Bernoulli’s God is replaced by ‘an intellect’, Laplace’s demon. The positions of Laplace and Bernoulli differ significantly from that of the ‘Ancients’, who distinguished between the predictable (eclipses), the foreseeable (the weather) and the random (stumbling on a treasure).


A persistent problem with determinism is that it can lead to a collapse in moral responsibility. In the syllogistic tradition, starting with the premise that humans have no free will, it is easy to come to the conclusion that anything goes. For example,
  P1. Actions are either pre-determined or random.
  P2. If an action is pre-determined, the entity who performed the action is not morally responsible.
  P3. If an action is random, the entity who performed the action is not morally responsible.
  C. No one is morally responsible for their actions.

This was not simply a philosophical issue: in the mid-nineteenth century there was real concern that allowing steam boiler makers, for example, to insure themselves against the deadly explosion of their products would “undermine the very virtues of foresight and responsibility”. Removing risk seemed to remove people’s sense of responsibility [5, p 188].

The issue is that, for the syllogistic method to come up with an answer that most people would be comfortable with, we need to include morality as a premise, rather than looking for it as a conclusion. Doing this, we can change the argument to
  P1. People should be held morally responsible for their actions.
  P2. If someone (e.g. a child) cannot foresee the consequences of their actions, they cannot be held morally responsible for their actions.
  C. Moral responsibility requires that there be foresight.

One of the consequences of the Enlightenment was a belief that, in order to be ‘morally responsible’, people needed to have a degree of foresight, which could only be obtained through knowledge, or science. Today, this can be seen as the fundamental purpose of science: to enable people to take responsibility for their actions, whether related to the safety of industry or to personal diet. This was reflected in Wilhelm von Humboldt’s view that education should turn ‘children into people’, individuals capable of participating in the polis/civitas, rather than ‘cobbler’s sons into cobblers’, the latter reflecting Francis Bacon’s utilitarian view that ‘knowledge is power’.

The development of probability in the eighteenth century had been motivated by the view that while absolute certainty was beyond human grasp, mathematics, on which the Scientific Revolution had been based, might be a way of discerning regularity out of uncertainty [5, pp xi–xvi]. In this vein, the late-eighteenth-century mathematicians regarded probability as a way of turning rationality into an algorithm, which could then be distributed to everyone to help them become more responsible, to become l’homme éclairé, the clear-thinking, rational Enlightenment ideal [5, pp 108–111].

The tangible product was Gauss’s (nineteenth-century) approach to dealing with astronomical errors, which proved so invaluable in the physical sciences that it was adopted in the social sciences, in the field of social physics. Social physics was invented by the Belgian astronomer Adolphe Quetelet, who applied Gauss’s theories to human behaviour in his 1835 work ‘On Man and the Development of his Faculties, or Essay on Social Physics’. The term ‘social physics’ had been coined by the French philosopher Auguste Comte, who, as part of his overall philosophy of science, believed that humans would first develop an understanding of the ‘scientific method’ through the physical sciences, which they would then be able to apply to the harder and more important ‘social sciences’. When Comte realised that Quetelet had adopted his term ‘social physics’, Comte adopted the more familiar term ‘sociology’ for the science of society.

An explosion of data collection after 1820 enabled a number of people to observe that certain ‘social’ statistics, such as murder, suicide and marriage rates, were remarkably stable. Quetelet explained this in terms of Gaussian errors. L’homme moyen, ‘the average man’, was driven by ‘social forces’, such as egoism, social conventions, and so on, which resulted in penchants, for marriage, murder and suicide, which were reflected in the means of social statistics. Deviations from the means were a consequence of the social equivalent of accidental or inconstant physical phenomena, such as friction or atmospheric variation [11, Section 5], [15, pp 108–110].

These theories were popular with the public. France, like the rest of Europe, had been in political turmoil between the fall of Napoleon Bonaparte in 1815 and the creation of the Second Empire in 1852, following the 1848 Revolution (setting the prototype for the turmoil between the 1920s and 1970s). During the 1820s there was an explosion in the number of newspapers published in Paris, and these papers fed the middle classes a diet of social statistics that acted as a barometer to predict unrest [15, p 106]. The penchant for murder implied that murder was a consequence of society: the forces that created the penchant were responsible, and so the individual murderer could be seen as an ‘innocent’ victim of the ills of society.

Despite the public popularity of ‘social physics’, Quetelet’s l’homme moyen was not popular with many academics. Quetelet had based the theory on an initial observation that physical traits, such as heights, seemed to be Normally distributed. The first problem was that heights are not in fact Normally distributed: the incidence of giants and dwarfs in the real population exceeds the number expected under a Normal distribution, so Quetelet was confusing ‘looks like’ with ‘is’. The second was that, since murders and suicides are ‘rare’, there can be little confidence in the statistics, and many experts of the time, including Comte [3, p 39], rejected Quetelet’s theories on the basis that they did not believe that ‘laws of society’ could be identified simply by examining statistics and observing correlations between data ([8, pp 47–48], [11, p 76], [15, p 112]). Even Quetelet, later in life, counselled against over-reliance on statistics [16].

Beyond these practical criticisms there were philosophical objections. L’homme moyen was a ‘statistical’ composite of all society, governed by Condorcet’s universal and constant laws. L’homme moyen was nothing like the Enlightenment’s l’homme éclairé, the person who applied rational thinking, guided by science and reason and not statistics, to their actions. The decline of Quetelet’s theories in Europe coincides not just with the political stability of the Second Empire, but with a change in attitude. The poor were no longer unfortunate as a consequence of their appalling living conditions, but through their individual failings, like drunkenness or laziness. The second half of the nineteenth century was about ‘self-help’, not the causality of ‘social physics’ [15, p 113].

However, Quetelet’s quantitative methods would take hold in Britain. In 1850, Sir John Herschel, one of the key figures of the Age of Wonder, reviewed Quetelet’s works and concluded that the Law of Errors was fundamental to science [2, pp 184–185]. In 1857, Henry Thomas Buckle published the first part of a History of Civilisation in England, which was an explanation of the superiority of Europe, and England in particular, based on Quetelet’s social physics. Francis Galton combined the work of his half-cousin, Charles Darwin, with that of Quetelet to come up with a statistical model of Hereditary Genius in the 1870s, and in the process introduced the concepts of ‘reversion to the mean’ and statistical correlation. At the start of the twentieth century Galton’s statistical approach was championed by Karl Pearson, who said that the questions of evolution and genetics were “in the first place statistical, in the second place statistical and only in the third place biological” [8, p 147], and the aim of biologists following this approach was to “seek hidden causes or true values” behind the observed data processed with statistical tools [6, p 7].
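Galton’s ‘reversion to the mean’ is easy to see in a toy simulation. The numbers below (mean height, spread, degree of inheritance) are made up for illustration and are not Galton’s data:

```python
import random

random.seed(1)

# Toy model: a child's height is pulled partway back towards the population
# mean, plus independent variation.  All parameters are invented.
MEAN, SD, INHERITANCE = 175.0, 7.0, 0.6

parents = [random.gauss(MEAN, SD) for _ in range(100_000)]
children = [MEAN + INHERITANCE * (p - MEAN) + random.gauss(0, SD * 0.8) for p in parents]

# Look only at exceptionally tall parents (more than one SD above the mean).
tall = [(p, c) for p, c in zip(parents, children) if p > MEAN + SD]
avg_parent = sum(p for p, _ in tall) / len(tall)
avg_child = sum(c for _, c in tall) / len(tall)
print(f"tall parents average {avg_parent:.1f} cm, their children {avg_child:.1f} cm")
# The children of exceptional parents are, on average, closer to the population
# mean -- Galton's 'reversion to the mean'.
```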

In the late nineteenth century the approach of these, predominantly British, biometricians collided pretty much head-on with that of the monk Gregor Mendel. In the 1860s Mendel looked at the mechanism of breeding hybrids and essentially developed a theory of how variation appears in living organisms by experimenting on individual pea plants in his garden, rather than by referring to population statistics. Mendel was interested in how a microscopic effect, two pea plants producing a hybrid, manifests itself at the macroscopic level in statistical regularities; this is essentially a probabilistic, mathematical approach, going from the particular to the general [9, pp 54–56].
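A sketch of this move from the particular to the general, assuming the textbook monohybrid cross (a single gene with a dominant allele A and a recessive allele a) rather than anything specific in Mendel’s papers:

```python
import random
from collections import Counter

random.seed(2)

def cross(parent1="Aa", parent2="Aa"):
    """One offspring of a monohybrid cross: each parent passes on one allele,
    chosen at random (Mendel's law of segregation)."""
    return "".join(sorted(random.choice(parent1) + random.choice(parent2)))

offspring = Counter(cross() for _ in range(10_000))
dominant = offspring["AA"] + offspring["Aa"]
print(offspring, f"dominant : recessive ~ {dominant / offspring['aa']:.2f} : 1")
# The microscopic mechanism (random segregation of alleles) produces the
# macroscopic statistical regularity Mendel observed: roughly 3 : 1.
```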

The debate in biology between the biometric and Mendelian approaches was one about how to improve society through the process of heredity. If the problem were solved correctly, the social engineers of the late nineteenth century believed, they could breed out laziness and drunkenness through the ‘science’ of eugenics. Could the secrets of heredity be discovered by observing statistical correlations, or did the solution lie in identifying the biological law [8, pp 145–152]? The biometric and Mendelian approaches were eventually reconciled by the “statistically sophisticated Mendelian”, Ronald Aylmer Fisher [8, p 149], in his 1930 book The Genetical Theory of Natural Selection; Anders Hald has described Fisher as “a genius who almost single-handedly created the foundations for modern statistical science”.

Lots of people have suggested I read Alasdair MacIntyre’s After Virtue, which I recently attempted, but the book is ‘thick’ and I have resorted to Reading Alasdair MacIntyre’s After Virtue as a gentle introduction. MacIntyre’s thesis is that sometime in the eighteenth/nineteenth centuries Western philosophy lost its ability to address moral issues. Essentially, modern moral philosophy is a Nietzschean battle of wills, with opposing sides in a debate employing scientific authority and raw emotion to justify pre-determined political objectives (think climate science). (Lutz claims that) MacIntyre locates the origins of this failure in Ockham’s Nominalism and the influence of eighteenth-century Augustinian philosophers (Locke, Hume, Jansenists, etc.). This is immediately interesting to me as I think both themes are important, for different reasons.

What has particularly struck me is that Hume is presented as arguing that an individual can calculate what is in their best interests and hence choose a course of action to take, perhaps outside the moral norm. I need to explore this further, because it relates Hume’s ideas about individual autonomy to a belief in causal determinism: that the agent can rationally foresee the future. Also, Hume argued that reason was subservient to the passions, that is, there are animal behaviours that will inevitably over-rule reason. I see this theme featuring in eugenics, sociobiology and even in the collection of articles Moral Markets, and it is not a proven phenomenon.

Oskar Morgenstern, and the Austrian economists more generally, were concerned with this problem. When Morgenstern was twelve, in 1914, his family moved to Vienna and, in his own words, he was “deflected to social sciences by war [the First World War]; inflation and revolution in the streets, home difficulties but not by deep intellectual attraction” [14, p 128]. Morgenstern studied economics at Vienna, then dominated by Ludwig von Mises, gaining his doctorate in 1925. He then travelled to Cambridge and the United States, returning to Vienna for his habilitation in 1928. His habilitation thesis, Wirtschaftsprognose (‘Economic Prediction’), in the Austrian tradition, rejected the use of mathematics in favour of a philosophical consideration of the difficulties of forecasting in economics when other agents are acting in the economy [12, p 51]. Following his habilitation, Morgenstern was appointed a lecturer at Vienna and then the director of the Vienna Institute of Business Cycle Research.

Unlike many of his economist colleagues, Morgenstern became involved with the Vienna Circle of mathematicians and philosophers, never as an active participant but as a bridge between them and economics [12, p 52]. In 1935 he presented a paper to the mathematicians associated with the Vienna Circle on the problem of perfect foresight. Morgenstern often referred to an episode in Conan Doyle’s story ‘The Final Problem’, which describes the ‘final’ intellectual battle between Sherlock Holmes and the fallen mathematician, Professor Moriarty, and which results in their apparent deaths at the Reichenbach Falls. At the start of the adventure Holmes and Watson are trying to flee to the Continent, pursued by the murderous Moriarty. Watson and Holmes are sitting on the train to the Channel ports
[Watson] “As this is an express and the boat runs in connection with it, I should think we have shaken [Moriarty] off very effectively.”
“My dear Watson, you evidently did not realise my meaning when I said that this man may be taken as being quite on the same intellectual plane as myself. You do not imagine that if I were the pursuer I should allow myself to be baffled by so slight an obstacle. Why, then, should you think so lowly of him?”
For Morgenstern this captured the fundamental problem of economics. While Frank Knight had earlier realised that profit was impossible without unquantifiable uncertainty, Morgenstern came to think that perfect foresight was pointless in economics. If the world were full of Laplacian demons making rational decisions then everything would, in effect, grind to a halt, with the economy reaching its equilibrium, where it would remain forever. Morgenstern writes
always there is exhibited an endless chain of reciprocally conjectural reactions and counter-reactions. This chain can never be broken by an act of knowledge but always through an arbitrary act — a resolution. [14, quoting Morgenstern on p 129]
The imp of the perverse would confound Laplace’s demon by doing something unexpected, irrational and inspired. It is this feature of the social sciences that makes them fundamentally different from the natural sciences, since physics is never perverse.
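A toy rendering of the Holmes and Moriarty chase makes the ‘endless chain’ visible. The set-up below (Holmes escapes only if the two alight at different stations) is my own simplification, not Morgenstern’s formal treatment:

```python
# Toy Holmes-Moriarty pursuit: Holmes wants to alight at a different station
# from Moriarty, Moriarty wants to alight at the same one.
def holmes_best_response(moriarty_choice):
    # Holmes picks the station he conjectures Moriarty will NOT pick.
    return "Canterbury" if moriarty_choice == "Dover" else "Dover"

def moriarty_best_response(holmes_choice):
    # Moriarty picks the station he conjectures Holmes WILL pick.
    return holmes_choice

holmes, moriarty = "Dover", "Dover"
for step in range(6):
    print(f"step {step}: Holmes -> {holmes}, Moriarty -> {moriarty}")
    holmes = holmes_best_response(moriarty)    # Holmes reacts to his conjecture of Moriarty
    moriarty = moriarty_best_response(holmes)  # Moriarty reacts to his conjecture of Holmes
# The conjectures chase each other round forever; the chain is only broken by
# an arbitrary act -- a resolution -- such as choosing a station at random.
```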

MacIntyre’s alternative to the bun-fighting that typifies modern moral philosophy is a return to an Aristotelian approach characterised by public deliberation, considering not only the means to predetermined ends but also which ends are in the ‘public good’. This, at first sight, appears to be a Pragmatic argument. However, I think there must be a difference, because of MacIntyre’s apparent criticism of Ockham’s Nominalism. I think this might be important; specifically, I have always associated Ockham’s Nominalism with his ‘doubt’ in the human ability to understand God, i.e. it is related to uncertainty. My initial thought is that at the root of MacIntyre, as at the root of Aristotle, is Realism.

What interests me is that the idea that reciprocity is at the heart of Financial Mathematics is unorthodox today, but it would have been orthodox in the eighteenth century. Academic debate in the eighteenth century centred on (semi-)public deliberation, at the Academies, the salons or the monthly meetings of the Lunar Men. This approach disappears in the first half of the nineteenth century; individual scientists retreat to the lab and perform magical experiments for the public, or embark on journeys of exploration, enduring a process that moulds their genius: Alexander von Humboldt, Darwin, Lewis & Clark. Around the same time, Laplace’s causal determinism manifests itself in social science as social physics, with people being ‘governed’ by the Normal distribution, just as today businesses apply Bayesian statistics to make decisions for us, taking away the requirement to deliberate and the ability to act arbitrarily.


References

[1] M. Blaug. Economic Theory in Retrospect. Heinemann, second edition, 1968.
[2] S. G. Brush. The Kind of motion we call heat: A history of the kinetic theory of gases in the 19th century. North-Holland, 1976.
[3] I. B. Cohen. Revolutions, Scientific and Probabilistic. In L. Kruger, L. J. Daston, and M. Heidelberger, editors, The Probabilistic Revolution: Volume 1: Ideas in History. MIT Press, 1987.
[4] L. J. Daston. D’Alembert’s critique of probability theory. Historia Mathematica, 6:259–279, 1979.
[5] L. J. Daston. Classical Probability in the Enlightenment. Princeton University Press, 1998.
[6] G. Gigerenzer. The Probabilistic Revolution in Psychology - an Overview. In L. Kruger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 2: Ideas in the Sciences. MIT Press, 1987.
[7] G. Gigerenzer. Probabilistic Thinking and the Fight against Subjectivity. In L. Kruger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 2: Ideas in the Sciences. MIT Press, 1987.
[8] G. Gigerenzer. The Empire of Chance: how probability changed science and everyday life. Cambridge University Press, 1989.
[9] R. M. Henig. A Monk and Two Peas: The Story of Gregor Mendel and the Discovery of Genetics. Phoenix, 2001.
[10] G. Jorland. The Saint Petersburg Paradox 1713–1937. In L. Kruger, L. J. Daston, M. Heidelberger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 1: Ideas in History. MIT Press, 1987.
[11] L. Kruger. The Slow Rise of Probabilism. In L. Kruger, L. J. Daston, and M. Heidelberger, editors, The Probabilistic Revolution: Volume 1: Ideas in History. MIT Press, 1987.
[12] R. J. Leonard. Creating a context for game theory. In E. R. Weintraub, editor, Toward a History of Game Theory, pages 29–76. Duke University Press, 1992.
[13] P. Mirowski. More Heat than Light: Economics as Social Physics, Physics as Nature’s Economics. Cambridge University Press, 1989.
[14] P. Mirowski. What were von Neumann and Morgenstern trying to accomplish? In E. R. Weintraub, editor, Toward a History of Game Theory, pages 113–150. Duke University Press, 1992.
[15] A. Oberschall. The Two Empirical Roots of Social Theory and the Probability Revolution. In L. Kruger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 2: Ideas in the Sciences. MIT Press, 1987.
[16] O. B. Sheynin. Lies, damned lies and statistics. NTM Zeitschrift für Geschichte der Wissenschaften, Technik und Medizin, 11(3):191–193, 2001.

5 comments:

  1. One of the questions I had after reading your paper on Reciprocity in Financial Economics is why you chose Pragmatism instead of Scientific Realism? Do you think you're heading that way, now?

  2. I heard about Pragmatism through a comment Donald MacKenzie made to me, almost in passing. The basic issue was how to incorporate ethics into 'science', and this can be achieved with Pragmatism. If you are a Realist in ethics there is an association with divine promulgation, this is difficult for me as an atheist. Roger Penrose is the leading Scientific Realist in the UK (IMHO) and he is agnostic/theistic.

    More practically, Realism expects science to 'converge' on the Truth, I think this is untenable in a financial context. I think finance is scientifically important in providing stress tests to theories of the philosophy of science. The canonical example is the pre 1987 assessment of Black-Scholes as the most successful equation in finance (if not economics) [Stephen Ross, 1987, in "Finance" the Palgrave Dictionary of Economics]. The markets converged to the scientific theory, which was diverging from market reality.

    What I am interested in is how science can respond to situations governed by radical uncertainty and non-ergodic systems. I am not convinced Scientific Realism can accomplish this. I take a Pragmatic approach because of its denial that the scientist can be an objective observer and the consequential emphasis that is placed on problems of uncertainty.

  3. Thanks, Tim.

    Another question I had was about Virtue Ethics & Pragmatism, which relates to the first para of your reply. I can't really see how these two marry up.

    Anyway, that'll probably take you too far from the theme of this post. You mention Hume's passion Vs reason. I keep coming back to that too. Although our thinking is probably very far apart on it. I'm a Freud fan, so his explanation of 'passion' and how that has created the psychological & social structures in which we exist greatly influences my view. I guess in part (although I'm still conflicted) my Freudianism pushes me towards Realism over Pragmatism at the moment - I believe there are structures & energies underlying our perceptions of reality that are "really real".

    Morality in practical terms is still tricky, though. Freud's conclusion, as far as day-to-day life goes, was simply that morality is 'self-evident'. Which is not hugely helpful.

  4. On the semantic level, the etymology of ethics is from the Greek for 'habit' while pragma is the Greek for 'practice'. Both Virtue (Aristotelian) ethics and Pragmatism are rooted in practice rather than theory.

    Virtue Ethics and Pragmatism are concerned simultaneously with means and ends (i.e. ends never justify means)

    Both approaches place emphasis on deliberation (i.e. society agrees what is 'good', it is not fixed externally).

    The argument that morality is 'self evident' is maybe Virtuous/Pragmatic, in that e.g. MacIntyre or a Pragmatist might argue people generally know what is right or wrong without having to have a theory about it.

    That you believe there are things that are 'really real' is fine, Ockham would have argued that God is really real, but he was also a Nominalist.

    This stuff is really slippery and I am not an expert.

  5. I enjoy reading your posts, both because and despite how much ground you cover.

    I don't know why causal determinism is so seductive, and why we (on average, so to speak) are so quick to embrace logic that takes away our ability to act arbitrarily. Economics would be so much more fun, and useful, if less time was spent analyzing past relationships, and more time dreaming and imagining some arbitrarily improved future economy.

    In that respect, I appreciate the way you walk through the history of an idea and reflect on both the successes and the flaws that underpin how we understand the idea today. So much more engaging than asserting by the authority of past greatness that an idea is the truth.

