Wednesday, 25 February 2009
1202 Leonardo of Pisa (Fibonacci) introduces Hindu-Arabic numerals to Europe to aid in commercial calculations.
c 1250 Pope Innocent IV, in a commentary on canon law, justifies the charging of a risk premium for assets (Murray Rothbard, Economic Thought Before Adam Smith, Edward Elgar, 1996)
c 1260 St Thomas Aquinas endorses insider trading (making profits based on information not known to the buyer) (Summa Theologica, Second part of the second part, 77, 3)
c 1564 Girolamo Cardano identifies the concept of mathematical probability.
1610 Galileo becomes the first “quant”. Having used a telescope to observe Jupiter, Galileo publishes his results, leaves Padua, and becomes “First and Extraordinary Mathematician of the University of Pisa and Mathematician to his Serenest Highness Cosimo II de Medici”. At Cosimo’s request, he investigates a gambling problem and publishes Sopra le Scoperte dei Dadi (Upon the Discoveries of Dice).
1654 The first derivative pricing formula is developed by Pascal and Fermat in answering the Problem of Points. The solution to the Problem of Points is essentially the same as the Cox-Ross-Rubinstein model. A special case of the discrete-time CRR model converges to the continuous-time Black-Scholes option pricing model.
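That convergence can be checked numerically. The sketch below (my own illustration, with made-up contract parameters) prices a European call on a Cox-Ross-Rubinstein binomial tree and compares the result with the Black-Scholes closed form as the number of steps grows:

```python
import math

def crr_call(s0, k, r, sigma, t, n):
    """Price a European call on an n-step Cox-Ross-Rubinstein tree."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1 / u                              # down factor
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    # Price = discounted expected payoff under the risk-neutral measure.
    price = 0.0
    for j in range(n + 1):
        prob = math.comb(n, j) * q**j * (1 - q)**(n - j)
        payoff = max(s0 * u**j * d**(n - j) - k, 0.0)
        price += prob * payoff
    return math.exp(-r * t) * price

def black_scholes_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price, for comparison."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

# Hypothetical contract: spot 100, strike 100, 5% rate, 20% vol, 1 year.
for n in (10, 100, 1000):
    print(n, round(crr_call(100, 100, 0.05, 0.2, 1.0, n), 4))
print("Black-Scholes:", round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 4))
```

As n increases, the tree price settles onto the Black-Scholes value.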
1696 Newton becomes a quant, moving from Cambridge to the Royal Mint. In 1717 Newton moves England from the silver standard to the gold standard, fixing the price of silver in terms of gold.
1738 Daniel Bernoulli publishes “An Exposition on a New Theory of the Measurement of Risk” in Russia. The manuscript appears to have been lost; it was reprinted in Germany in the late 19th century and then in English in 1954, ten years after von Neumann and Morgenstern had introduced Expected Utility in the Theory of Games and Economic Behavior.
1860-1926 Maxwell develops the kinetic theory of gases, which Einstein investigates in his work on Brownian motion; Brownian motion is defined mathematically by Wiener in the 1920s. In the late 1960s Robert Merton develops the theory of continuous-time finance based on the Wiener process.
1933 Kolmogorov identifies probability with measure, enabling financial mathematicians to use the ideas of conditional probability and equivalent measures, which are the fundamental tools of derivative pricing.
The context is as follows. Students learn that if a market is arbitrage free then asset prices today are simply expectations of future asset prices, calculated using special “risk neutral” probabilities rather than any “natural” probabilities. The idea of probability as an abstract measure is introduced, along with the idea of equivalent measures, so the students understand that there is nothing special about so-called natural probabilities. They are then shown that if a market is complete (idealised) there is a unique risk neutral probability, while if the market is incomplete (for example, it includes transaction costs) there are infinitely many possible choices of risk neutral probability. Finally, in a complete market all risk can be removed in pricing derivatives; the same is not true for incomplete markets, which are the reality.
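A minimal numerical illustration of this non-uniqueness, using a hypothetical one-period trinomial market of my own construction (not from any syllabus): with three future states and only one risky asset, the no-arbitrage conditions leave a free parameter, and every choice of it is a valid risk neutral probability giving a different derivative price.

```python
# One-period market: stock at 100 moves to 120, 100 or 80; interest rate 0.
# A risk-neutral measure (qu, qm, qd) must satisfy
#   qu + qm + qd = 1   and   120*qu + 100*qm + 80*qd = 100,
# which forces qd = qu and qm = 1 - 2*qu: one free parameter, hence
# infinitely many equivalent risk-neutral measures (any qu in (0, 0.5)).

def call_price(qu, strike=110):
    """Risk-neutral expectation of max(S - strike, 0); rate is 0 so no discounting."""
    qd = qu
    qm = 1 - 2 * qu
    payoff = lambda s: max(s - strike, 0)
    return qu * payoff(120) + qm * payoff(100) + qd * payoff(80)

# Each admissible measure gives a different, equally "correct", price.
for qu in (0.1, 0.25, 0.4):
    print(f"qu = {qu}: call price = {call_price(qu)}")
```

In a complete (binomial) market the analogous system pins the probabilities down uniquely; here the whole interval of prices is arbitrage free, which is exactly why an extra criterion, such as utility, is needed to choose one.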
The next question is, how do we choose which risk neutral probabilities we should use to price assets in a market if there are infinitely many choices? A number of solutions have been presented; the most widely accepted is to use utility-based methods, an approach initiated by Prof Mark Davis in his paper Option Pricing in Incomplete Markets (1998). Essentially, given an agent’s utility function, we can identify the correct risk neutral probabilities to use in asset pricing.
Why is this significant in economics? Von Neumann-Morgenstern Expected Utility Theory, published in 1944, states that, given a set of natural probabilities associated with lotteries (generating a probability distribution function), an agent’s utility function can be identified (preferences over distribution functions precede preferences over outcomes). However, von Neumann-Morgenstern Expected Utility assumes probabilities are objective and exogenously defined by nature (the Kolmogorov formulation of probability was not accepted in the US until around 1948). Derivative pricing tells us to ignore these probabilities and price assets using equivalent risk neutral probabilities. However, in realistic markets there are infinitely many choices of the correct probabilities to use, and we should use utility functions to identify the right one. Pricing assets starts with utility functions. The implication is that we need to understand utility to understand asset pricing; finance needs to be interdisciplinary.
Within the banks, with probability 1, there are scientists who are comfortable with probability as measure. I would be willing to wager that there are few people managing banks who are familiar with the idea that financial mathematics tells us that there is no unique probability measure under which to calculate expectations, rational or not.
Another topic I raise with students is why Pythagoras was responsible for Black Monday in 1987. The context here is that the ’87 crash was associated with portfolio insurance, based on hedging investments in portfolios by taking out positions in index futures. I have to teach the students (because it is part of the Actuarial syllabus) the optimal hedge ratio. This is derived on the assumption that investors choose between expected returns and variance of returns. The idea was introduced by Markowitz (mean-variance portfolio selection), developed by Sharpe and Lintner (the capital asset pricing model), and yielded a handful of Nobel Prizes. Why did Markowitz suggest this approach? There is no evidence to suggest that investors actually do choose between expected returns and variance of returns; Markowitz suggested it because, thanks to Pythagoras’s theorem, comparing expectations and variances is mathematically tractable.
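The hedge ratio in question is the minimum-variance one, h* = Cov(ΔS, ΔF) / Var(ΔF): the position in futures that minimises the variance of the hedged portfolio, which is exactly the mean-variance criterion at work. A short sketch, with made-up weekly data:

```python
# Minimum-variance hedge ratio: choose h to minimise Var(dS - h*dF).
# Setting the derivative to zero gives h* = Cov(dS, dF) / Var(dF).

def mv_hedge_ratio(ds, df):
    """Sample estimate of the minimum-variance hedge ratio."""
    n = len(ds)
    mean_s = sum(ds) / n
    mean_f = sum(df) / n
    cov = sum((s - mean_s) * (f - mean_f) for s, f in zip(ds, df)) / n
    var = sum((f - mean_f) ** 2 for f in df) / n
    return cov / var

# Hypothetical weekly changes in a portfolio and in an index future.
ds = [1.2, -0.8, 0.5, -1.5, 2.0, 0.3]
df = [1.0, -0.6, 0.4, -1.2, 1.8, 0.2]
print(round(mv_hedge_ratio(ds, df), 3))
```

The derivation only needs second moments, which is the point of the anecdote: variances and covariances behave like squared lengths and inner products, so the whole optimisation reduces to Euclidean geometry.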
Markowitz is a nice example of mathematics driving financial theory, rather than, as happens in the natural sciences, the observations driving the development of mathematics. Newton developed calculus to build his models. I tell my students that we are in an exciting time in financial maths and compare the state of the science to that of aviation around 1904. We are using glue and tying things together with string, but we are on the right lines. Much of finance theory was developed in the 30 or so years following the Second World War based on assumptions of probability that date from the seventeenth and eighteenth centuries. Using measure theoretic probability, scientists are able to look at financial problems without the constraints of nineteenth century mathematics and as a result, are able to build more accurate, though still simple, economic models.
The following is a personal selection of papers that I feel are significant contributions to finance theory from mathematics:
Fernholz, R., Karatzas, I., Stochastic Portfolio Theory: an Overview
This addresses issues relating to Markowitz-style portfolio selection.
Musiela, M., Zariphopoulou, T., Portfolio choice under space-time monotone performance criteria
This addresses a significant issue in classical finance. At time t=0 the agent sets their objectives for some time t=T>0. In the interval ]0,T[ the agent locks themselves in a room, closes their eyes and sticks their fingers in their ears. The reality is that in ]0,T[ the investor is affected by the economy. Energy managers at Scottish Power (one with a PhD in maths, the other a PhD in theoretical physics) recently raised this issue with me; they need to manage portfolios dynamically, not statically.
Hugonnier, J., Kramkov, D. Schachermayer, W., On Utility-Based Pricing of Contingent Claims in Incomplete Markets, Mathematical Finance, Vol. 15, No. 2, pp. 203-212, April 2005
A refinement of Prof Davis's work.
Jin, H., Zhou, X., Behavioural portfolio selection in continuous time, Mathematical Finance, Vol. 18 (2008), pp. 385-426
Provides a mathematical basis for behavioural finance, as introduced by Kahneman and Tversky.
Wednesday, 11 February 2009
Hector Sants, the Chief Executive of the FSA, said in October that he had over thirty years’ experience in banking. That is nice to know, but if I had owned a jet aeroplane in 1950 I might have preferred a mechanic with three years’ experience of jet engines to one with thirty years in aviation. This is the root of the problem: finance is developing too fast for people with thirty years’ experience. As a financial mathematician, researching and teaching how to make good, scientific decisions in the uncertain world of finance, I am not overawed by the complexity of the financial products the banks are trading. I am stunned by the simplicity of the mathematics that many bankers were using to manage these products. Nuclear power plants are complex, and so we expect the engineers who operate them to use cutting-edge technology. The human body is complex, and we expect doctors to use up-to-date research in treating us. In investment banking, Collateralised Debt Obligations are complex, but the reaction of the "quants" designing them was to dumb down the maths so that the managers, who did not have the professional qualifications of an engineer or a doctor, could keep up.
What happened in the lead-up to the credit crisis is not difficult to explain. When I was in my late teens, I embarked on a short-lived experiment in producing ginger beer. Having brewed the beer in a large bucket, I bottled it in twenty-four bottles and stored them in my wardrobe. If I had presented a newly sealed bottle to an independent expert and asked them to assess the probability of the bottle exploding, they might have put the chance at one in twenty-four, and so I could expect one of my twenty-four bottles to explode. If I was concerned about losing many bottles, I could do some basic maths and conclude that there was a better than 99.9% chance that I would not lose more than five bottles.
A few weeks later one bottle exploded, followed by the remaining twenty-three over the course of a few days. When I had done my sums, I had treated each bottle as being independent of the others. Of course, since each bottle contained the same product, if one exploded, they probably all would. This was an object lesson in what are called dependent risks: essentially, there was a one-in-twenty-four chance of losing everything, rather than a near certainty of only losing a little.
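The sums are easy to reproduce. The sketch below uses the one-in-twenty-four figure from the story and contrasts the two assumptions: independent bottles versus perfectly dependent ones.

```python
from math import comb

# 24 bottles, each with a 1-in-24 chance of exploding.
n, p = 24, 1 / 24

def prob_more_than(k):
    """P(more than k bottles explode), assuming the bottles are independent:
    the count is Binomial(24, 1/24)."""
    return 1 - sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

print(f"P(lose > 5 bottles | independent) = {prob_more_than(5):.6f}")

# If the bottles are perfectly dependent (they all contain the same brew),
# either none explode or all 24 do, so the chance of losing everything
# is simply 1/24, about 4.2% -- not a fraction of a percent.
print(f"P(lose all 24 | dependent)        = {1/24:.6f}")
```

Under independence, losing more than five bottles is a roughly one-in-two-thousand event; under dependence, losing all twenty-four is a one-in-twenty-four event. The maths is trivial; choosing the right dependence assumption is not.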
Investment banks had bought beer (sub-prime mortgages) and sold crates of bottles to investors on a sort of "money-back guarantee" deal. Like some of Sir Alan's naive apprentices, they focused on turnover, since salesmen are paid on commission, rather than worrying about profitability.
Bankers, not just in the UK but all around the world, blame "nature" for misbehaving when all the bottles exploded. This is a smokescreen to protect their reputation. Since 2000, mathematicians have raised a series of concerns about whether the equations bankers were using to manage the risk of their investments were adequate: in particular, that a key equation they use would underestimate the number of bottles that would blow up, and that another, used to value their finished products, would over-react to a bottle being lost, so that the value of all the beer they were selling, not just the ginger beer, would collapse. The problem is that not only did bank management not understand this, it is exactly what happened.
Thursday, 5 February 2009
The storm clouds of the credit crisis broke following BNP Paribas’s announcement on 9 August 2007 that it was unable to value assets. The following day, Goldman Sachs reported that one of its funds had lost 30% of its value in a week, requiring a, now paltry, $3 bn bailout. In explanation, the Chief Financial Officer of Goldman Sachs said:
“We were seeing things that were 25-standard deviation moves, several days in a row. There have been issues in some of the other quantitative spaces. But nothing like what we saw last week.”

Essentially, his point was that the economic environment had been so perverse that the losses, incurred by the most respected investment bank in the world, were inevitable. The Financial Times (Goldman pays the price of being big, August 13 2007), along with much of the established financial press, took this as a reasoned explanation but, almost immediately, commentators in the blogosphere revealed it for what it was: rubbish. Goldman’s models were wrong, not nature.
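The bloggers’ arithmetic is straightforward to reproduce. Under the normal distribution that such risk models assume, the chance of even one 25-standard-deviation daily move is vanishingly small, and the chance of several in a row is beyond astronomical:

```python
from math import erfc, sqrt

# Tail probability of a single 25-standard-deviation move under a normal
# distribution: P(Z > 25) = erfc(25 / sqrt(2)) / 2.
p_one_day = erfc(25 / sqrt(2)) / 2
print(p_one_day)  # of order 1e-138

# "Several days in a row": three independent 25-sigma days. The product is
# so small it underflows double precision entirely and prints as 0.0.
print(p_one_day ** 3)
```

If a model says an observed event had a probability of the order of 10^-138, the sensible conclusion is that the model, not the world, is wrong.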
It is unlikely that a science journalist would have accepted the pseudo-scientific explanation Goldman offered. Would they have done a better job understanding the crisis as a whole?
Since the 1960s investment banks have been using increasingly complex mathematics, literally rocket science, to price and manage the risks of the assets they trade. The current credit crisis is unique in that, for the first time, quants (physicists and engineers using mathematical formulae), rather than economists using their knowledge of markets, have been managing the assets at the heart of the crisis. Since experienced managers in banks have been at a loss to grasp the mathematics, it is hard to see how a financial journalist could be expected to do so. However, a journalist who is accustomed to investigating safety-critical systems in technology is well capable of coming to terms with risk-management systems in an investment bank, and a journalist who is able to explain the latest research coming from a physics lab would be able to describe the pricing algorithms developed by physicists in a bank.
If the types of journalists investigating science turned their attention to banks, would the risk of future crises be reduced? There is a strong argument that it would. A key role that science reporting plays is ensuring that technology is not misused and that it is properly regulated. Medicine is better off for the reporting of Thalidomide, and the environment benefits from journalists scrutinising the energy industry.
In the 1990s, central bankers became concerned that the rules defining how much capital a bank held in reserve were quickly becoming redundant, given the explosive growth of sophisticated derivative products. In 1996, the Bank for International Settlements, the gnomes of Basel who regulate the banks at the centre of the credit crisis, began a process of re-writing banking regulation, called Basel II. Realising that they would never be able to keep up with the activities of the banks, the regulators specified a framework for regulation rather than a set of hard and fast rules. The framework, which is only now beginning to be implemented, is based on three pillars: minimum standards for calculating how much money a bank needs to keep in reserve; a supervisory review process within the bank overseeing those calculations; and market discipline ensuring the reviews and calculations are adequate.
In 2000, a mathematician working at the US Federal Reserve checked the formulae used in Basel II to calculate capital reserves and raised concerns about the simplifications they relied on.
The Basel II accord still uses the simplified formula, because the accord is clear: under the supervisory review process, a bank needs to match the calculation of capital reserves to the complexity of its operations. The accord expects that banks will adequately execute the supervisory review process because of market scrutiny. Consequently, informed journalism plays a fundamental role in the new framework for banking regulation. Science journalists should take up this role given their knowledge and skills.
The Public Relations machines of the banks are happy for discussion around regulation to focus on issues such as bonuses and banning complex derivatives, because the banks know that governments will not set such regulations. Turning the public’s attention to the science underpinning the banks’ business not only puts the focus where the problem actually is, it is already part of the regulatory framework. In addition, putting banking managers under pressure to explain the details of the technology they are using will force better communication, debate and understanding within the banks, bridging the gap between the quants and conventional financiers.
Finally, if the failure, through poor use of science, of a section of the energy or pharmaceutical businesses had led to a trillion-dollar rescue package, it is inconceivable that the managers of the industry would still be in place. Why have so few bankers been fired by their shareholders? Is it because the bankers have successfully blinded the shareholders with pseudo-science?
In a week that has seen the disappearance from the financial landscape of such venerable firms as Lehman Brothers, Merrill Lynch and the Bank of Scotland, it is inevitable that people question the competence of bankers. Along with many financial mathematicians, in both the financial industry and academia, I take the view that if you give Ferraris to people who have only ever driven bullock carts, they are likely to end up in a ditch. The people running the banking industry have not had a firm grasp of the technology underpinning their business.
Financial mathematics supports the practical discipline of constructing financial products in the same way that mechanics and thermodynamics support motor engineering. Derivatives, the products that financial mathematicians construct, transfer risk in financial markets just as engines transfer energy in the physical world. Fixed-rate mortgages are a popular product of financial mathematics: the borrower passes the uncertainty in future interest rates to the lender, who takes it on for a fee. Just as in any industry, most people working in the financial markets are involved with the sales and servicing of products, and only a few actually design, and so have a deep understanding of, the products themselves. Jeremy Clarkson does not need to understand automotive engineering to be considered an expert on cars.
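The fixed-rate mortgage example can be made concrete with the standard annuity formula (the figures below are purely illustrative). The borrower's payment is fixed at the outset; if market rates later move, the gap between the fixed payment and what the lender could now charge is precisely the interest-rate risk the lender took on:

```python
def level_payment(principal, annual_rate, years):
    """Standard annuity formula for the level monthly repayment:
    P * r / (1 - (1 + r)^-n), with monthly rate r and n payments."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Borrower fixes at 6% on 100,000 over 25 years: this payment never changes.
fixed = level_payment(100_000, 0.06, 25)
print(round(fixed, 2))

# If rates later rise to 8%, the borrower still pays `fixed`; the shortfall
# against the payment a new 8% loan would command is borne by the lender.
print(round(level_payment(100_000, 0.08, 25) - fixed, 2))
```

The uncertainty has not vanished, it has been transferred: the borrower's cash flows are now deterministic and the lender's margin is what fluctuates.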
It is not surprising that many bankers did not understand the science they were working with, because financial mathematics is literally rocket science: it comes directly from the mathematics developed in the fifties and sixties to control rockets. The international investment banks are well aware of this; they are becoming less interested in recruiting graduates with a degree in economics from the LSE, who understand theory, and more interested in engineers, physicists and mathematicians, who can do the complex calculations underpinning derivatives. This is nothing new: for around a decade the most popular destination of physicists on completing their PhDs has been the City, and more graduate engineers in the UK now head for careers in finance than in engineering.
Although young British scientists can contemplate fat salaries with investment banks, they are at a disadvantage. If you visit the sharp end of a City firm, you will notice a large number of continental graduates working there. This is no accident; continentals have a much better education in probability than UK graduates. The reason is cultural. The UK was at the heart of the development of statistics in the nineteenth century, but at that time most mathematicians regarded statistics as heresy, because it is inductive rather than deductive, and today most UK universities still have separate maths and statistics departments. Seventy-five years ago this summer, a young Russian mathematician, Andrey Kolmogorov, defined probability in terms of rigorous mathematics, placing probability firmly in the mathematics syllabus taught at school and through university in the rest of Europe.
There is a conceptual gulf between British and French graduates. When using probability to think about the future, English speakers talk about an expectation; the French talk about an espérance, which is associated with ‘hope’. The British and American bankers expect something, the French hope for it. The bankers creating asset-backed securities believed that they had removed risk because they lacked a real understanding of probability. If we want to maintain our position at the forefront of finance, we need to train more people in probability so that they are better able to deal with an uncertain future and look after our money.