The context is as follows. Students learn that if a market is arbitrage-free then asset prices today are simply expectations of future asset prices, calculated using special "risk-neutral" probabilities rather than any "natural" probabilities. The idea of a probability as an abstract measure is introduced, along with the idea of equivalent measures, so the students understand that there is nothing special about the so-called natural probabilities. They are then shown that if a market is complete (an idealisation) there is a unique risk-neutral probability, while if the market is incomplete (for example, it includes transaction costs) there are infinitely many possible choices of risk-neutral probabilities. Finally, in a complete market all risk can be removed in pricing derivatives; the same is not true for incomplete markets, which are the reality.
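The complete-market case can be made concrete with a one-period binomial sketch (the parameter values below are invented for illustration): the arbitrage-free price is a discounted expectation under the unique risk-neutral probability, which is pinned down by the up-move, down-move and interest rate alone, with no reference to any natural probability.

```python
# One-period binomial market: a stock moves up by factor u or down by factor d,
# cash grows at riskless rate r.  No arbitrage requires d < 1+r < u, and then
# there is a UNIQUE risk-neutral up-probability q with
#     price today = (q * payoff_up + (1-q) * payoff_down) / (1 + r).

def risk_neutral_prob(u, d, r):
    """The unique risk-neutral up-probability; note it ignores natural probabilities."""
    assert d < 1 + r < u, "these parameters admit arbitrage"
    return (1 + r - d) / (u - d)

def price(payoff_up, payoff_down, u, d, r):
    """Today's price as a discounted expectation under the risk-neutral measure."""
    q = risk_neutral_prob(u, d, r)
    return (q * payoff_up + (1 - q) * payoff_down) / (1 + r)

# Example: a stock at 100 moving to 120 or 90, with r = 5%.
q = risk_neutral_prob(1.2, 0.9, 0.05)            # = 0.5, whatever nature's odds are
stock_check = price(120.0, 90.0, 1.2, 0.9, 0.05) # ~ 100: the stock prices back to itself
call = price(20.0, 0.0, 1.2, 0.9, 0.05)          # a call struck at 100
```

In an incomplete market (say, three future states but still only the stock and cash to trade) the analogous system of equations is underdetermined, which is exactly where the infinitely many risk-neutral measures come from.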
The next question is: how do we choose which risk-neutral probabilities to use to price assets in a market if there are infinitely many choices? A number of solutions have been proposed; the most widely accepted is to use utility-based methods, an approach initiated by Prof Mark Davis in his paper Option Pricing in Incomplete Markets (1998). Essentially, given an agent's utility function, we can identify the correct risk-neutral probabilities to use in asset pricing.
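On a finite state space the marginal-utility idea reduces to a one-line reweighting: the risk-neutral weight of a state is its natural probability scaled by the agent's marginal utility of wealth in that state, then normalised. The sketch below is only an illustration of that principle, not an implementation of any particular paper; the exponential utility, risk aversion and per-state wealth levels are assumptions invented for the example.

```python
import math

# Utility-based choice of a risk-neutral measure on a finite state space:
# q_i is proportional to p_i * U'(w_i), where p is the natural probability,
# U the agent's utility and w_i their (assumed optimal) wealth in state i.

def risk_neutral_from_utility(natural_probs, wealth, marginal_utility):
    """Reweight natural probabilities by marginal utility and normalise."""
    weights = [p * marginal_utility(w) for p, w in zip(natural_probs, wealth)]
    total = sum(weights)
    return [w / total for w in weights]

gamma = 0.5                                        # risk aversion (assumed)
u_prime = lambda w: gamma * math.exp(-gamma * w)   # U(w) = -exp(-gamma * w)

p = [0.5, 0.3, 0.2]        # natural probabilities (assumed)
wealth = [2.0, 1.0, 0.5]   # agent's wealth in each state (assumed)
q = risk_neutral_from_utility(p, wealth, u_prime)
# q is equivalent to p (positive exactly where p is) and sums to 1; states
# where the agent is poor, and marginal utility high, are weighted up.
```

The point of the example is the direction of the logic: the utility function comes first, and the pricing measure falls out of it.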
Why is this significant in economics? Von Neumann-Morgenstern Expected Utility Theory, published in 1944, states that, given a set of natural probabilities associated with lotteries (generating a probability distribution function), an agent's utility function can be identified (preferences over distribution functions precede preferences over outcomes). However, von Neumann-Morgenstern Expected Utility assumes that probabilities are objective and exogenously defined by nature (the Kolmogorov formulation of probability was not accepted in the US until around 1948). Derivative pricing tells us to ignore these probabilities and price assets using equivalent risk-neutral probabilities. However, in realistic markets there are infinitely many choices of the correct probabilities to use, and we should use utility functions to identify the right one. Pricing assets starts with utility functions. The implication is that we need to understand utility to understand asset pricing; finance needs to be interdisciplinary.
Within the banks, with probability 1, there are scientists who are comfortable with probability as measure. I would be willing to wager that there are few people managing banks who are familiar with the idea that financial mathematics tells us that there is no unique probability measure under which to calculate expectations, rational or not.
Another topic I raise with students is why Pythagoras was responsible for Black Monday in 1987. The context here is that the '87 crash was associated with portfolio insurance, based on hedging investments in portfolios by taking out positions in index futures. I have to teach the students (because it is part of the actuarial syllabus) the optimal hedge ratio. This is derived on the assumption that investors choose between expected returns and variance of returns. The idea was introduced by Markowitz (mean-variance portfolio selection), developed by Sharpe and Lintner (the Capital Asset Pricing Model), and yielded a handful of Nobel Prizes. Why did Markowitz suggest this approach? There is no evidence that investors actually do choose between expected returns and variance of returns; Markowitz suggested it because, thanks to Pythagoras's theorem, comparing expectations and variances is mathematically tractable.
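The tractability is easy to exhibit. The variance of a hedged position, spot minus h futures, is the quadratic sigma_S^2 - 2h·Cov(dS, dF) + h^2·sigma_F^2, minimised at h = Cov(dS, dF)/Var(dF) = rho·sigma_S/sigma_F: the minimum-variance hedge ratio of the syllabus. A minimal sketch (the price-change series are made-up numbers):

```python
# Minimum-variance hedge ratio: variances combine as squares, Pythagoras-style,
# so minimising the hedged variance is a one-line calculus exercise with
# solution h = Cov(dS, dF) / Var(dF).

def hedge_ratio(spot_changes, futures_changes):
    """Sample Cov(dS, dF) / Var(dF) from paired series of price changes."""
    n = len(spot_changes)
    ms = sum(spot_changes) / n
    mf = sum(futures_changes) / n
    cov = sum((s - ms) * (f - mf)
              for s, f in zip(spot_changes, futures_changes)) / n
    var_f = sum((f - mf) ** 2 for f in futures_changes) / n
    return cov / var_f

# If the future always moves twice as far as the spot, half a contract
# per unit of spot removes all the variance:
h = hedge_ratio([1.0, -2.0, 3.0, 0.5], [2.0, -4.0, 6.0, 1.0])   # 0.5
```

Nothing in the derivation asks whether variance is what investors actually care about; the quadratic form is simply the mathematics that yields a closed-form answer.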
Markowitz is a nice example of mathematics driving financial theory, rather than, as happens in the natural sciences, observations driving the development of mathematics: Newton developed calculus to build his models. I tell my students that we are in an exciting time in financial maths and compare the state of the science to that of aviation around 1904. We are using glue and tying things together with string, but we are on the right lines. Much of finance theory was developed in the 30 or so years following the Second World War, based on assumptions about probability that date from the seventeenth and eighteenth centuries. Using measure-theoretic probability, scientists are able to look at financial problems without the constraints of nineteenth-century mathematics and, as a result, are able to build more accurate, though still simple, economic models.
The following is a personal selection of papers that I feel are significant contributions to finance theory from mathematics:
Fernholz, R., Karatzas, I., Stochastic Portfolio Theory: an Overview
This addresses issues relating to Markowitz-style portfolio selection.
Musiela, M., Zariphopoulou, T., Portfolio choice under space-time monotone performance criteria
This addresses a significant issue in classical finance. At time t=0 the agent sets their objectives for some time t=T>0. In the interval ]0,T[ the agent locks themselves in a room, closes their eyes and sticks their fingers in their ears. The reality is that in ]0,T[ the investor is affected by the economy. Energy managers at Scottish Power (one with a PhD in maths, one with a PhD in theoretical physics) recently raised this issue with me; they need to manage portfolios dynamically, not statically.
Hugonnier, J., Kramkov, D., Schachermayer, W., On Utility-Based Pricing of Contingent Claims in Incomplete Markets, Mathematical Finance, Vol. 15, No. 2, pp. 203-212, April 2005
A refinement of Prof Davis's work.
Jin, H., Zhou, X., Behavioural portfolio selection in continuous time, Mathematical Finance, Vol. 18 (2008), pp. 385-426.
Provides a mathematical basis for behavioural finance, as introduced by Kahneman and Tversky.