Value At Risk
One of the things I love about blogs is that they allow people who really know what they’re talking about to respond, publicly, to what they read, and to do so almost instantaneously, so that the rest of us can benefit. There’s a wonderful example today. It starts with a long NYT article by Joe Nocera on a risk management tool called ‘Value at Risk’, or VaR.
“Built around statistical ideas and probability theories that have been around for centuries, VaR was developed and popularized in the early 1990s by a handful of scientists and mathematicians — “quants,” they’re called in the business — who went to work for JPMorgan. VaR’s great appeal, and its great selling point to people who do not happen to be quants, is that it expresses risk as a single number, a dollar figure, no less.”
If you want to understand the risk management part of the financial meltdown, it’s worth reading the article in its entirety, in order to see how what started out as a tool for measuring certain types of risk ended up as a tool used by regulators and in reports, and then as a measure that people started to game, and that other people placed altogether too much confidence in:
“There were the investors who saw the VaR numbers in the annual reports but didn’t pay them the least bit of attention. There were the regulators who slept soundly in the knowledge that, thanks to VaR, they had the whole risk thing under control. There were the boards who heard a VaR number once or twice a year and thought it sounded good. There were chief executives like O’Neal and Prince. There was everyone, really, who, over time, forgot that the VaR number was only meant to describe what happened 99 percent of the time. That $50 million wasn’t just the most you could lose 99 percent of the time. It was the least you could lose 1 percent of the time.”
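That last point is worth making concrete: VaR is just a quantile of the loss distribution, and it says nothing about how bad the losses beyond it get. Here's a toy sketch in Python (made-up, normally distributed P&L, not any real portfolio) showing that the average loss on the worst 1% of days is always at least the VaR number, and typically quite a bit more:

```python
import random
import statistics

random.seed(0)

# Hypothetical one-day P&L outcomes for a portfolio, in $ millions.
pnl = [random.gauss(0, 20) for _ in range(100_000)]

# 99% VaR: the loss threshold exceeded on only 1% of days.
losses = sorted(-p for p in pnl)            # positive numbers = losses
cut = int(0.99 * len(losses))
var_99 = losses[cut]                        # 99th-percentile loss

# Average loss on the worst 1% of days ("expected shortfall"):
es_99 = statistics.mean(losses[cut:])

print(f"99% VaR:                 ${var_99:.1f}m")
print(f"Average loss beyond it:  ${es_99:.1f}m")  # always >= the VaR number
```

The second number is what the boards and regulators in Nocera's story never asked about.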
However, you should then read Yves Smith’s takedown of the article here. She argues, basically, that VaR is much more deeply flawed than Nocera lets on, and systematically underestimates risk in well-known ways. (Technically: it assumes a normal distribution of returns, while the actual distribution of asset prices is known to have “fat tails” — extreme moves happen far more often than the normal curve predicts, which makes assets riskier than VaR says.) As far as I can tell, the reason it’s used anyway is (in part) that the mistaken assumptions make the math more tractable. But that’s a classic and obvious mistake: like the drunk who looks for his lost keys under the streetlight, not because he dropped them there, but because that’s the only place where he can see what’s on the ground.
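Smith's point can be shown numerically. The sketch below (my own toy example, not from either post) compares the 99% and 99.9% loss cutoffs under a normal model with those under a fat-tailed Student-t distribution scaled to have the same variance — the fat-tailed cutoffs come out larger, and the gap widens as you go further into the tail:

```python
import math
import random

random.seed(1)
N, df = 200_000, 3  # df = 3 gives a heavy-tailed Student-t

def sample_t(df):
    # Student-t draw via z / sqrt(chi2/df); chi2(df) == Gamma(df/2, scale=2).
    z = random.gauss(0, 1)
    w = random.gammavariate(df / 2, 2)
    return z / math.sqrt(w / df)

# Scale the t draws so both samples have unit variance: Var[t(df)] = df/(df-2).
scale = math.sqrt(df / (df - 2))
normal = sorted(random.gauss(0, 1) for _ in range(N))
fat = sorted(sample_t(df) / scale for _ in range(N))

k99, k999 = int(0.01 * N), int(0.001 * N)   # worst 1% and worst 0.1%
var_normal_99, var_fat_99 = -normal[k99], -fat[k99]
var_normal_999, var_fat_999 = -normal[k999], -fat[k999]

print(f"99%   VaR: normal {var_normal_99:.2f} vs fat-tailed {var_fat_99:.2f}")
print(f"99.9% VaR: normal {var_normal_999:.2f} vs fat-tailed {var_fat_999:.2f}")
```

Same average volatility, different tails: a normal-based VaR model looking at fat-tailed assets will report a smaller number than it should, and the shortfall is worst exactly where it matters most.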
James Kwak then chimes in with a different fundamental problem with VaR: the fact that it assumes that the world (or at least the world of asset prices) does not change in fundamental ways. As Kwak puts it, are asset prices like coin tosses, which you can safely assume will continue to show the probability distribution they’ve shown in the past? Or are they like games between two basketball teams, where the probability of one winning changes dramatically the day it drafts Michael Jordan? VaR assumes the answer is: like coin flips.
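Kwak's worry is easy to demonstrate with a toy regime shift (the numbers below are invented for illustration): estimate a historical 99% VaR from a stretch of calm days, then let volatility triple, and the "1% of days" loss threshold gets breached far more often than 1% of the time.

```python
import random

random.seed(2)

def hist_var(returns, level=0.99):
    # Historical-simulation VaR: the loss exceeded on (1 - level) of past days.
    losses = sorted(-r for r in returns)
    return losses[int(level * len(losses))]

# "Coin-flip" world: calibrate VaR on 1,000 calm days (daily sigma = 1%).
calm = [random.gauss(0, 0.01) for _ in range(1000)]
var_99 = hist_var(calm)

# Then the regime shifts: volatility triples, but the model doesn't know.
stormy = [random.gauss(0, 0.03) for _ in range(1000)]
breaches = sum(1 for r in stormy if -r > var_99)

print(f"Model said losses exceed VaR on 1% of days; "
      f"actual breach rate: {breaches / 10:.1f}%")
```

The model isn't even wrong by its own lights — it faithfully describes the old regime. It just has no way of knowing that someone drafted Michael Jordan.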
As best I can tell, Smith’s and Kwak’s critiques don’t contradict Nocera’s claims about how VaR was misused; they just point to additional, more fundamental problems with the tool itself. But as a primer on financial risk management over the past decade or so, the combination of the three is hard to beat.