No, not skinny models, mathematical models. Katrina Lamb writes:
As I see it, the problem with the financial market meltdown is not that David Li published an article in the Journal of Fixed Income Securities on the Gaussian copula function, or even that in his article Li, then an analyst with JPMorganChase, identified the price of credit default swap (CDS) contracts as a seemingly elegant proxy for the mortgage market - a proxy that greatly reduced the immense complexity of modeling values and risks in this market but, as it turned out, lost a great deal of critically important information along the way.

No - the real problem was with the incremental decisions practitioners made to adopt this model wholesale, to leverage it up to 50 or more times the worth of the underlying assets, and ultimately to heedlessly employ it as a path to untold riches. In other words it was the people who used the model, not the model itself. It was the rating agencies who, in conferring the AAA ratings without which the securities would have never been as widely distributed as they were, assumed that housing prices would never go down. It was the investment bankers who successfully shouted down the warnings of their internal credit risk departments so that they could sell ever higher volumes of CDOs, with ever-higher levels of leverage, in order to maximize their year-end bonuses....

A model did not take down Wall Street. Models do not "screw up" - they do exactly what they are supposed to do once they have their inputs. The screw-ups occur solely in our application of models to inappropriate situations or to situations which we do not fully understand. The predictions may not reflect reality outcomes as precisely as we wish, but that possibility of error needs to be accounted for by the ultimate decision makers. The output of a model should be an input in any decision process, not the entire decision process.
It never ceases to amaze me the extent to which some math coupled to software can cause people to shut their brains off. Mathematics is a language, and fluency matters. But it's only trillions of dollars...
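For the curious, here is a minimal Monte Carlo sketch of what a Gaussian copula does - an illustration of the general idea only, not Li's actual pricing formula, and every probability and correlation in it is a made-up number for demonstration:

```python
# Monte Carlo sketch of a Gaussian copula for joint defaults. This is an
# illustration of the general idea only, NOT Li's actual pricing formula;
# all probabilities and correlations below are made-up numbers.
import math
import random
from statistics import NormalDist

def joint_default_prob(p1, p2, rho, n=200_000, seed=0):
    """Estimate P(both names default) when the two default events are
    coupled through a Gaussian copula with correlation rho."""
    nd = NormalDist()
    # Each name defaults when its latent normal variable falls below the
    # threshold that reproduces its marginal default probability.
    t1, t2 = nd.inv_cdf(p1), nd.inv_cdf(p2)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        w = rng.gauss(0.0, 1.0)
        x1 = z
        x2 = rho * z + math.sqrt(1.0 - rho * rho) * w  # correlated with x1
        if x1 < t1 and x2 < t2:
            hits += 1
    return hits / n

# A single number (rho) carries all of the dependence information; that
# compression is exactly the "lost information" the quoted passage describes.
for rho in (0.0, 0.3, 0.9):
    p = joint_default_prob(0.02, 0.02, rho)
    print(f"rho={rho}: joint default probability ~ {p:.4f}")
```

Under independence (rho = 0) the joint probability is just p1 times p2; as rho rises it climbs toward the smaller marginal probability, so a modest error in the assumed correlation moves the estimated tail risk by a large factor.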
It never ceases to amaze me the extent to which some math coupled to software can cause people to shut their brains off.
True enough, but I think it goes further than this. Everybody on Wall Street wants to pretend that what happened is Not Their Fault. The geeks and their esoteric models make easy scapegoats, because the idiots who levered up at more than 30:1 can claim (and there is some truth to the claim) that they didn't understand what the models were doing. But for a while the models were telling those idiots exactly what they wanted to hear, i.e., "Do this esoteric thing and score billions of dollars in profits for the firm." They didn't want to know that the models had limits as long as the models were working.
If you don't understand the implicit assumptions of a model, you are likely to get things disastrously wrong when you try to use the model. That's how it works in physics (fortunately, the consequences are much less severe), and it's no surprise to me that the same thing happens in the financial world.
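The 30:1 figure is worth making concrete, because the danger is plain arithmetic: with one dollar of equity supporting thirty dollars of assets, a fall of just 1/30 in asset value wipes the equity out entirely. A back-of-the-envelope sketch (the leverage ratios are just examples, including the "50 or more times" from the quoted article):

```python
# Back-of-the-envelope leverage arithmetic. At a leverage ratio of
# assets:equity = L, equity is wiped out when assets fall by 1/L.
def wipeout_decline(leverage):
    """Fractional asset decline that zeroes out equity."""
    return 1.0 / leverage

for lev in (10, 30, 50):
    print(f"{lev}:1 leverage -> a {wipeout_decline(lev):.1%} asset drop erases the equity")
```

At 30:1 that threshold is about 3.3%, and at 50:1 it is 2% - declines well within the range housing markets had historically produced.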
The problem is not models themselves, but the fact that they tend to be treated as fact instead of as informational tools - especially when they venture into areas that are basically extrapolatory (is that a word?), where we have neither a detailed understanding of the mechanism nor the benefit of correction based on experience.
But no one outside of the financial world would ever believe that kind of model, would they?
A critical factor seems to have been that senior managers had no intuitive understanding of what the numbers meant, and when the models predicted ridiculously small risk for some things, this was not treated as a red flag.
AIG was treating CDS on highly rated companies as if the mean time between failures were on the order of a millennium, which is about an order of magnitude too low a risk estimate.
The people in charge either didn't understand or didn't care.
If the pointy-headed peons aren't at fault, we might have to consider the possibility that the aristocrats are, which would be unthinkable...
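To put numbers on that millennium figure: assuming a constant-hazard (exponential) failure model - my simplification, since the comment doesn't specify one - a mean time between failures of T years implies an annual failure probability of 1 - exp(-1/T):

```python
import math

# Convert a mean time between failures (MTBF) into an annual failure
# probability, assuming a constant-hazard (exponential) model. The
# exponential assumption is a simplification for illustration.
def annual_failure_prob(mtbf_years):
    return 1.0 - math.exp(-1.0 / mtbf_years)

print(f"MTBF 1000 yr -> {annual_failure_prob(1000):.3%} per year")  # roughly 0.1%
print(f"MTBF  100 yr -> {annual_failure_prob(100):.3%} per year")   # roughly 1%
```

An order-of-magnitude error in that estimate is the difference between pricing protection as if a default arrives once a millennium versus once a century - roughly 0.1% versus 1% per year.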
One of my favorite quotes:
"All models are false, but some models are useful"
One way in which this is true: all models make assumptions and it is the duty of the model user to constantly check the assumptions.
"It never ceases to amaze me the extent to which some math coupled to software can cause people to shut their brains off. "
I don't think the problem is people turning their brains off. If your goal was to make a huge pile of money through bonuses, promotions and getting into the next big thing early, accepting a model which let you do what you wanted was good thinking. That the model would result in the mid-term destruction of the financial system didn't mean that you had to return your salary and bonus. No one had to shut his or her brain off. Anyone with half a brain and a similar opportunity and a reasonable fondness for money would have made the same decisions.
The problem is not models - it's the vast quantities of free money being pumped into the markets by people forced to gamble their retirement savings on the market. A gift from the government to the investor class.
It's inherently impossible for everyone to get rich on the market. It's equally impossible for an entire society to save for retirement by doing so. The whole thing was always a scam.
Models don't kill economies, people kill...wait, where have I heard that before?
Isn't there something inherent in models that just asks for misuse? It's like excusing the flamethrower for over-cooking your burger. Isn't the whole point of models to remove the drudgery of thinking and come out with something that looks and smells like meaningful results?
And have you ever tried arguing with a model? "But at least it's got answers," they'll say. "It's got colorful charts and pretty 3-dimensional plots. We don't see your answers, your charts, your pretty 3-dimensional plots," they'll say. "Come back and see us when you've got some answers." OR "Where exactly is it wrong? If you tell us what coefficient to tweak, what value to use for x, what boundary condition to change we'll happily oblige. We know it's taken a team of three a year to build, but surely you can give us something in the next ten minutes."
In all fairness it's not "their" fault. Since a model is just providing permutations of "stuff we already know," it's bound to put out results that look familiar and reasonable (or get tweaked until they do), and when the answers coincide so closely with what "they" want to be true, the answers and thus the model become nearly irresistible. (Also in all fairness: this phenomenon is not foreign to the laboratory.)
In my 18 years in academia and industry (mech/aero eng.) I've never met a model that wasn't being misused by someone (and many many someones who should've known much much better...it's almost like they couldn't help themselves).
Fortunately, in my fields of endeavor, we've always tested before anyone got hurt.
A flamethrower? You mean, all through the 1990s and 2000s, it was cool and trendy to cook burgers with flamethrowers? No wonder I couldn't fit in. I was cooking mine with hand grenades...