Loaded words and modeling

Here is a nice review by Burton Malkiel of “Models Behaving Badly” by Emanuel Derman. The models of the title are from the world of finance: how are assets priced?

I am a layperson in the world of finance, so I find it difficult to know how to apportion “blame” for financial crises between faulty models and fraudulent inputs to them. Certainly history is littered with financial crises, so modern modeling alone cannot explain everything.

In any case, I just want to take this opportunity for a small lament that the beautiful act of modeling must be dragged through the mud by a financial crisis in this way. It would be fair to say that I am almost fanatical about the virtues of the concept and practice of modeling. I believe that modeling is inescapable. The world is complicated. Our senses deliver so much information, our mental apparatus must work so hard, that to process the world around us is to model. It is too much to ask that we understand everything; we have to understand a version of everything that is not so complex as the world.

This is also why economics works with models. We don’t have a scale replica of the world that we can play with to see how this affects that. We have to build a scale replica from scratch, using our best judgment to push insistently at the boundary between simplicity (so that we can understand our model) and usefulness (so that we can make something from it).

In a way we are much luckier in economics than in finance. Progress in economic theory comes as our models are improved upon and refined, but we are more able to iterate forward because our models are not embedded in a Leviathan global finance industry that depends on their continued function. Creative destruction of old models is hard when the house comes down with them.

With all this in mind I want to highlight this passage from the review:

He sums up his key points about how to keep models from going bad by quoting excerpts from his “Financial Modeler’s Manifesto” (written with Paul Wilmott), a paper he published a couple of years ago. Among its admonitions: “I will always look over my shoulder and never forget that the model is not the world”; “I will not be overly impressed with mathematics”; “I will never sacrifice reality for elegance”; “I will not give the people who use my models false comfort about their accuracy”; “I understand that my work may have enormous effects on society and the economy, many beyond my apprehension.”

How many of these will I accept for economics? Certainly the first; the model is not reality. Certainly the second; math is helpful in model-building but is not the point of model-building. The fourth and fifth are hard to argue with.

The third I don’t like. Everything that we do must sacrifice reality. The test of a model is not its realism (a realistic model airplane would be no fun at all). All models are unrealistic because all models are wrong. Of course elegance is not the test of a model either, except that an elegant model is one that illuminates a relationship in a clear way by cutting to the heart of what matters.

Anyway, the point is that I think that “model” is not a dirty word. I feel possessive about “modeling” much the same way as I feel possessive about “rationality” – what they mean to me is important and wonderful, and I hate to see them sullied by misrepresentations that stem from their overlap with the everyday senses of the words “modeling” and “rationality.” I wish that all concepts like these could have their own words, not borrowed from natural language.

Abstract revolutions or: if something is new, the old thing must be bad

There is a part of this small, otherwise enjoyable article about the advent of neuroeconomics that bothers me. The premise of the piece is that neuroeconomics is “seeking a physical basis for [economic theory] inside the brain”. This is a field that is certainly sexy and possibly exciting; a while back I argued that it is a field that in some sense rediscovers the absolute primacy of “preferences” as the keystone of economic theory. I said:

The potentially exciting thing about neuroeconomics is that, even allowing for inexactness, it might tell us more about the actual hedonic motivators of people. Ambitious, yes, but not unimaginable. Of course, to an economist who wasn’t under the mistaken impression that simplified preferences are supposed to be realistic, it might just amount to saying “your simplification is a simplification”, which is slightly less exciting news. Or not news at all.

OK. In today’s article, we learn that 

modern economic and financial theory is based on the assumption that people are rational, and thus that they systematically maximize their own happiness, or as economists call it, their “utility.”

Since it’s hard to figure out what is going on in people’s heads, the argument continues, we employed an idea called revealed preference, the reconstruction of unobserved objectives from observed choice. Neuroeconomics, the claim runs, may one day be able to identify brain structures that are associated with various components of choice, and so neatly sidestep the problem of unobservability.
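The revealed-preference idea described above can be illustrated with a toy sketch. This is not any particular author's formulation, just a minimal hypothetical example of the "directly revealed preferred" relation: if a consumer chose bundle x at prices where bundle y was also affordable, we infer that x is preferred to y.

```python
# Toy sketch of revealed preference (hypothetical data and function
# name). A bundle is a tuple of quantities, one per good; prices is a
# matching tuple of per-unit prices.

def revealed_preferred(prices, chosen, other):
    """True if `chosen` is directly revealed preferred to `other`:
    `other` was affordable at the budget spent on `chosen`."""
    cost = lambda bundle: sum(p * q for p, q in zip(prices, bundle))
    return cost(other) <= cost(chosen)

# Two goods; at prices (1, 2) the consumer bought bundle (3, 1),
# spending 1*3 + 2*1 = 5.
prices = (1.0, 2.0)
chosen = (3.0, 1.0)
alternative = (1.0, 2.0)   # would have cost 1*1 + 2*2 = 5: affordable

print(revealed_preferred(prices, chosen, alternative))  # True
```

The unobserved objective never appears in the code; only prices and choices do, which is exactly the point of the construction.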

But this is much too much:

While Glimcher and his colleagues have uncovered tantalizing evidence, they have yet to find most of the fundamental brain structures. Maybe that is because such structures simply do not exist, and the whole utility-maximization theory is wrong, or at least in need of fundamental revision.

Utility-maximization theory is wrong. It is wrong by construction, because it is a model, and models are wrong by construction. Why must we go through this? It might be easier to sell me an iPod if you first convince me that CDs are useless, but that’s marketing. Why do we need to market neuroeconomics this way? That tiny little word “wrong” up there is a sin, because it belies the very essence of modeling as a means to make sense of things.

Is it perhaps that what we should understand by the quote is that looking into the brain will tell us that people are not, in some sense, maximizing their hedonic pleasure by the choices they make? This amounts to both an attempt to open the black box called “preferences” and to pin down the (biological?) process by which decisions are actually made. If this is the sense in which utility-maximization will be proved “wrong”, then in the first place it is not clear to me that neuroeconomics can accomplish such a thing. Leaving aside the tricky questions of intent, free will and consciousness, can there be anything inside the black box but another, and another? The leap from the correlates of physical choices in brain activity to the content or existence of a utility function is huge.
But more importantly, even taking literally the notion that we will be able somehow to trap preferences or process in a cage, it is surely impossible that any breadth or depth of evidence on what these preferences are could preclude the need for us to model. What if the thankless treadmill of refining imperfect models of the world could at last be switched off? What if there were an apple of knowledge that would free us from the need for models ever again? This is a seductive idea, but it cannot be. To do away with models would be to be as complex as reality, and that is a fight that reality will win every time.