What is a ‘good’?

One very important concept in economics is ‘goods’. They’re one of the most fundamental building blocks of the tangible side of economics. If economics is about resource allocation, ‘goods’ are what the resources are being allocated towards. What is a ‘good’?

It’s not quite so simple as just a thing, like a TV or a sandwich; ‘goods’ are anything – any thing – that one can allocate resources towards that contains value for that allocator (rather confusingly, ‘services’ are ‘goods’ too, by this definition; so is something like ‘charitable contributions’). Yes, we’re going to have to plunge into the pool of abstraction. First of all, we have to think about time and space: is a hot coffee in the morning in winter in New England the same as a hot coffee in the afternoon at a swim-up bar in the summer in the Caribbean? Doubtful.

We have two broad strategies, as economists, for dealing with that problem. One might be to say, well, people have different preferences at different places, at different times. That’s difficult to work with because we have to try to hit a moving target, so to speak, and since we can’t observe preferences directly we certainly can’t observe changes in them either. A second might be to say, those two coffees are different goods. That’s also difficult to work with because then it’s difficult to compare things at all.

In fact, in either case it will be very difficult to generalize. On the individual level, knowing how you allocated your resources in this situation at that time doesn’t help me describe what you did or predict what you’ll do in the future: if I see what you did at 11 a.m. on Thursday when faced with the decision of coffee versus tea, how can that help me figure out what you might do at 10 a.m. on Friday when faced with the same decision? These aren’t necessarily the same ‘goods’ at both times, and in any case, the difference might not be just the time or something else that I can measure; it might be your mood, or whether you’re especially tired, or whether you just feel like a nice cup of tea for some mystical reason.

Seems trivial, but raise that to the level of the market for coffee, or the global coffee industry, or the impact of consumer decisions on the American economy, and the difficulty of defining a ‘good’ has snowballed into modeling chaos. For example, it’s impossible to properly think about the current climate in the market for oil without thinking of the market for oil in the future. Of course, in the real world, the financial world, it’s well understood that things vary in space and time: that’s why we have things like futures markets, which let you buy ‘thing X at time Y’ (of which ‘good X now’ is really just a special case). These things are considered separate (though connected) markets, with separate prices.

And that’s just how they were treated by economists as we developed microeconomics. The definition of a good was allowed to be very, very flexible and abstract, so that these ‘things’ don’t just vary in physicality but in time, space, functionality, and so on and so forth.

It’s not just time and space, though. An example: think of a college education. Is the good being sold by universities a ‘college education’? Is it a ‘degree from college X’? Is it a ‘college education of quality Y from college X’? The signaling model by Michael Spence wondered (and I obviously paraphrase wildly) whether people would still pay for a college education if the education were intrinsically worthless but had the value of ‘signaling’ that you were willing to give up four years to prove that you were great.
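Spence’s point can be sketched with a toy calculation. All the numbers below are invented for illustration: education is assumed to add nothing to productivity, but to cost low-productivity workers more, and the signal ‘works’ precisely when only the high-productivity type finds it worth buying.

```python
# A minimal, made-up sketch of Spence-style signaling.
# Education adds nothing to productivity; it only costs effort,
# and it costs low-productivity workers more.
wage_with_degree = 100   # employers pay this to anyone who signals
wage_without = 60        # and this to anyone who doesn't
cost_high_type = 30      # cost of four years for a high-productivity worker
cost_low_type = 50       # cost for a low-productivity worker

premium = wage_with_degree - wage_without  # 40

high_signals = premium >= cost_high_type   # worth it for high types
low_signals = premium >= cost_low_type     # not worth it for low types

# Only high types buy the (intrinsically worthless) education,
# so the degree credibly signals productivity.
print(high_signals, low_signals)  # True False
```

With these particular numbers the equilibrium separates; shrink the gap between the two costs and the worthless degree stops telling employers anything.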

Should it be surprising that the cost of a college education has been rising despite there being a bigger supply of colleges? Maybe not, if our definition of the ‘good’ includes ‘quality’, perceived or actual; it’s easy to build a new college, but impossible to build a new college with a reputation to rival Oxbridge or the Ivy League. The supply of that good, whatever it is, is fairly well fixed. Econ 101 is obsessed with ‘supply and demand’ analysis; the fact is, supply and demand analysis takes you very, very far if you’re prepared to speculate properly on what a ‘good’ is.
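That intuition fits in a toy supply-and-demand calculation (all numbers invented): if the supply of places at reputable colleges is perfectly fixed, an outward shift in demand shows up entirely in the price.

```python
# Toy model: perfectly inelastic supply of elite college places.
# Inverse demand: price = a - b * quantity, with made-up numbers.
def clearing_price(a, b=1.0, seats=100):
    """Price at which demand equals the fixed number of seats."""
    return a - b * seats

p_then = clearing_price(a=150)  # demand intercept in some base year
p_now = clearing_price(a=220)   # demand has shifted outward since

# Quantity can't adjust, so the whole demand shift lands on price.
print(p_then, p_now)  # 50.0 120.0
```

The design choice is the vertical supply curve: that is exactly the claim that you cannot build a new Oxbridge, whatever the number of new colleges.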

This is all quite similar to the corporate strategy mantra of identifying your ‘core competencies’ and defining the industry. For example: railroads and train companies aren’t ‘the railroad industry’ but ‘the transport industry’, competing with airlines and buses, and maybe even ‘the food industry’ if they serve food, and so on. That leads us dangerously close to the kind of ‘provider of transport services’ corporate-speak euphemisms that plague so many firms, but it’s pretty much the same question as how to define a good in economics.

Searching for a schism

Word reaches my desk this afternoon of an interesting-looking new book on the horizon, called “The Foundations of Positive and Normative Economics”, an essay collection edited by Andrew Caplin (of the monkey brains) and Andrew Schotter. Details are a bit sketchy, but the idea is just fine with me. I have a high tolerance for this kind of thing, and hopefully it lives up to my expectations.

On that note, I hope for something a bit different from the endorsement quotes on the book’s rather empty webpage:

“Are you puzzled by the implications of behavioral economics? Are we in the throes of a paradigm shift? Is neoclassical economics refuted? Economic methodology has never been more disputed. If you want to be part of the debate, this book is the place to start.”–Ken Binmore, University College London

I still don’t see this distinction between ‘behavioral economics’ and ‘neoclassical economics’, to be honest (see here, for example). Why is a different model of people an abandonment of neoclassical economics? ‘People maximize stuff’ is my minimalist description of neoclassical economics, and the behavioral set is just trying to figure out what the stuff is. Again (again, again), since it’s not possible to test rationality, the ‘maximize’ bit just has to float out there unattached.

I don’t really get this one either:

“Should economics take account of neuro-physiological data? Can subjective states of mind play a useful role in economic analysis? These and other provocative questions are examined and debated in this fascinating volume of essays from some of the deepest thinkers in contemporary economics.”–Eric Maskin, Nobel Laureate in Economics, Institute for Advanced Study

Maybe I’m behind the curve on this one, but I’m not sure what that really means. It gives the impression that this book might be predominantly concerned with the implications of psychological and neurological research, but to me all that is really something different from the epistemological question of what positive and normative economics are doing for us, where they came from and where they’re going. The state-of-the-art in economic theory or modeling is one thing, but I hope the book tackles the big questions rather than obsessing about the value of behavioral evidence.

I disagree fundamentally that “economic methodology has never been more disputed”; hence the futility of chipping away at the tiny and ultimately boring debates at the root of modern research. The assumptions and the beliefs can surely differ, but I think the approach is set on some fundamental level. Superficial differences in approach do not go down very far: yes, a ‘behavioral economist’ might be searching for realism by figuring out how people act, an ’empirical economist’ might be running regressions on cleverly constructed data, and a ‘theorist’ might be off in the land of abstraction and algebra, but all are operating on the same field of positive economic science. The real question is how we got to be that way, not why some economists do one thing and some another. That’s the question of the foundations of positive and normative economics.

Simplify, simplify

I might just go ahead and quote myself:

“One of the principles of writing economic theory is to create a simplified abstraction of reality.”

This is from an article by Russell Jacoby in the Chronicle of Higher Education:

“The world is complicated, but how did “complication” turn from an undeniable reality to a desirable goal? Shouldn’t scholarship seek to clarify, illuminate, or — egad! — simplify, not complicate? How did the act of complicating become a virtue?”

This is quite clearly not an article about economics (phew). It goes to show how very, very different we’ve become from the other social sciences and arts. Yesterday I was talking about the development lab at MIT; would they say, “Ah, there are a million and one things that affect the quality of education. I’m going for a drink”? Of course not. Economics seeks to expose in the simplest possible terms the relationships around us. Indeed, the world is complicated; that’s why the MIT lab has to perform randomized trials to isolate the effects of programs. It’s why theorists create little models of the world.

Contrast this with this characterization from Jacoby:

“The refashioning of “complicate” derives from many sources…. [academics] will prize efforts not only to complicate but also to “problematize,” “contextualize,” “relativize,” “particularize,” and “complexify.””

In economics we want to know: what you’re saying, why you’re right, and what could make you wrong. That’s about it. One of the most valuable consequences of treating economics as a science is that we parachuted out of this borderline nonsense:

“They will denounce anything that appears “binary.” They will see “multiplicities” everywhere. They will add “s” to everything: trope, regime, truth. They will sprinkle their conversations with words like “pluralistic,” “heterogenous,” “elastic,” and “hybridities.” A call for “coherence” will arrest the discussion. Isn’t that “reductionist”?”

This explains a big part of the schism between positive economics and other social sciences; we are OK with leaving some things out if it helps. When it comes to the policy debate and the normative questions, we have to throw all the other stuff back in, but “it depends” is a conclusion acceptable in positive economic research only if you can tell me exactly how and why it depends.

Jacoby has the neat sign-off:

“The cult of complication has led — to alter a phrase of Hegel’s — to a fog in which all cows are gray.”

In economics, our judgment cows are gray, but our scientific cows are black and white.

Perhaps we should stop using this phrase

I mentioned the old “dismal science” slight on economics the other day. It’s not exactly new news, but I like the story of the origins of the term a lot, because knowing it would surely cause people to think twice before using the phrase.

Here’s a good article that tells the story of Thomas Carlyle’s first uses of the phrase.

“Carlyle attacked Mill, not for supporting Malthus’s predictions about the dire consequences of population growth, but for supporting the emancipation of slaves. It was this fact—that economics assumed that people were basically all the same, and thus all entitled to liberty—that led Carlyle to label economics “the dismal science.””

Now, economics is probably pretty low on the list of reasons to oppose slavery, but it seems that Carlyle was taking issue with John Stuart Mill (among others) for arguing that since people are basically the same, there’s no such thing as a “natural” hierarchy of people. Carlyle’s position, sadly, speaks for itself:

“Carlyle disagreed with the conclusion that slavery was wrong because he disagreed with the assumption that under the skin, people are all the same. He argued that blacks were subhumans (“two-legged cattle”), who needed the tutelage of whites wielding the “beneficent whip” if they were to contribute to the good of society.”

Aside from its connotations – which are about as politically incorrect as it’s possible to be these days, and would certainly not be allowed to be printed in any of the places where we see the phrase “dismal science” – the target Carlyle was directing his argument towards is not much like the method of economic science at all. In fact, he seems to really be taking issue not with the practice of scientific, positive economics but with the assumptions the economists made about people. From another article on the same subject:

“In short, Carlyle was of the view that compulsion, rather than market forces should regulate the supply of labour on plantations in the West Indies because the laws of supply and demand are not appropriately applied to the relationship between White and Black as they are contrary to “their mutual duties” (white = master and black = servant) as ordained by “the Maker of them both”. In Carlyle’s opinion: “declaring that Negro and White are unrelated, loose from one another, on a footing of perfect equality, and subject to no law but that of supply and demand according to the Dismal Science”, “is clearly no solution” to the problem.”

Oddly enough, and though it’s probably ridiculous to compare them, Carlyle is attacking exactly the same assumption that is still criticized today: the assumption on the motivation of people in economic models. Certainly the reasoning of the critic of today is significantly less outrageous than Carlyle’s, but they’re shooting at the same target.

Carlyle certainly seems to demand a different kind of response than today’s defense of the modeling of people – “it’s just an abstraction, we know we’re not being realistic”. Luckily, as some of Mill’s angry and eloquent responses indicate, Carlyle’s normative beliefs were vigorously challenged right from the start. His assumption was, I hope we can agree, unrealistic. If he had performed a positive economic analysis based on his assumption, it would have been badly wrong and inaccurate.

No normative belief or opinion can ever be “wrong”, but an assumption can certainly be wrong. Which assumption would lead to better economic science: Carlyle’s assumption of natural servitude or Mill’s assumption of natural equality? If Carlyle had argued that slavery was a good thing, plenty of people would have disagreed with his opinion. When he argued that people are unequal and thus servitude is a better use of people than freedom, he didn’t just have an objectionable opinion, he had bad science.

Modern economics has fought hard to work the number of abstractions made on the motivation of people down to just one: rationality. We don’t restrict what people care about, we just require there to be some method to the madness. Economics should be value-free, boring, scientific, clinical, and, yes, dismal, but I’d think twice before I called it the “dismal science”.


As someone who laments misperceptions of what economists are and do, the barriers to communication with anti-capitalist groups make me very sad indeed. How did I get there? I was looking for something entirely different when I stopped to read an article by Roy Weintraub talking about neoclassical economics. To someone with my beliefs in what economics is, it’s a bit schizophrenic. This is nice:

“Neoclassical economics is what is called a metatheory. That is, it is a set of implicit rules or understandings for constructing satisfactory economic theories. It is a scientific research program that generates economic theories.”

This is pretty good news: the beast called “neoclassical economics” is merely a box inside which we concoct scientific theories: inside our box, this would lead to that. Weintraub continues to say that the assumptions of neoclassical economics

“include the following:

1. People have rational preferences among outcomes. 2. Individuals maximize utility and firms maximize profits. 3. People act independently on the basis of full and relevant information.”

Of these, 1 is redundant to me because I think rationality is not testable and is therefore irrelevant, especially since it’s probably implied by 2, and 3 is at best outdated (economists these days are very interested in the implications of imperfect or asymmetric information). If I were pressed to define neoclassical economics, I think perhaps the definition I would use is similar to 2. I’d say that neoclassical economics is the branch of economics that models entities (individuals, firms, governments, etc.) as if they try to get the outcome they like best from the ones that are available.
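That working definition fits in one line of pseudo-model. The example below is purely illustrative (the options and utilities are invented); the point is only that ‘trying to get the outcome you like best’ is, formally, an argmax over whatever happens to be available.

```python
# "Pick the outcome you like best from the ones available": in model
# terms, just an argmax over a choice set (all names invented).
def choose(options, utility):
    """Return the available option with the highest utility."""
    return max(options, key=utility)

# A commuter with made-up preferences over travel modes:
modes = ["walk", "bus", "drive"]
u = {"walk": 2, "bus": 5, "drive": 3}

print(choose(modes, u.get))  # bus
```

Nothing in this restricts what the utility numbers represent, which is exactly the sense in which the definition is minimal.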

I disagree more with the stance of the article when Weintraub repeatedly invokes “the neoclassical vision”. The connotations of this phrase probably reinforce the misconception that economists think the box in which neoclassical economics works obeys the same rules as the real world. I doubt a physicist thinks that a vacuum is the same as the real world, just as I doubt that any economist thinks that the abstractions of economic modeling are the same as the real world.

It’s true that a positive economist who seeks to explore “what is” should not neglect to examine the differences between abstraction and reality, but again we must ask at what point the value of realism is eroded by its inability to draw any conclusions. I think the real choice we’re faced with is the application of the economic method that says “if this unrealistic simplification, then that” versus a shrug of the shoulders; if it were possible to achieve the ideal “if this, then that”, who would reject it? Should we stop trying because we can’t be perfect?

Perhaps partly because of such confusions, “neoclassical economics”, aside from having a silly name, seems to have become something of a lightning rod for the anti-capitalist set as much as it is for economists with different ideas. Google neoclassical economics and you get – on page one – a page from adbusters (an anti-consumerist publication – Wikipedia entry), and a less histrionic “critique of neoclassical economics” by Herb Thompson.

“Neoclassical economists normally treat economic instability as the effect of exogenous, stochastic factors even though nonlinear economics suggests that what may previously have been considered exogenous, or random, may more likely be endogenous to capitalist social formations.”

I confess I’m not sure what “nonlinear economics” means (the almighty Google was inconclusive): clearly I, too, have been indoctrinated into the neoclassical cabal. However, I actually think that the quotation touches on an interesting idea. Can we figure out if the primacy of money as a measurement of outcomes “caused” the rise of the capitalist method of organizing resources, or if the capitalist method “caused” the rise of the primacy of money?

A difficult one. For example, to take a typical example of an anti-capitalist complaint, do people buy sweatshop goods because they don’t know they’re sweatshop goods or because they care more about cheap goods than where they came from? I think the latter is more consistent with “money primacy leads to capitalism” and the former is more consistent with “capitalism leads to money primacy”, although I’m sure that could be debated.

It is possible to imagine that incorrect normatization of positive economics – by which I mean the mistaken assumption that some measurable positive economic variable is a measure of the quality of an outcome – actually causes problems within the economic system. People will do what they will, but if a policymaker chooses a policy based on the primacy of money as a measure of the quality of an outcome, there’s a real possibility that the system itself is influenced by its measurement.

The Thompson article also includes the following excellent paragraph:

“The ‘rational’ consumer of the mainstream economist is a working assumption that was meant to free economists from dependence on psychology…. The dilemma is that the assumption of rationality as intertemporally optimising is often confused with, and regularly presented as, real, purposive behaviour. In fact, the living consumer in historical time routinely makes decisions in undefined contexts. They muddle through, they adapt, they copy, they try what worked in the past, they gamble, they take uncalculated risks, they engage in costly altruistic activities, and regularly make unpredictable, even unexplainable, decisions.”

First of all, this is crucially wrong: “rationality” is not something that can ever be more than an assumption, unless you think you can test it. Further, assuming rationality does not exclude any of the motivations Thompson talks about. It would be trivial to write down a model of a rational person who “engaged in costly altruistic activities” – I simply have the person care about others and optimize rationally. The assumption that Thompson is really discussing here is the straw man of “rationality equals maximizes money”, which I have previously argued is absolutely not an assumption of any economic theory, neoclassical or otherwise.
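Writing such a model down really is trivial. The functional forms and numbers below are invented purely for illustration: the agent’s utility includes a term for giving to others, and ordinary maximization delivers positive, ‘costly altruistic’ giving.

```python
# A rational agent who values others' welfare: utility over own
# consumption c and charitable giving g (all forms/numbers invented).
#   u(c, g) = log(c) + alpha * log(g),  subject to  c + g = m
# The first-order condition gives g* = alpha / (1 + alpha) * m.
from math import log

def best_gift(m, alpha):
    """Utility-maximizing giving for this 'rational altruist'."""
    return alpha / (1 + alpha) * m

g = best_gift(m=100, alpha=0.25)
c = 100 - g

def u(c, g, alpha=0.25):
    return log(c) + alpha * log(g)

# Check it really is a maximum: nearby budget splits do worse.
assert u(c, g) > u(c + 1, g - 1) and u(c, g) > u(c - 1, g + 1)
print(g)  # 20.0
```

The agent gives away a fifth of the budget while “optimizing rationally” throughout; altruism sits inside the preferences, not outside the model.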

Beyond that, this is really back to the same problem that the Weintraub article was getting at: we’re doing the “if this unrealistic simplification, then that”. There’s a strong push in so-called “behavioral economics” to figure out if there’s a workable way to first make realistic generalizations on how people behave and second to incorporate them into the unrealistic simplification of neoclassical economics. While that goes on, the economist who seeks to defend his method must be clear on what his unrealistic simplification actually is and what it is used for.

As usual, no-one is fit to judge if the anti-capitalist model is “better” than the capitalist status quo, but I greatly hope that we would be able to talk about what each would mean. If somehow I were able to convince adbusters to sit down with me and I asked them what they wanted to do and what they wanted to achieve, what might they reply? I don’t know what they would say, but whatever their answer, I would like to figure out what it would take to achieve their goals, what the consequences of their chosen actions would be, what it would mean for people, not just them or me. I hope they would like to figure that out too. That’s positive economics.

Too complicated?

One of the principles of writing economic theory is to create a simplified abstraction of reality. If the theory convincingly isolates an idea, it cannot be too simple; hopefully, the narrower the question, the simpler the theory can be written.

Economists therefore appeal to the “all else equal” assumption a lot. The oft-perceived superiority complex of economists is traceable to our willingness to use the “all else equal” clause to make our questions answerable, theoretically and empirically. If we want to write relevant economic models that investigate the link between A and B, we hold C equal; whether or not C would really be equal or relevant in reality, we can’t isolate the effect we’re interested in if we don’t figure out a way to stop it from contaminating the abstraction.

It’s the same principle that underlies the ideal of “controlled experiments” in all science; empirically, if we want to figure out how A and B are related, we need to be careful to avoid finding an effect only because a third factor C is involved. For example, there’s an important difference between “people who exercise more have a longer lifespan” and “people who exercise more also eat well, and people who eat well have a longer lifespan”. That’s well understood in statistics and empirics generally; the same principle applies just as much when we use the theoretical standard of proof rather than the empirical one.
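The exercise/diet example can be simulated in a few lines. Everything here is made up: lifespan depends only on diet, but because diet and exercise are correlated, the naive comparison credits exercise with years it didn’t cause; holding diet equal makes the spurious effect disappear.

```python
# A made-up simulation of the exercise/diet/lifespan confound.
import random

random.seed(0)
people = []
for _ in range(10_000):
    eats_well = random.random() < 0.5
    # Exercisers are mostly the same people who eat well.
    exercises = eats_well if random.random() < 0.8 else not eats_well
    # Lifespan depends ONLY on diet in this invented world.
    lifespan = 75 + (5 if eats_well else 0) + random.gauss(0, 2)
    people.append((exercises, eats_well, lifespan))

def mean_lifespan(rows):
    return sum(r[2] for r in rows) / len(rows)

# Naive comparison: exercisers "live longer" purely via the confound.
naive_gap = (mean_lifespan([p for p in people if p[0]])
             - mean_lifespan([p for p in people if not p[0]]))

# Holding diet equal (comparing only within the eats-well group),
# the exercise "effect" vanishes.
well = [p for p in people if p[1]]
controlled_gap = (mean_lifespan([p for p in well if p[0]])
                  - mean_lifespan([p for p in well if not p[0]]))

print(round(naive_gap, 2), round(controlled_gap, 2))
```

The naive gap comes out at roughly three years; the within-diet gap is statistical noise around zero, which is all “all else equal” is asking for.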

Why, then, is “economic theory” so amazingly bewildering? With very little exaggeration, we can claim that no great development in the science of economics has used very complicated techniques, even when math was involved; yet even to the technically competent, a lot of economics research is very difficult to understand. Of course, if an economist could find a ground-breaking theory that can be represented in two lines, I’m sure she’d write it. Is the reason for the complexity an attempt to make average ideas look better?

Let’s be charitable and assume that’s not the case. I think that once we exclude the “obfuscation motive”, there are two possible reasons why economic theory is technically complex. One might be that the relationships being investigated are broader, that less is held equal, that we’re looking to more nuanced explanations. Another possible reason is, paradoxically, that theory gets more complex as the questions get narrower – the more we assume, the higher the complexity.

Why? Imagine I want to figure out the relationship between a person’s income and the number of hours that person does voluntary work. This is a question that asks about how people allocate a scarce resource, time. I might make an abstraction that says “if all people like both money and helping others, then people with higher incomes will spend more time helping others, while people with lower incomes will spend more time trying to earn extra money.” I might make an abstraction that says “people with more income work more so have less time to volunteer”. What assumptions lead to the first conclusion, and what to the second?
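The dependence of the conclusion on the assumptions can be made concrete. Both specifications below are invented purely for illustration; the only difference between them is what we assume people want, and that alone flips the prediction.

```python
# Two made-up specifications of the same time-allocation problem.
# Each person has T hours to split between paid work (at wage w) and
# volunteering v; consumption is c = w * (T - v).

T = 16  # hours available (invented)

def v_subsistence(w, b=0.5, c_bar=20):
    # u = log(c - c_bar) + b*log(v): subsistence consumption c_bar
    # must be covered first.  FOC gives v = b/(1+b) * (T - c_bar/w).
    return b / (1 + b) * (T - c_bar / w)

def v_quasilinear(w, b=8):
    # u = c + b*log(v): each volunteer hour always costs w in
    # consumption.  FOC gives v = b/w, falling in the wage.
    return b / w

# Assumption 1 (subsistence needs): higher earners volunteer MORE.
assert v_subsistence(w=20) > v_subsistence(w=5)
# Assumption 2 (constant value of money): higher earners volunteer LESS.
assert v_quasilinear(w=20) < v_quasilinear(w=5)
```

Neither utility function is observable, which is exactly the bind: the data on incomes and volunteer hours arrive after the modeling choice has already been made.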

If I wanted to broaden my question, I might start including in my theory labor market conditions, the availability of volunteering opportunities, the peer pressure to volunteer, the social pressure to earn more money to buy a big car, and so on and so forth. That would certainly make my theory more complicated; whether or not it makes it a better theory than the one that kept all that stuff equal and abstracted from it is a matter of preference, but I’m sure it would be more difficult to understand.

The second way to make the theory more “complicated”, at least superficially, might be to keep all the same stuff equal, but to say “imagine the person cares this much about money and this much about volunteering; then someone with this income will volunteer this much”. The abstraction is getting more abstract; we are getting more and more specific about the conditions of our model, and we must use more specific techniques to, in particular, quantify the result.

What do we gain from this quantification, and what do we lose? Perhaps we can look at actual evidence on the link between income and volunteering, and compare it to the quantified prediction, but that only works if all else is equal in our evidence, too. A better justification is that we can get a theoretical idea of how big our effect is. However, as we get more specific we get more abstract; in this example, we’re getting more abstract about preferences, which are themselves unobservable. We’ve gone from “a person cares about money and volunteering” to attaching magnitudes to those cares.

The link between simplicity and usefulness is not just in the realism of the abstraction; it’s also in the procedure itself. Economic theory should be neither too broad nor too narrow, but “just right”, whatever that means. Assume too little and we can’t figure out what’s really causing what; assume too much and we rest an entire argument on a special case. What’s the simplest model that explores the relationship I care about, and what’s the simplest model that shows what I want to show about that relationship?

Oh, and a practical suggestion: I’d love it if we all stopped writing ceteris paribus and used “all else equal”. What’s with the Latin?