Students, as well as the general public, often question the scientific nature of economics. Indeed, while economics uses very sophisticated mathematical models, their predictive success leaves much to be desired. Yet economists feel that they learn a lot from these models. This article argues that part of economic theorizing does not follow the Popperian view of science; rather, some of the knowledge generated is analogical. According to this view, research in economics also serves rhetorical purposes, and analogies can be useful alongside general rules. Moreover, the role of axiomatic decision theory can be understood as clarifying arguments in the context of public debates.
Economics and its limits
Since 2008 economics has been under particularly severe attack. Questions about its scientific nature have been raised, some of them old, some new. One line of attack concerns ideological bias. Post-modern thinkers argue that no science can be perfectly objective, let alone a so-called social science. A scientist who describes reality by a theory implicitly supports certain views of the world as opposed to others. The very language one uses shapes one’s thought. And, it is argued, it is inevitable that one’s upbringing and social network, one’s interests and alliances, will affect one’s view of the world. Against this background, the argument goes, economists pretend to be objective when they are not.
This is surely a valid point. Even if one tries as hard as one can to be objective and honest, one can’t help being biased in the way one sees things. Further, when a person expects benefits from a given social system, they won’t be eager to question the presumably scientific basis that supports it. However, it is important not to conflate the descriptive with the normative here, that is, a description of reality with a description of our goals. For example, one may think that hunger, wars, and diseases are unlikely to be eradicated, without embracing them as desirable phenomena. Similarly, the fact that one can never be fully objective does not mean that one should not try.
Another argument against economics has to do with its predictive success: the field often fails to provide accurate numerical predictions. Despite the sophistication of the mathematical tools employed by economics, it is not viewed as an “exact” science, and people often wonder whether all this mathematical analysis is really needed. Why do economists worry about the rigor of the analysis if their predictions are so poor?
Indeed, this point is valid, too. However, one should point out that, in general, the sciences are much better at predicting phenomena that can be isolated and repeatedly observed, ideally also experimentally, than at predicting global phenomena, which cannot be isolated for theoretical analysis or for experimental studies. For example, physics is considered to have understood the basic equations that govern meteorology, as well as seismology. Yet we cannot predict earthquakes, nor can we predict the weather more than a few days in advance. The reason is that the system under scrutiny is far too complex to predict or simulate, and that data we cannot measure at present may have very large effects in the near future, so that prediction is seriously hindered.
Compared to meteorology and seismology, economics has two additional problems. First, the discipline has not yet identified the basic forces at work: we do not have the equivalents of the flow equations. Second, we are dealing with a system that responds to theory endogenously. For example, if we had a very accurate theory that could predict the behavior of financial markets, and it predicted a stock market crash in two days, it would be self-refuting: people, being aware of the theory, would immediately respond, and the crash would probably come a day earlier. Thus, it is conceptually simpler to predict a system that does not respond to its own predictions (such as the weather) than a system that does (such as the stock market).
Having said all this, it should also be mentioned that in economics, too, predictions are better when one can isolate a sub-system, and analyze it using theory, simulations, and experiments.
Yet another source of concern about economics as a science is the fact that its assumptions are all considered to be false. Indeed, practically all the assumptions that economics makes about individual behavior have been attacked by carefully designed psychological experiments, with the Nobel-winning project of Daniel Kahneman and Amos Tversky being a major contribution. People often ask, therefore: why does economics make wrong assumptions? And is it any wonder that the predictions are false when the assumptions are?
It is this question I would like to discuss here. What can be learned from assumptions that are false? What is the role of models in economics, and can they be useful even though they are based on such false assumptions? Relatedly, is economics a predictive science? These questions are at the heart of a working paper that Andrew Postlewaite, Larry Samuelson, David Schmeidler, and I completed recently (“Economic Models as Analogies”). To try to say something about such ambitious questions, we have to start with a more fundamental, and even more ambitious one: how do people reason?
How do people reason?
There are two main modes of reasoning, both well documented in the psychological literature as well as in everyday life. The first is reasoning by analogies. Aristotle and David Hume already pointed out the role of analogies, and in the modern literature it is called “case-based reasoning.” This mode of reasoning is quite simple: it suggests that if one case is similar to another, their outcomes will also be similar. Reasoning by similarity is a very basic type of reasoning, one that was used for millennia before it was formalized. The second mode of reasoning is theoretical, or “rule-based”: it uses data for inductive inference about general rules, or theories, and then uses these theories to generate predictions in new cases. The deductive part of this process was formalized as early as the ancient Greeks. This type of reasoning is considered the standard way of conducting scientific inquiry: scientists are supposed to come up with general rules, test them, refine them upon refutation, and thus proceed to produce increasingly accurate theories and better predictions.
Psychology recognizes both types of reasoning. Researchers have been trying to understand how people use each mode, when they use them, what kinds of mistakes people are prone to make with either, and so on.
Completely independently of psychology, a similar trend can be observed in statistics. This area of inquiry is not about understanding how people think; rather, it is about the “right” way to derive conclusions from data, that is, how we should be thinking. Yet both types of reasoning exist there as well.
With the standard statistics that we study at school, and, in particular, with parametric statistical inference, we assume the existence of a general rule, or a distribution that governs the process we observe. The question is, which distribution is it? The nature of the statistical inference problem is to guess this distribution given the data that were actually observed. The knowledge, or educated guess, of this distribution would help us make predictions about future observations. However, sometimes statisticians do not try to guess the general rule; rather, they use past observations to make predictions without identifying the rule, or the distribution. This is more closely related to non-parametric statistics. Techniques such as kernel classification or nearest-neighbor methods are examples, used in machine learning, computer science, and “modern statistics.” Basically, these methods are based on analogies. For instance, the nearest-neighbor approach takes the past cases that are most similar to the one at hand and predicts the outcome of the present case based on the outcomes of those past cases. In a sense, this is a much less presumptuous approach, from an epistemological viewpoint, than classical statistical inference: the scientists who use it do not claim to know the rule governing the process or to understand the data-generating process perfectly. They remain agnostic about the general rule, or distribution, and modestly try only to predict the outcome of the case at hand.
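The nearest-neighbor idea can be sketched in a few lines. The data and the similarity function below are hypothetical illustrations, not drawn from any actual study:

```python
# A minimal sketch of nearest-neighbor prediction: find the most
# similar past case and borrow its outcome.
def nearest_neighbor_predict(past_cases, new_case, similarity):
    """past_cases: list of (features, outcome) pairs.
    similarity: function mapping two feature vectors to a number,
    where larger means more similar."""
    _, best_outcome = max(
        past_cases, key=lambda case: similarity(case[0], new_case)
    )
    return best_outcome

# Similarity measured as negative squared distance between features.
def similarity(x, y):
    return -sum((a - b) ** 2 for a, b in zip(x, y))

# Hypothetical past cases: (feature vector, observed outcome).
cases = [((1.0, 2.0), "boom"), ((4.0, 5.0), "bust"), ((1.2, 2.1), "boom")]
print(nearest_neighbor_predict(cases, (1.1, 2.0), similarity))  # → boom
```

Note that no distribution is estimated anywhere: the prediction rests entirely on the analogy between the new case and the most similar past one.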
Notice that both on the psychological side, documenting how people actually think, and in statistics, determining the right way to learn from data, we find both analogies and theories. It thus stands to reason that both approaches exist in scientific reasoning as well. The standard model of scientific reasoning, according to the logical positivists and to Popper, is logical and rule-based: one elaborates theories, tests them, and, if they fail, adjusts them. It is a sociological fact that this rule-based view, combining inductive and then deductive reasoning, is the dominant one. Yet some scientific reasoning can also be viewed as case-based. Our claim is that much of modern economic theory is of this type.
Akerlof’s Lemons or the power of analogies
Let’s take an example from an article that is very famous among economists, “The Market for Lemons: Quality Uncertainty and the Market Mechanism,” published in 1970 by George Akerlof.
A “lemon”, in American slang, is a car that is found to be defective after it has been bought. The point of the lemons example is that in the used car market, a seller knows whether her car is a lemon, whereas the buyer does not. More generally, the article deals with any trade situation in which the seller knows more about the product than does the buyer. Akerlof was trying to understand the problem of quality uncertainty and its effects on markets, and discussed the market for used cars as an example, or even a parable. He concluded that in this market, owners of good cars will not offer their cars for sale, because the price that buyers are willing to pay for them is too low. The reason is that buyers, not knowing the true quality of the good, are only willing to pay the price of a good of average quality. This is fine with the low-quality sellers, but not with the high-quality ones. Thus, these sellers stay out of the market, and the average quality of the goods offered further decreases. At equilibrium, only lemons are sold: the bad items drive out the good ones.
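The unraveling logic can be traced step by step. Here is a minimal numerical sketch; the quality values, and the assumption that buyers value a car at 1.2 times its quality, are hypothetical choices made only for illustration:

```python
# Hypothetical sketch of Akerlof's unraveling argument.
# A seller's reservation value equals her car's quality; buyers value a
# car of quality q at 1.2 * q but observe only the average quality on offer.
qualities = [10, 20, 30, 40, 50]
on_offer = list(qualities)

while True:
    price = 1.2 * sum(on_offer) / len(on_offer)  # pay for average quality
    still_selling = [q for q in on_offer if q <= price]
    if still_selling == on_offer:                # nobody drops out: stable
        break
    on_offer = still_selling                     # best cars withdraw; repeat

print(on_offer)  # → [10]: only the worst car, the lemon, remains
```

Each round, the price reflects the average quality of what is still on sale, so the best remaining sellers withdraw, pushing the average, and hence the price, down further, until only the lemon is traded.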
This is a very simple example, and legend has it that the paper was rejected four times before it was published, because it was too simple. Indeed, to understand Akerlof’s reasoning, there is no need to look at empirical data, no need to follow complicated mathematical proofs, no need to run experiments: one hears the story and one immediately sees the point. The story is so simple, so obvious in hindsight, that people not only understand it, they can immediately see other examples of similar problems. No wonder Akerlof was awarded the Nobel Prize for his contribution to understanding information asymmetry: with this very simple article he radically changed the way we think about markets.
But what is this model about, exactly? The classical view of science would hold that Akerlof has told us something about used car markets, and that to confirm his theory one has to test it and see if it is refuted. But, with all due respect to the used car market, this is not what Akerlof received the Nobel Prize for. And, in fact, if it so happens that in that market Akerlof’s model does not predict outcomes very well, this would not change much about the way economists view the model. To capture the way economic theorists think about it, the lemons story can be taken as an illustration, a metaphor, or a parable. These are different ways to describe the fact that one learns so much from this story, and not necessarily about the used cars it deals with on the surface. We would suggest calling such a story a “theoretical case”.
Many economic models from the last three decades, especially in microeconomics, can be interpreted this way. It’s a specific way of thinking about problems without necessarily thinking of where they apply. If we wanted to make proper science à la Popper, we should be able to say when and where each model applies, what makes it similar to what, how to judge this similarity, and so forth. This is typically lacking in these models. So one could say that this is not science; this is maybe pre-science. Be that as it may, it is a method of reasoning that economists find very powerful.
Importantly, when we engage in this case-based approach to science, and do not specify the algorithm for similarity judgments, we leave quite a bit of the reasoning process to the listener’s intelligence and intuition. When I’m in the classroom and I tell my students about Akerlof’s lemons, I don’t tell them precisely when markets are going to fail, or which markets are similar to the used car market. What I’m telling them is: “Here is an important thing to have in mind; when you deal with real life problems, keep this theoretical case in your mind, and wonder whether the case you are looking at is more like the lemons case or more like the efficient markets we were talking about last semester. And I leave it to you, to your intelligence, to decide on the spot whether you should apply one model or the other.”
In this “science”, or “pre-science”, if you will, we are not providing refutable predictions, because we are not committed to a particular similarity function and a particular way of using similarity to draw inference from a database of cases to generate new predictions. We only provide the listeners – say, our students – with things to look at and analogies to think of.
Models of nothing
In our article, my colleagues and I try to draw a couple of implications from this way of viewing economists’ reasoning. Thus, we take a sociology of science perspective, trying to see which special features of economics as an academic discipline might be related to its application of case-based reasoning to scientific predictions. For example, economists seem to be relatively unperturbed when their models are refuted, while they seem to think that many possible set-ups are “examples” of their models. Scientists in other disciplines are very much troubled by refutations of their theories, and they correspondingly attempt to delineate the scope of their theories so that such refutations are less likely. This can be explained by the case-based vs. rule-based view of a “model”: if, as in the standard approach, a model is supposed to be a general rule, it is rather bad news to know that it has been refuted, and the chances of this happening would be smaller if the model is restricted. If, by contrast, a model is a theoretical case that may shed some light on actual cases, the model cannot be refuted by these cases. Correspondingly, one risks little by saying that yet another case resembles one’s model.
Similarly, this view of economic reasoning can explain why economists feel that they learn a lot from models that some consider to be “models of nothing”. Models that are too abstract and too idealized to be even a remotely accurate description of any concrete reality may not count for much in most areas of science. Yet, economists are happy to produce such models, and they honestly feel that they understand the world better thanks to them. The reason is that economists feel that, thanks to such models, they are better equipped to understand the next problem that is going to show up. Such models enrich one’s tool kit, even if they do not produce predictions in an algorithmic way.
As mentioned above, one of the claims leveled against economics is that its assumptions are all false. Why do economists keep using models that are based on such false assumptions? Our case-based reasoning model might explain that: viewing models as theoretical cases, which can never be refuted by actual cases, economists seem to be quite comfortable with the fact that all their theories are wrong. A model that is wrong can help make predictions, just as a parable that is obviously not a true story can help us understand reality.
Consider the following example. It is a common assumption in economics that people prefer to have more money to less money, and that they are motivated by their own self-interest, so that each agent cares about how much money she has, irrespective of the others. By contrast, there are the “dictator” game experiments, where one player decides how to split a sum of money between herself and another player, and, with no monetary motive whatsoever, players often choose to give some of the money they received to the other player.
Suppose that we find that, in the experiment, the player who has the choice gives the other 20% of her money, on average. What should we make of it? Should we conclude that the selfish, materialistic model is refuted and that we should discard it and any conclusions that were based on it? This, indeed, would be the view of the rule-based model of scientific reasoning. However, we claim that the way economists react to such a “refutation” of their model is quite different. The model, despite the refutation, is still there, as a theoretical case, serving as a reasoning aid. So does the experiment: it is a concrete case, in which the observed outcome differs from that of the theoretical case. Next comes a concrete prediction problem, say, how much of their income people will donate to help the poor in their society. The theoretical case suggests that they will give nothing, while the experimental one suggests that they will give 20% of their income (voluntarily). It now behooves the economist, making the prediction, to ask himself or herself to which of the two “known” cases the new one is more similar: the theoretical or the experimental one? An economist who still uses the theoretical one, despite its experimental “refutation”, might be viewed as saying, “to the best of my judgment, the real problem I’m interested in is more similar to the model than to the experiment”. Whether this approach is justified or not is beyond our scope here. Our point is that the case-based reasoning model can better explain economists’ reaction to experimental “refutations” than can the rule-based, Popperian one.
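The judgment that aggregates the two cases can itself be made explicit. Below is a minimal sketch in the spirit of case-based prediction; the similarity weights are hypothetical, standing in for the economist's intuitive judgment:

```python
# Hypothetical sketch: a prediction formed as a similarity-weighted
# average of the outcomes of known cases.
def predict(cases):
    """cases: list of (outcome, similarity) pairs, similarities > 0."""
    total_similarity = sum(s for _, s in cases)
    return sum(outcome * s for outcome, s in cases) / total_similarity

# The theoretical case predicts 0% giving; the experimental case, 20%.
# An economist who judges the donation problem twice as similar to the
# model as to the experiment predicts a share of roughly 6.7%:
share = predict([(0.00, 2.0), (0.20, 1.0)])
print(share)
```

Reversing the weights, that is, judging the problem more similar to the experiment, pulls the prediction toward 20%: the formula makes visible exactly where the disputed similarity judgment enters.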
Finally, one can understand why economists, as well as other social scientists, deeply cherish the common language of their paradigm. Because, according to our view, much of the scientific activity involves the judgment of similarities in order to find the “right” analogies, it is important to facilitate the identification of points of similarity among different cases. The common language serves this purpose. By embedding each model in the framework of players and strategies, outcomes and utilities, it is much easier to find the similarity between models than it would have been without the underlying language.
The importance of rhetoric
This view of “scientific” activity as an exploration of “theoretical cases” can also explain the role of rhetoric. And I do not use this word in a negative sense. I use it to refer to the art of convincing each other – not just winning a debate or making one’s opponent look foolish, but truly convincing them, so that they walk home with a different view of the problem than the one they started with. This type of persuasion would have no role in the Popperian view of science: one states theories, and they are as convincing as they are unrefuted. There is no real room for debate or rhetoric in this view of science. However, if we take the case-based view, one can use many cases, and the way they are aggregated to generate predictions is not part of the model. It is left for the reasoner’s judgment and intuition. Thus, it can be important for one person to convince another that the problem at hand is more similar to a given past problem than to another.
When we’re dealing with rhetoric, we find that axioms can be very important. Consider, for example, the First Welfare Theorem, stating that under certain – actually many – conditions, free markets are a great way to allocate resources, as they do not leave any room for Pareto improvements, namely increasing the utility of some without lowering that of any other. Yet, who are the economic agents to which such a story can apply? Who actually maximizes a utility function? Do you, or does anyone you know, take derivatives of utility functions in order to maximize them? The relevance of the First Welfare Theorem thus looks rather dubious: it discusses non-existent creatures, and thus says little about real life, or about real economic debates that our society struggles with.
However, suppose that, instead of assuming that most people do, in most cases, maximize utility functions, we only say that most people, most of the time, make decisions, and that they do so in a transitive way, that is, that if they prefer a to b and b to c, they will also prefer a to c. These assumptions are much more palatable, much easier to accept. But then comes a mathematical theorem that says that the two sets of assumptions are actually equivalent: people who make decisions, and do so in a transitive way, behave as if they maximized a utility function. Suddenly, the First Welfare Theorem appears more relevant than before. It surely has many limitations, but it does make more sense than before: it says something more relevant about the world. And all that changed is that the same assumption was described, or framed, differently. Thus, a mathematical theorem, showing the equivalence of two representations, can be a powerful rhetorical device: assumptions that are not too compelling in one guise may prove much more powerful in another.
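On a finite set of alternatives, the equivalence is easy to see concretely: from a complete and transitive preference one can build a utility function simply by counting, for each option, the alternatives it is weakly preferred to. The example below is hypothetical and only illustrates this construction:

```python
# A minimal sketch: representing a complete, transitive preference on a
# finite set by a utility function.
alternatives = ["apple", "banana", "cherry"]

def prefers(x, y):
    """A hypothetical complete, transitive preference: True when x is
    at least as good as y (apple beats banana beats cherry)."""
    rank = {"apple": 3, "banana": 2, "cherry": 1}
    return rank[x] >= rank[y]

def utility(x):
    # u(x) = number of alternatives x is weakly preferred to.
    return sum(1 for y in alternatives if prefers(x, y))

# Check the representation: choosing by preference and maximizing
# utility are one and the same decision rule.
for x in alternatives:
    for y in alternatives:
        assert prefers(x, y) == (utility(x) >= utility(y))
print({x: utility(x) for x in alternatives})
```

The construction fails exactly when transitivity (or completeness) fails, which is why the palatable axioms, not the numerical utility, carry the real content of the assumption.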
The role of mathematics and logic in rhetoric has long been recognized. In fact, it has been argued that logic was developed by the ancient Greeks as part of their culture of debate. Clearly, logical arguments and mathematical proofs are “good rhetoric” in that they are not about winning a particular argument, or making the opponent appear silly; rather, they are about truly changing the listeners’ views, in such a way that the latter can walk away with new insights, and they might be sufficiently convinced to use the same argument in their future discussions with others.
Within economics, the rhetorical role of axioms (such as transitivity in the example above), is widely accepted in a normative interpretation. In this interpretation, a model is not supposed to describe reality, but to convince people that this is what they would like reality to look like. Thus, a normative interpretation of a theory has to do with convincing other people, and, as such, the rhetorical role of axioms is evident in this interpretation. However, we hold that axioms might be important rhetorical devices also for descriptive theories, that is, theories that are interpreted as a depiction of reality. How can that be the case? Isn’t the accuracy of a theory, as a description of reality, independent of rhetorical devices?
Indeed, according to the classical, Popperian view of science, there is no room for rhetoric: if you really believe in your model, you just test it. Any two equivalent descriptions of a theory should, by definition, yield the same accuracy of fitting data. However, if we take our view of case-based models, it turns out that rhetoric might play an important role: the models are not supposed to describe reality, but only to highlight certain aspects of it, explaining to others in our society why we believe that a particular problem is similar to one known problem or another. According to this view, models, even when interpreted descriptively, are only tools for convincing others, and thus different representations thereof may be useful, as they may be convincing to varying degrees. In particular, a set of axioms that is shown to be equivalent to a given model may render this model more convincing, and apparently more relevant, than it would be without the axioms.
And this is what I view as the main contribution of the theoretical work that Prof. Massimo Marinacci and others are doing: studying the axiomatic foundations of different models helps us judge which is more relevant for the analysis of concrete problems. If economics were a more successful descriptive science, such as physics, one could simply test the accuracy of its general rules, and different representations of the same rules, or theories, would have but limited impact. However, precisely because economics is not so successful, and because it is often a case-based “pre-science” rather than a rule-based science, understanding the foundations of our models is of paramount importance. Where the main test of models is our own intuition and similarity judgment, rather than hard evidence, foundations are key.
This article is based on a speech at an academic event organized in June 2012 by AXA-Bocconi Chair in Risk at Bocconi University, on risks in Economic and Social Sciences.
- The Market for Lemons: Quality Uncertainty and the Market Mechanism (George Akerlof, Quarterly Journal of Economics, 1970)
- Economic Models as Analogies (Itzhak Gilboa, Andrew Postlewaite, Larry Samuelson, David Schmeidler, PIER Working Paper Archive 12-001, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, 2011)