Dani Rodrik has been arguing that the mistake many economists, and non-economists, make is to look for “one right model” – when in truth economics is a form of craft, where you have a multitude of models and need to know what is appropriate in different circumstances (ht Economist’s View).
To anyone who has studied microeconomics, or applied microeconomics, at any level, this will not be surprising. Furthermore, it would be recognised as the common view among economists. This may seem incredibly weird to non-economists – especially since many economists and non-economists share the view that there is an ‘objective reality’, and so this single reality seems like it should be described by one ‘super model’. But let me explain.
Let us start from two points: we cannot intuitively know or reason out everything about a social system, and data provides an imperfect lens on any ‘objective reality’ that may exist. In fact, the constraint on data is even harsher once we recognise that data can often only be used after we impose a priori theoretical structure in the first place!
Now this is fine as far as it goes: when it comes to answering a question, the scientific method gives us a bit of a hand. The hypothetico-deductive method is the route we take in this case.
However, things are never quite that neat. The questions economists ask are often ‘ceteris paribus’ questions (if this one variable changes, what is the marginal impact on another variable, holding all others unchanged?). Because we do not have the observations to measure and test a ‘complete’ model (one we can specify in theory), we run into the Duhem–Quine problem – we can always blame a failed test on an auxiliary assumption instead of the hypothesis we are actually testing! Add to this that the conjectures we make are often probabilistic, and outright rejection becomes incredibly difficult!
This forces economists to go back into theory and think about “using reason”. I distinctly remember reading Mill discuss this – ‘deducing outcomes through hypothetical situations in your mind’ before moving to induction for the real world – but for the life of me I have not been able to find the quote.
In this context, a theoretical model is incredibly useful: it helps us tie down assumptions about a counterfactual world, simulate what would happen in that world, measure the assumptions and outcomes against data, and then update our beliefs about the relationship between X and Y. This is, at least as I interpret it, the credible worlds view of economic modelling (and it can be seen in some sense as “Bayesian”). We discussed this here with links to papers.
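To make the “Bayesian” reading concrete, here is a toy sketch – the model, the data, and every number in it are invented purely for illustration. Each candidate ‘world’ is an assumption about the slope of Y on X; we score each world against observed data and update our prior weights accordingly.

```python
# Hypothetical sketch: treating model choice as Bayesian updating over "credible worlds".
# Two worlds make different assumptions about the effect of X on Y.
import math

def likelihood(data, slope, sigma=1.0):
    """Gaussian likelihood of observed (x, y) pairs under y = slope * x + noise."""
    ll = 0.0
    for x, y in data:
        resid = y - slope * x
        ll += -0.5 * (resid / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    return math.exp(ll)

# Observed data (made up for illustration): roughly y = 2x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

# Prior beliefs over two worlds: a strong effect of X versus no effect at all.
priors = {"strong effect (slope=2)": 0.5, "no effect (slope=0)": 0.5}
slopes = {"strong effect (slope=2)": 2.0, "no effect (slope=0)": 0.0}

# Update: posterior weight on each world is proportional to prior x likelihood.
evidence = sum(priors[m] * likelihood(data, slopes[m]) for m in priors)
posteriors = {m: priors[m] * likelihood(data, slopes[m]) / evidence for m in priors}
for m, p in posteriors.items():
    print(f"{m}: posterior weight {p:.3f}")
```

The data here speak loudly, so nearly all weight shifts to the strong-effect world – in practice the likelihoods are murkier, which is exactly why the updating framing matters.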
The importance of assumptions
So we have simulations/models, which are sets of assumptions – some of which we believe, and some of which we do not believe but use for simplification purposes. Cool.
We can then ask “how do these assumptions influence the question we are asking?”. In some cases the simplifying assumptions are irrelevant for our SPECIFIC QUESTION in that simulation – that is cool; we can just roll forward and ask how our simulation compares to data (which is the closest thing we have to a lens on the objective reality we are chasing).
If the simplifying assumptions do have an impact, we can try to simulate without them. If it is not possible to link this to data (because the simplifying assumptions are not measurable), we do it so that we at least know where the bias in our description lies – and can account for that bias in our answer to the question!
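A minimal sketch of that idea – both models and all parameters below are hypothetical. We simulate the ‘full’ world and the simplified one on the same inputs, and the gap between them tells us where, and by how much, our simplified description is biased.

```python
# Hypothetical sketch: quantifying the bias a simplifying assumption introduces.
# Full "world": y = b*x + c*z, where z is hard to measure.
# Simplified model: drop z entirely (assume c = 0).
def full_model(x, z, b=2.0, c=0.5):
    return b * x + c * z

def simplified_model(x, b=2.0):
    return b * x  # simplifying assumption: the z channel does not exist

# Simulate both worlds on the same inputs and record the gap.
points = [(x, 1.0) for x in range(1, 6)]  # z held at 1.0 in the simulation
bias = [full_model(x, z) - simplified_model(x) for x, z in points]
print("bias per observation:", bias)  # we now know where our description is off
```

Even when z cannot be measured in real data, running both simulations tells us the direction (and here the size) of the bias to carry into our answer.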
Whether an assumption is defensible, reasonable, and/or accepted will depend strongly on the question being asked. As a result, we need different sets of assumptions to answer different questions – we need to create different models, simulating different elements, in order to answer different questions! On top of this, we can ask how vulnerable our answer is to slight changes in an assumption – this gives us a way to infer whether the result is robust in the real world!
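As a hedged sketch of that robustness check – the toy policy model and every number in it are invented – we can perturb the assumed parameter by small amounts and see whether the qualitative answer survives.

```python
# Hypothetical sketch: probing how robust an answer is to a modelling assumption.
# Toy model (invented for illustration): whether a policy yields a net gain
# depends on an assumed elasticity parameter.
def policy_gain(elasticity, cap_effect=0.3):
    """Stylised net gain from the policy, given an assumed elasticity."""
    return cap_effect - 0.5 * elasticity  # positive gain => policy looks good

baseline = 0.4  # the elasticity we assumed in the model
perturbations = [baseline * (1 + d) for d in (-0.2, -0.1, 0.0, 0.1, 0.2)]

answers = [policy_gain(e) > 0 for e in perturbations]
if all(answers):
    print("Robust: the qualitative answer survives small changes in the assumption.")
else:
    print("Fragile: the answer flips within a 20% perturbation of the assumption.")
```

If the sign of the answer flips inside a plausible range for the assumption, the model is telling us the result is fragile – which is itself part of the answer to the question.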
Furthermore, even for a single question, using many models can be useful if they illuminate different elements – this extends the Gibbard and Varian view of models as caricatures to the fact that a given question has a multiplicity of smaller questions embedded in it. Essentially, we model the elements/questions where there may be a debate, and we don’t model the elements/questions that we and our audience already agree upon.
If we could ‘intuit’ everything (a priori knowledge), then we would only need one model. If data were a perfect representation of objective reality (in both scope and quality), then we would only need one model.
We don’t have these things. As a result, we need to use a mix of simulation/modelling (given a set of assumptions) and estimation in order to answer specific questions. A recognition that this is the process we have to use is already the default among academic and policy economists – but perhaps this is one of those ‘limits of knowledge’ issues that isn’t very clearly communicated at large.
Note 1: This is largely incompatible with Friedman’s instrumentalism – I am stating that our view of a model’s usefulness for answering a question relies in large part on its assumptions, whereas in his view it is the predictive power of the model that matters.
Now, if our question is solely predictive, not used for policy, and many ‘causal’ factors are immeasurable, then instrumental models are legitimate in a different sense from the one I have described – and the ‘value’ of a model is separate from the realism of its assumptions. But this is due to the weak power of our tacit assumptions, and the opportunity cost of time spent trying to do something that would add more value – in the vast majority of circumstances this is not the case.
Note 2: I have been cheeky in this post, and have acted as if models = simulations. As discussed here, a more common view is that it is models vs simulations [quick note here: the author’s inference that economists are trying to find “universal laws” is a common misconception of what economists do – one this entire post rules out as a starting point].
My personal view, stemming from the idea of credible worlds, is that all simulations are models, but not all models are simulations (as models can have other purposes – such as to communicate ideas). Not only is this unpopular, it is probably a touch imprecise (especially if we try to break down the purpose of models/simulations a bit further).
However, I think my distinction bears a closer resemblance to what economists are doing, and why, at a broad level – it is the way we create knowledge. Economies are not just complex adaptive systems in the sense in which they are currently modelled – they also display significant forward-looking behaviour, which is currently difficult to incorporate in agent-based models.
This also leaves the elephant in the room: social welfare. All model types, especially the more complex ones, face a gap between the simulatable ‘is’ and the relevant unobservable ‘ought’ of policy making. There is a highly nonlinear, unobserved functional relationship between the two that we must assume – and this is a large part of economists’ caution regarding the use of ABMs!
The dream is that “recursive representative agent” models will become more heterogeneous, while agent-based models will allow for more complicated behavioural rules that include forward-looking expectations – the two moving towards each other. As long as they remain so different, using both to help understand what is going on is useful.
Let me give an example. An agent-based approach may give us a great way to describe the history-dependent process of growth along the single path we have experienced – but if we change policy, or aim to forecast the future, these approaches act as a “black box”: we cannot infer actual causal effects. In this sense we would like to use “other models” that specialise in this – models that represent behaviour, and so on. The idea that one “modelling form” can dominate how we answer questions about the allocation of scarce resources (economics) doesn’t make sense to me – and I think the model vs simulation fight is actually a methodological misspecification!