
Models and measurement in economics 2



By Merijn Knibbe

Comparing macro-models and macro-measurements: an overview of the differences between the National Accounts and the neoclassical Dynamic Stochastic General Equilibrium models.

[This post benefitted from comments and tweets by Diane Coyle and Josh Mason]

Introduction: the difference between (neoclassical) macro-models and macro statistics [1]

Neoclassical macroeconomics has, contrary to earlier macroeconomic paradigms, not succeeded in engendering a research program aimed at gathering and presenting macroeconomic data consistent with its macro DSGE (Dynamic Stochastic General Equilibrium) models. About these earlier paradigms: the Institutionalists had the National Bureau of Economic Research (NBER), with its business cycle and income studies, and the Bureau of Labor Statistics (BLS), which calculated purchasing power (Rutherford 2011). Keynes redefined the national accounts to make them compatible with his theory (Mitra-Kahn 2011). And in an earlier epoch the works of Adam Smith and Marshall also led to changes in the kind of macro-statistics gathered by the government (Mitra-Kahn 2011). Neoclassical macro has no such thing. The famous Fitoussi-Sen-Stiglitz report, which aimed at a reformation of macro-statistics away from ‘Keynesian’, aggregate-expenditure-oriented GDP and towards a more inclusive kind of statistical overview, does mention the core variable of neoclassical macroeconomics, social utility (Fitoussi, Sen, Stiglitz 2009). But this is mere lip service: it advises complementing, not replacing, the National Accounts and GDP with a whole dashboard of indicators about inequality, poverty, social inclusion and the like, which are for instance readily available from the Eurostat site. And it explicitly does not advise designing a single comprehensive estimate of intertemporal social utility. There are no neoclassical macro-statistics, none about utility and none about the all-important ‘rational expectations’.

This is consistent with the neoclassical macro research program. From the very beginning this program was conceived as a program which aimed to prove that a Walrasian model (which explains the economy as a set of individual markets which necessarily tends to an intertemporal equilibrium with factors of production profitably employed) could explain the behaviour of existing macroeconomic statistics. As such it can be understood as an endeavour to counter the Keynesian and Institutionalist criticisms of neoclassical economics, not as a research program which aimed to estimate new data. Greg Mankiw, a leading proponent of neoclassical macroeconomics, is clear about this. In his famous textbook (7th edition) the chapter which introduces neoclassical macroeconomics starts with a William Bragg quote: “The important thing in science is not so much to obtain new facts as to discover new ways of thinking about them” (Mankiw, 2012). These new ways are, when reading the chapter, clearly the ways described by the Walrasian market model (although another leading neoclassical economist, remarkably, started to backtrack on this eight years after the crisis: Blanchard 2016). When we consult the writings of one of the major Institutionalists and head of the NBER, Wesley Mitchell, we do see examples of ‘new facts’, for instance on the properties of business cycles. The same holds for the findings of Keynes, based on the National Accounts as he redefined them (the finding that government expenditure can lead to a lasting increase in employment and production). [2] Such findings, to this day, fruitfully influence our thinking about the macro-economy. But they are not complemented by new ‘neoclassical’ facts. [3]

Consistent with this lack of interest in defining and finding new data, the recent history of modern macroeconomics by De Vroey hardly mentions macroeconomic data gathering (the National Accounts!) but explains the development of macroeconomics as if it were an almost purely theoretical endeavour (De Vroey, 2016). [4] Looking at it from the opposite angle: many sciences make use of often elaborate compendia which map chemical substances, psychological disorders, different kinds of rocks or, in the case of the National Accounts, concepts and definitions of economic variables in an exhaustive way (my favourite: the periodic table of chemistry/physics). There is no such thing for the main element of the neoclassical models, intertemporal social utility, nor even for its operationalization, intertemporal discounted consumption, nor for the proper measurement of rational expectations, ‘natural’ unemployment or the ‘natural’ rate of interest, or, for that matter, for all kinds of technicalities such as Calvo pricing or stochastic disturbances.
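
To make concrete what ‘intertemporal discounted consumption’ refers to: in a textbook DSGE setting the representative household is usually assumed to maximize expected discounted utility of consumption. A minimal sketch in standard notation (not taken from any specific model cited here):

\[
\max_{\{c_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t), \qquad 0 < \beta < 1,
\]

subject to a budget or resource constraint, where $c_t$ is consumption, $\beta$ the discount factor and $u(\cdot)$ an instantaneous utility function. Note that neither $\beta$ nor $u(\cdot)$ is independently measured; it is exactly this object for which no statistical compendium exists.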

Problems connected to this research agenda increasingly led neoclassical macro-economists to ignore or change the very data which they set out to explain. Involuntary unemployment was ignored right away (Knibbe 2016), and the neoclassical concept of capital, which does not recognize asset price inflation or unproduced capital such as subsoil natural gas, is at loggerheads with empirical estimates of capital. Likewise, government investment and government production of goods and services were either misunderstood or misinterpreted, treated as ‘wasteful’ by definition (Stähler and Thomas 2011; Iwata 2012). Also, variables are sometimes used in a loose, non-rigorous way: in the models consumption is generally defined as the purchasing of goods and services, but when it is convenient it is suddenly defined as the use of acquired goods and services (Jones 2009). At first sight this sounds quite understandable – there is a difference between a car and an ice cream. But it is not consistent with the statistical definition, and therefore with the data, of the National Accounts which the neoclassical models try to explain. Tellingly, the modellers tend to use detrended variables to ‘calibrate’ the models. This means that they use smoothed variables in a loose way (these calibrations are not based on rigorous methods) to make up a set of parameters that gets the models to work (Tovar 2008).

Despite all these problems with data which were not consistent with the models, neoclassical modellers did not come up with independently measured variables which were consistent with their ideas about intertemporal social utility and expectations. The macroeconomic statisticians who estimate the macro-statistics, i.e. the National Accounts, on the other hand do have – as the variables have to be measured – precise definitions of the variables used, spelled out in elaborate manuals and compendia. When you want to explain the behaviour of these variables with a model, you should use the same definitions. There is a good deal of stock-flow consistent as well as input-output modelling around which is consistent with the National Accounts. But the same cannot be said for neoclassical macro. All measurement needs theory. But this is a theory without measurement.
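
An aside on the detrending mentioned above: a common choice in this literature is the Hodrick-Prescott filter. The sketch below is purely illustrative (the data are made up and the variable names are mine); it only shows the mechanics of splitting a series into a smooth trend and a ‘cyclical’ component, which is then what calibrated model moments are matched against.

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
# Hypothetical quarterly log real GDP: a deterministic trend plus a random walk
log_gdp = 0.005 * np.arange(120) + 0.01 * rng.standard_normal(120).cumsum()

# lamb=1600 is the conventional smoothing parameter for quarterly data
cycle, trend = hpfilter(log_gdp, lamb=1600)

# The cyclical component (deviation from the smoothed trend) is what model
# moments are typically compared with when 'calibrating' parameters
print(round(cycle.std(), 4))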

This is remarkable! Alfred Marshall in the nineteenth century redefined and extended the concept of economics, in line with the marginalist thinking of his time, to include all private production and employment instead of only the agriculturally and industrially oriented activities emphasized by Adam Smith. His students went on to implement this new definition of the economy in the UK administration, including the statistical office (Mitra-Kahn, 2011). In the USA, where institutionalism was a dominant economic paradigm before WW II, two of the best students (and friends) of Thorstein Veblen, Wesley Mitchell and Isador Lubin, became the heads of organizations like the National Bureau of Economic Research (NBER) and the Bureau of Labor Statistics (BLS). They used their positions to discover new facts and to define and measure the economy along institutional lines, with a lot of attention to labour, income, the quantification of business cycles and purchasing power (Rutherford, 2011). Simon Kuznets was a student of Mitchell. And Morris Copeland, another self-declared institutionalist, engineered the Flow of Funds (see for instance Copeland 1962, an article which might well have been titled ‘How the USA paid for the war’). John Maynard Keynes himself supervised the introduction of new statistical concepts into the nascent national accounts in the UK, as well as sidelining Simon Kuznets and his welfare-oriented approach in the USA, to enable measurements consistent with his emphasis on total expenditure, total income and total production, including government production and expenditure (Keynes 1940). In the UK he even managed to establish a new government statistical office (the present Office for National Statistics or ONS) to do this (Mitra-Kahn 2011). That was, besides being a Herculean bureaucratic feat, science as science should be.

Theory and measurement moved in tandem, and the fingerprints of the institutional economists as well as John Maynard Keynes are all over our economic statistics, from the dating of business cycles to the concept of consumer price inflation. Keynes emphasized aggregate monetary expenditure, including government expenditure, and set out to define and measure this. It is important to spell out why he did this. His clear aims were to assure at the same time a maximum war effort, limited inflation (wartime inflation would, in his view, lead to arbitrary and in all probability unjust changes in the distribution of consumption which, as an aggregate, would necessarily decline) and a just distribution of the (lower) consumption, in the sense that especially people in the low income brackets had to be protected. Keynes was well aware that, to be able to do this (and common opinion seems to be that things worked out much better than during WW I), the British needed much better information on expenditure than hitherto available. It is important to note that Keynes did not understand aggregate expenditure and production as monolithic entities – to the contrary!

When it turned out that the monetary concepts used by Keynes were not compatible with the Walrasian model – which is fundamentally about non-monetary ‘utility’ – the neoclassical economists should either have ditched their model or have tried to directly estimate variables consistent with this model, such as the natural rate of interest or rational expectations or, indeed, social utility, which figure so prominently in their ideas. This is not what happened.

So, the once intimate connection between macro-statisticians and macro-modellers has been lost (see also Bos 2013, Bos 2003). On the one hand we are left with the modern National Accounts, which still aim to gauge the different aspects of ‘Keynesian’ aggregate monetary spending (including, in a very Keynesian way, spending financed by net credit and borrowing, including trade credits) and which are routinely gathered in almost all countries. On the other hand there are models which seem to use the same variables but which attach a different theoretical meaning to them – while the basic variable is non-monetary, i.e. ‘intertemporal social utility’. This rift between the National Accounts and macroeconomic DSGE models is of course not a good thing. An example: the broad national accounts nowadays also include estimates of household and company borrowing and lending, including trade credits, as well as monetary statistics and labour market statistics. Before and after 2008 we (or at least the macro-statisticians) knew who was borrowing and lending and where debt levels were heading, but this was not incorporated into the models, at least not the neoclassical ones. The ideas of an economist like Richard Koo (2011) about ‘balance sheet recessions’, which were not based on DSGE modelling, did take these data seriously. More regard for the empirical findings of economic statisticians might, before 2008, have led to a consideration of the drawbacks of these models; the recent remarks of a neoclassical modeller like Olivier Blanchard about them (including his wish to ditch ideas such as those put forward by Mankiw in his textbook) are in fact an eloquent testimony to the danger of relying on restricted models (Blanchard 2016). If I am right, this means that an investigation of the disconnect between neoclassical macro-models and National Accounts macro-statistics is warranted.

An overview

The overview below is based upon the literature in the list, but the elements of the overview are not separately annotated. A separate article will spell this out (a draft is available from the author on request).

National accounts vs. DSGE models

Basic model
• National accounts: The circular flow of various monetary streams of total income (wages, profits), total expenditure (investments, consumption) and total production, powered by myriads of transactions made by millions of households and businesses as well as by the government. The future physical and monetary return on investments is unknown.
• DSGE models: One or two representative households which optimize non-monetary social utility by making a choice between consumption and investments now and in the future, taking account of a kind of internal discount rate and a known growth rate of production.

Production boundary
• National accounts: All monetary production of new goods and services, including crime, black markets and non-market government production, as well as production by ‘NPISH’ (churches, unions, sports clubs etc.). There is however a major imputation for the assumed value of the rent of owner-occupied houses (non-monetary production valued at a monetary market price). Except for these houses, the use of consumer durables is not counted as consumption. Non-monetary external costs are not counted; items like ‘consumer surplus’ are not relevant.
• DSGE models: Market production minus production of banks minus public goods and services; non-monetary banks (i.e. banks which do not have a licence to create money) are increasingly incorporated into the models. Consumption is generally taken to be the purchasing of goods and services; if convenient, it is defined as the use of goods. Production of goods and services by the government is defined as a cost. Production of NPISH is neglected.

Definition of variables
• National accounts: The national accounts (and the related Flow of Funds data and monetary and labour statistics) have internationally recognized official compendia which conceptualize and define the variables as coherently and consistently as possible.
• DSGE models: There are, to my knowledge, not yet any formal compendia which describe and define the variables in the DSGE models (like the sector households: are hospitals part of this? Jails? NPISH organizations? It is not clear).

Relation to welfare or prosperity
• National accounts: The model is monetary and has NO direct relation to prosperity. The ‘volume’ of total production (real GDP) is often taken to be a metric of the level and growth of prosperity, partly for its own sake and partly because it is often closely related to (un)employment. The composition of consumption, investments and the like can also be gauged but is, surprisingly, much less used to measure prosperity, though Eurostat takes a shot at this with ‘Actual Individual Consumption’.
• DSGE models: The sum of present and (discounted) future ‘social utility’ is taken to be the metric of prosperity ‘par excellence’, and society is assumed to optimize this, given constraints. NO clear definition of utility is given and no independent estimates of utility and the discount rate are provided. For example, there are a few DSGE models which do incorporate government production of goods and services (roads, education) into ‘utility’, but no clear guidelines exist.

Nature of the model in relation to economic ‘schools’
• National accounts: Partly classical (the definition of capital, including non-producible capital), partly (old-)Keynesian (the emphasis on total monetary expenditure regardless of its ultimate goal; the possibility of involuntary unemployment; the role of credit), partly institutional (the detailed sub-sectors, the importance of income and income inequality, the inclusion of NPISH, the treatment of the government and the pervasive role of lending and credit).
• DSGE models: Neoclassical. Markets are supposed to lead to optimal outcomes in the medium run, and the government and the central banks are supposed to optimize the working of markets. The ‘representative consumer’, ‘social utility’ and the homogeneous character of capital are quintessential neoclassical concepts.

Market clearing required?
• National accounts: No.
• DSGE models: Medium-run market clearing and a return to ‘equilibrium’ take place by assumption.

Nature of the goods and services
• National accounts: Heterogeneous and changing; relative prices and quantities change over time, which leads to a changing sectoral structure of the economy (and of the stock of capital). This includes the paradox that a sector which experiences a fast increase of output but with even faster declining output prices will decline as a share of the total economy.
• DSGE models: Homogeneous. Intertemporal relative prices and quantities are set; in a sense, rational expectations about the probabilities of future events influence today’s structure of prices and production.

Basic coordination principle
• National accounts: Accounting relations caused by monetary transactions, including debts and credits. All monetary transactions lead to offsetting changes in the accounts of at least two agents (my new liability is your new financial asset); a minimal sketch of this logic is given below the table. Market, government, NPISH and household transactions power such changes; the National Accounts often record (changes in) net positions, not gross flows.
• DSGE models: Market transactions, including expected future transactions; ex-ante market clearing is assumed.

Structure of production
• National accounts: Detailed sectoral and sub-sectoral subdivisions, including of financial companies, government production and NPISH. The central bank is the only organization with a sub-sector of its own.
• DSGE models: No or very limited sub-sectoral subdivision; monetary banks are modelled as a kind of subsidiary of the central bank, as the sectoral division excludes monetary banks but includes the central bank (which therefore implicitly also comprises the monetary banks).

Basic actors
• National accounts: Households, firms, government, external sector, financial institutions.
• DSGE models: Households, central bank.

Basic method of estimation
• National accounts: Aggregation of micro data, a continuous source of criticism. Care is taken to make historically and internationally consistent estimates. Especially new products and changing relative prices make this complicated.
• DSGE models: There is NO aggregation of micro data on utility or expectations. Use of (often detrended) macro data to calibrate the main variables; ‘calibration’ means that researchers have some degrees of freedom to use parameters which differ from the detrended data.

Linkages to other models
• National accounts: Labour market accounts, flow of funds, input-output models, environmental accounts (like the relation of CO2 production to the structure of production and final demand), international value chains.
• DSGE models: Detrended national accounts variables are used as an inspiration to calibrate model parameters; the volumes of GDP, investment, consumption, exports and imports are used as resource constraints.

Nature of money
• National accounts: Credit originates money and money-like assets. Credit (including trade credits) is originated via transactions between often private agents; credit and lending enable ex-post accounting identities to be ‘true’, even without market clearing (a company in foreclosure which does not pay wages that are due can be seen as an extreme example of borrowing from employees).
• DSGE models: Loanable funds, government created.
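
To illustrate the ‘basic coordination principle’ row above: the National Accounts logic is that every monetary transaction changes the accounts of at least two agents by offsetting amounts, so positions always net out ex post, whether or not markets clear. A minimal sketch (the agents and amounts are hypothetical, not actual accounting rules or data):

from collections import defaultdict

# Net financial position per (hypothetical) institutional sector
position = defaultdict(float)

def record(payer: str, payee: str, amount: float) -> None:
    """Record a monetary transaction: the payer's net position falls and the
    payee's rises by the same amount, so the total over all agents is unchanged."""
    position[payer] -= amount
    position[payee] += amount

record("households", "firms", 100.0)   # e.g. consumption spending
record("firms", "households", 60.0)    # e.g. wages paid
record("government", "firms", 25.0)    # e.g. a government purchase

# Ex post, the accounting identity holds by construction: net lending of some
# agents is exactly matched by net borrowing of others.
assert abs(sum(position.values())) < 1e-9
print(dict(position))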

Literature

Bokan, Noury, Anton Gerali, Sandra Gomes, Pascal Jacquinot and Massimiliano Pisani (2015). ‘EAGLE-FLI. A model for the macroeconomic analysis of banking sector and financial frictions in the euro area’. Paper presented at the Dynare conference 2015, available here. Accessed 4 May 2016.

Blanchard, Olivier (2016). ‘How to teach intermediate macroeconomics after the crisis’, Real time economics issues watch, available here.

Bos, Frits (2003). The national accounts as a tool for analysis and policy: past, present and future. Eagle Statistics, Berkel en Rodenrijs.

Bos, Frits (2013). ‘Meaning and measurement of national account statistics’. Paper provided at the Political Economy of Economic Metrics conference, available here.

Buiter, Willem (2009). ‘The unfortunate uselessness of most ‘state of the art’ monetary economics.’ Financial Times, 3 March 2009. Available here. Accessed 12 May 2016.

Christoffel, Kai, Günter Coenen and Anders Warne (2008). ‘The new area-wide model of the euro area. A micro-founded open-economy model for forecasting and policy analysis’. ECB working paper series 944, October 2008. Available here.

Clark, John Bates (1908 [1899]). The distribution of wealth. New York: Macmillan Company. Available here.

Copeland, Morris (1962). ‘Some Illustrative Analytical Uses of Flow-of-Funds Data’ in: Conference on Research in Income and Wealth, The Flow-of-Funds Approach to Social Accounting: Appraisals, Analysis, and Applications, pp. 195-238. Princeton University Press: Princeton. Available here.

Coyle, Diane (2014). GDP: a brief but affectionate history. Princeton: Princeton University Press.

European Central Bank (2012A). Manual on MFI balance sheet statistics. Frankfurt: ECB. Available here.

European Central Bank (2012B). Central bank statistics as a servant of two separate mandates – price stability and mitigation of systemic risk. Proceedings of the sixth ECB conference on statistics, 17 and 18 April 2012. Available here.

Eurostat (2016). ‘Greenhouse gas emissions by industries and households’, Statistics explained. Available here.

Fitoussi, Jean-Paul, Amartya Sen and Joseph Stiglitz (2009). Report of the Commission on the Measurement of Economic Performance and Social Progress. Paris. Available here.

Frederick, Shane, George Loewenstein and Ted O’Donoghue (2002). ‘Time discounting and time preference: a critical review’. Journal of Economic Literature XL, pp. 351-401.

Gomes, Sandra, Pascal Jacquinot and Massimiliano Pisani (2010). ‘The EAGLE. A model for analysis of macroeconomic interdependence in the euro area’. ECB working paper series 1195, May 2010. Available here.

Iwata, Yasuharu (April 2012). ‘Non-wasteful government spending in an estimated open economy DSGE model: two fiscal policy puzzles revisited’. Cabinet Office, Economic and Social Research Institute discussion paper series no. 285. Available here.

Jones, Charles (2009). ‘Chapter 20. Preliminary. Consumption’. Available here.

Keynes, John Maynard (1940). How to pay for the war. Macmillan and Co: Melbourne. Available here.

Knibbe, Merijn (2016). ‘Models and measurement in economics’, World Economics Association Newsletter 6-2 pp. 3-7. Available here.

Komlos, John (2016), ‘Growth of income and welfare in the U.S., 1979-2011’, NBER working paper series, Working Paper 22211. Available here, accessed 5 May 2016.

Koo, Richard (2011). ‘The world in balance sheet recession: causes, cure, and politics’, Real-world economics review no. 58, 12 December 2011, pp. 19-37. Available here.

Loewe, Germán (2006). ‘The development of a theory of rational intertemporal choice’. Special paper 80. Available here.

Mankiw, Gregory (2012). Principles of economics (7th edition). Stamford: Cengage Learning.

Mehrling, Perry (2012). ‘A money view of credit and debt’. INET lecture, available here.

Mitra-Kahn, Benjamin Hav (2011), Redefining the economy. How the ‘economy’ was invented in 1620 and has been redefined ever since. PhD thesis, City University, London. Available here.

Office for National Statistics (2016), ‘National Accounts articles: Alternative measures of real households’ disposable income and the saving ratio’. Available here.

Půlpánová, Lenka (2013). ‘Understanding government consumption’. Statistika 93-2 pp. 15-29. Available here.

Rutherford, Malcolm (2011). The Institutionalist Movement in American Economics, 1918-1947: Science and Social Control. New York: Cambridge University Press.

Samuelson, Paul (1937). ‘A Note on Measurement of Utility’. Review of Economic Studies 4 no. 2, pp. 155-161.

Stähler, Nikolai and Carlos Thomas (2011). ‘FiMod – A DSGE model for fiscal policy simulations’. Banco de España Documentos de Trabajo 1110. Available here.

Tovar, Camilo (2008). ‘DSGE models and central banks’, BIS working paper no. 258. Available here, accessed 29 April 2016.

 

  1. This post is the second in a series which sets out to compare the concepts of economic variables in the macro economic statistics (i.e. the National Accounts) and the neoclassical macro models. The first post showed that a disconnect exists. This post sets out to compare the basic properties of the measurement model and the neoclassical model. Subsequent articles will look at variables in detail. The work of Mitra-Kahn made me write a somewhat lengthy introduction to this post which would have fitted better in the first post, but at that time I was not aware of these ideas.
  2. His ‘How to pay for the war’ is the critical publication; mind that this was published in February 1940.
  3. Many core neoclassical variables, like the natural rate of interest or the natural rate of unemployment, are not directly observable.
  4. Thumbing through the site of the Nobel prize committee, it is striking that almost all non-economics prizes are awarded for (the enabling of) the discovery of new facts.

From: pp.5-10 of World Economics Association Newsletter 6(3), June 2016
https://www.worldeconomicsassociation.org/files/Issue6-3.pdf


1 response

  • David Harold Chester says:

    You seem to have missed out my model which contains most of the criteria that are in your paper. My model along with an explanation about how it can be used for teaching can be found in the open literature: SSRN 2600103 “A Mechanical Model for Teaching Macroeconomics” (and also in my book about the way this model is developed and what else can be analyzed using it).
