Wednesday, 30 July 2014

Methodological seduction

Mainly for macroeconomists or those interested in economic methodology. I first summarise my discussion in two earlier posts (here and here), and then address why this matters.

If there is such a thing as the standard account of scientific revolutions, it goes like this:

1) Theory A explains body of evidence X

2) Important additional evidence Y comes to light (or just happens)

3) Theory A cannot explain Y, or can only explain it by means which seem contrived or ‘degenerate’. (All swans are white, and the black swans you saw in New Zealand are just white swans after a mud bath.)

4) Theory B can explain X and Y

5) After a struggle, theory B replaces A.

For a more detailed schema due to Lakatos, which talks about a theory’s ‘core’ and ‘protective belt’ and tries to distinguish between theoretical evolution and revolution, see this paper by Zinn which also considers the New Classical counterrevolution.

The Keynesian revolution fits this standard account: ‘A’ is classical theory, Y is the Great Depression, ‘B’ is Keynesian theory. Does the New Classical counterrevolution (NCCR) also fit, with Y being stagflation?

My argument is that it does not. Arnold Kling makes the point clearly. In his stage one, Keynesian/Monetarist theory adapts to stagflation, using the Friedman/Phelps accelerationist Phillips curve. Stage two involves rational expectations, the Lucas supply curve and other New Classical ideas. As Kling says, “there was no empirical event that drove the stage two conversion.” Judging from this, I think Paul Krugman also agrees, although perhaps with an odd quibble.

Now of course the counter revolutionaries do talk about the stagflation failure, and there is no dispute that stagflation left the Keynesian/Monetarist framework vulnerable. The key question, however, is whether points (3) and (4) are correct. On (3) Zinn argues that changes to Keynesian theory to account for stagflation were progressive rather than contrived, and I agree. I also agree with John Cochrane that this adaptation was still empirically inadequate, and that further progress needed rational expectations (see this separate thread), but as I note below the old methodology could (and did) incorporate this particular New Classical innovation.

More critically, (4) did not happen: New Classical models were not able to explain the behaviour of output and inflation in the 1970s and 1980s, or in my view the Great Depression either. Yet the NCCR was successful. So why did (5) happen, without (3) and (4)?

The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example. Ironically the innovation that had allowed conventional macro to explain stagflation, the accelerationist Phillips curve, also made it appear unable to adapt to rational expectations. But if that was all, then you need to ask why New Classical ideas could not have been gradually assimilated into the mainstream. Many of the counter revolutionaries did not want this (as this note from Judy Klein via Mark Thoma makes clear), because they had an (ideological?) agenda which required the destruction of Keynesian ideas. However, once the basics of New Keynesian theory had been established, it was quite possible to incorporate concepts like rational expectations or Ricardian Equivalence into a traditional structural econometric model (SEM), which is what I spent a lot of time in the 1990s doing.

The real problem with any attempt at synthesis is that a SEM is always going to be vulnerable to the key criticism in Lucas and Sargent, 1979: without a completely consistent microfounded theoretical base, there was the near certainty of inconsistency brought about by inappropriate identification restrictions. How serious this problem was, relative to the alternative of being theoretically consistent but empirically wide of the mark, was seldom asked.   

So why does this matter? For those who are critical of the total dominance of current macro microfoundations methodology, it is important to understand its appeal. I do not think this comes from macroeconomics being dominated by a ‘self-perpetuating clique that cared very little about evidence and regarded the assumption of perfect rationality as sacrosanct’, although I do think that the ideological preoccupations of many New Classical economists have an impact on what is regarded as de rigueur in model building even today. Nor do I think most macroeconomists are ‘seduced by the vision of a perfect, frictionless market system.’ As with economics more generally, the game is to explore imperfections rather than ignore them. The more critical question is whether the starting point of a ‘frictionless’ world constrains realistic model building in practice.

If mainstream academic macroeconomists were seduced by anything, it was a methodology - a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the NCCR. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous! 

Noah Smith, who does believe stagflation was important in the NCCR, says at the end of his post: “this raises the question of how the 2008 crisis and Great Recession are going to affect the field”. However, if you think as I do that stagflation was not critical to the success of the NCCR, the question you might ask instead is whether there is anything in the Great Recession that challenges the methodology established by that revolution. The answer that I, and most academics, would give is absolutely not – instead it has provided the motivation for a burgeoning literature on financial frictions. To speak in the language of Lakatos, the research programme is far from degenerate.

Is there a chance of the older methodology making a comeback? I suspect the place to look is not in academia but in central banks. John Cochrane says that after the New Classical revolution there was a split, with the old style way of doing things surviving among policymakers. I think this was initially true, but over the last decade or so DSGE models have become standard in many central banks. At the Bank of England, the main model used to be a SEM, was replaced by a hybrid DSGE/SEM, and was replaced in turn by a DSGE model. The Fed operates both a DSGE model and a more old-fashioned SEM. It is in central banks that the limitations of DSGE analysis may be felt most acutely, as I suggested here. But central bank economists are trained by academics. Perhaps those who are seduced are bound to remain smitten.


Tuesday, 29 July 2014

UK Fiscal Policy from 2015 with shocks

One indirect comment I have received on the numbers set out in this post is that they ignore the possibility of major negative shocks hitting the economy. That is not really fair, because a major reason for aiming for such historically low levels of debt to GDP in the long term was to allow for such shocks. However it seems reasonable to ask what sort of shocks these plans might accommodate, so here is an illustration.

A key idea in my paper with Jonathan Portes is that if interest rates are expected to hit the Zero Lower Bound (ZLB), the central bank and fiscal council should cooperate to produce a fiscal stimulus package designed to allow interest rates to rise above that bound. So the key questions become how often such ZLB episodes might occur, and what size of stimulus packages might be required.

The chart below assumes that the next ZLB episode will occur in 2040. Thereafter they occur every 40 years. This is all complete guesswork of course. Each ZLB episode requires a fiscal stimulus package which increases the budget deficit by 10% of GDP in the first year, 10% of GDP in the second, and 5% in the third. For comparison, the Obama stimulus package was worth a little over 5% of GDP. So this is much bigger, but that package was clearly too small, and I’ve also allowed something extra for the automatic stabilisers.

These shocks are superimposed on the ‘medium’ adjustment path that I gave in the previous post. This involves much less austerity than George Osborne’s plans. Whether it is less draconian than the other political parties’ plans is less clear. For example with Labour, there is a commitment to achieve current balance by 2020. To get the total deficit we need to add public investment. Current plans have public investment at around 1.5% of GDP, but if investment was raised to 2.5% of GDP, this would be consistent with the path shown here.

Medium debt reduction path with shocks

So the 2040 crisis starts with debt to GDP at just under 50%, and sends it back up to levels close to but below current levels. In the next crisis the debt to GDP ratio peaks at 50%. By the turn of the next century we settle down to an average of around 30% of GDP, with the ratio never rising above 45%.
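The arithmetic behind paths like this is easy to sketch. The toy below is an illustration only, not the simulation behind the chart: the interest rate, growth rate and baseline primary balance are stand-in assumptions, while the shock profile (10%, 10% and 5% of GDP over three years, every 40 years) is the one described above.

```python
# Illustrative debt-to-GDP dynamics with periodic ZLB stimulus packages.
# The parameters r, g and primary_deficit are stand-in assumptions;
# only the shock profile comes from the text above.

def debt_path(years, b0=0.5, primary_deficit=0.0, r=0.04, g=0.04,
              shock_years=(), shock_profile=(0.10, 0.10, 0.05)):
    """Evolve b_t = b_{t-1} * (1 + r) / (1 + g) + deficit_t, where b is
    the debt-to-GDP ratio and deficit_t is the deficit as a share of GDP."""
    shocks = {}
    for start in shock_years:
        for offset, extra in enumerate(shock_profile):
            shocks[start + offset] = shocks.get(start + offset, 0.0) + extra
    path = [b0]
    for t in range(1, years + 1):
        deficit = primary_deficit + shocks.get(t, 0.0)
        path.append(path[-1] * (1 + r) / (1 + g) + deficit)
    return path
```

With the interest rate equal to the growth rate, as assumed here, each ZLB package simply adds its cumulative 25% of GDP to the debt ratio; it is the primary surpluses run between crises that bring the ratio back down.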

With a chart that ends in 2200, many will feel that this is all rather unreal. So perhaps we can compartmentalise discussion into two questions: is this long run average of 30% about right, and are we prepared for the next crisis? Although the 30% figure seems quite prudent by historical standards, our paper does give some reasons why you might want a lower long run average. However this debate really is for the future - it should have no impact on what happens before 2020.

Are we prepared for the next crisis? For the size and timing of the crisis I have chosen, my answer would be a clear yes. In this recession UK debt to GDP has risen to higher levels, and there has been no market panic. Political leaders became obsessed with debt for two reasons: misunderstanding the Eurozone crisis (where OMT has clearly demonstrated the nature of the misunderstanding), and because austerity suited other agendas. I am a sufficient optimist to think that another 25 years is long enough to allow most people to figure that out.


Monday, 28 July 2014

If minimum wages, why not maximum wages?

I was in a gathering of academics the other day, and we were discussing minimum wages. The debate moved on to increasing inequality, and the difficulty of doing anything about it. I said why not have a maximum wage? To say that the idea was greeted with incredulity would be an understatement. ‘So you want to bring back price controls’ was one response. ‘How could you possibly decide on what a maximum wage should be’ was another.

So why the asymmetry? Why is the idea of setting a maximum wage considered outlandish among economists?

The problem is clear enough. All the evidence, in the US and UK, points to the income of the top 1% rising much faster than the average. Although the share of income going to the top 1% in the UK fell sharply in 2010, the more up to date evidence from the US suggests this may be a temporary blip caused by the recession. The latest report from the High Pay Centre in the UK says:



“Typical annual pay for a FTSE 100 CEO has risen from around £100-£200,000 in the early 1980s to just over £1 million at the turn of the 21st century to £4.3 million in 2012. This represented a leap from around 20 times the pay of the average UK worker in the 1980s to 60 times in 1998, to 160 times in 2012 (the most recent year for which full figures are available).”

I find the attempts of some economists and journalists to divert attention away from this problem very revealing. The most common tactic is to talk about some other measure of inequality, whereas what is really extraordinary, and what worries many people, is the rise in incomes at the very top. The suggestion that we should not worry about national inequality because global inequality has fallen is even more bizarre.

What lies behind this huge increase in inequality at the top? The problem with the argument that it just represents higher productivity of CEOs and the like is that this increase in inequality is much more noticeable in the UK and US than in other countries, yet there is no evidence that CEOs in UK and US based firms have been substantially outperforming their overseas rivals. I discussed in this post a paper by Piketty, Saez and Stantcheva which set out a bargaining model, where the CEO can put more or less effort into exploiting their monopoly power within a company. According to this model, CEOs in the UK and US have since 1980 been putting in more bargaining effort than their overseas counterparts. Why? According to Piketty et al, one answer may be that top tax rates fell in the 1980s in both countries, making the returns to effort much greater.

If you believe this particular story, then one solution is to put top tax rates back up again. Even if you do not buy this story, the suspicion must be that this increase in inequality represents some form of market failure. Even David Cameron agrees. The solution the UK government has tried is to give more power to the shareholders of the firm. The High Pay Centre notes that “Thus far, shareholders have not used their new powers to vote down executive pay proposals at a single FTSE 100 company”, although as the FT reports, shareholder ‘revolts’ are becoming more common. My colleague Brian Bell and John Van Reenen do note in a recent study “that firms with a large institutional investor base provide a symmetric pay-performance schedule while those with weak institutional ownership protect pay on the downside.” However they also note that “a specific group of workers that account for the majority of the gains at the top over the last decade [are] financial sector workers .. [and] .. the financial crisis and Great Recession have left bankers largely unaffected.”

Increasing shareholder power may therefore have only a small effect on the problem. So why not consider a maximum wage? One possibility is to cap top pay as some multiple of the lowest paid, as a recent Swiss referendum proposed. That referendum was quite draconian, suggesting a multiple of 12, yet it received a large measure of popular support (35% in favour, 65% against). The Swiss did vote to ban ‘golden hellos and goodbyes’. One neat idea is to link the maximum wage to the minimum wage, which would give CEOs an incentive to argue for higher minimum wages! Note that these proposals would have no disincentive effect on the self-employed entrepreneur.

If economists have examined these various possibilities, I have missed it. One possible reason why many economists seem to baulk at this idea is that it reminds them too much of the ‘bad old days’ of incomes policies and attempts by governments to fix ‘fair wages’. But this is an overreaction, as a maximum wage would just be the counterpart to the minimum wage. I would be interested in any other thoughts about why the idea of a maximum wage seems not to be part of economists’ Overton window.

Sunday, 27 July 2014

Understanding fiscal stimulus can be easy

There seems to be a bit of confusion about fiscal stimulus. I think most people understand what is going on in undergraduate textbook models, but some seem less sure of what might be different in more modern New Keynesian models. This seems to revolve around three issues:

1) In Traditional Keynesian (TK) models any fiscal giveaway seems to work, whereas in New Keynesian (NK) analysis the type of fiscal policy seems to matter much more.

2) Are the dynamics of how policy works different in TK and NK models?

3) In TK models fiscal and monetary policy seem interchangeable, but NK models imply fiscal policy is a second best tool. Why is that?

In this post I will just cover the first two issues.

The best way to answer these questions is to ask how NK models differ from TK models, and where this matters. To keep things simple, let’s just think about a closed economy. I’ll also assume real interest rates are fixed, which switches off monetary policy. This is not quite the same as fiscal policy in a liquidity trap, because expected inflation may change, but that is a complication I want to avoid for now.

First, a difference that does not matter much for (1) and (2). The most basic NK model assumes the labour market clears, while the TK model does not. I tried to explain why that was not critical here.

The difference that really matters is consumption. In TK models consumption just depends on current post tax income, while in the most basic NK model consumption depends on expectations of discounted future income, and expectations are rational. This makes NK models dynamic, whereas in the textbook TK model we do not need to worry about what happens next.

This immediately gives us the best known difference between NK and TK: Ricardian Equivalence. A tax cut today to be financed by tax increases in the future leaves discounted labour income unchanged, and so consumption remains unchanged. However this is only a statement about tax changes. Changes in government spending have much the same impact as they do in TK models.
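The Ricardian logic here is just present-value arithmetic, and is easy to check numerically. In the sketch below the 3% real interest rate and the 100-unit tax cut are arbitrary illustrative numbers, not figures from any model in the post:

```python
# A hedged numerical check of Ricardian Equivalence under a fixed real
# interest rate. The rate and the size of the tax cut are arbitrary.

def present_value(flows, r):
    """Discounted sum of per-period flows at a fixed real interest rate r."""
    return sum(f / (1 + r) ** t for t, f in enumerate(flows))

r = 0.03
# Tax cut of 100 today, repaid with interest in period 2, when the
# government retires the debt it issued to finance the cut.
taxes = [-100.0, 0.0, 100.0 * (1 + r) ** 2]
# The change in discounted post-tax income is minus the PV of the tax
# changes, which nets out to zero - so a forward-looking consumer
# leaves consumption unchanged.
income_change = -present_value(taxes, r)
```

However the repayment is timed, as long as the government's budget balances in present-value terms, the household's discounted income is untouched, which is all the argument above needs.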

In particular, if we have a demand gap of X that lasts for Y years, we can fill it by raising government spending by X for Y years, and pay for it by reducing government spending in later years. A practical example of what I call a pure government spending stimulus would be bringing forward public investment. As taxes do not change, then for given real interest rates consumption need not change.

Nick Rowe sets up a slightly different problem, where there is a wedge shaped gap to fill. In that case government spending can initially rise, but then gradually fall back, filling the wedge. Same logic. Nick says that a policy that would work equally well in theory is to initially leave government spending unchanged, but then let it gradually fall, so that it ends up permanently lower. This is not nearly as paradoxical as Nick suggests. By lowering government spending in the long run, taxes will be lower in the long run. Consumers respond by raising consumption now and forever, so it is consumption that fills the gap. It works in theory, but may not in practice because consumers cannot be certain government spending will be lower forever. It is also an odd experiment that combines demand stabilisation with permanently changing the size of the state. So much simpler to do the obvious thing, and raise government spending to fill the demand gap. As fiscal stimulus in a liquidity trap does not require fine tuning, implementation lags are unlikely to be critical.  

So if we restrict ourselves to fiscal changes that just involve changing the timing of government spending, fiscal demand management in NK models works in much the same way as in TK models, which is simple and intuitive. It really is just a matter of filling the gap.


Saturday, 26 July 2014

Why strong UK employment growth could be really bad news

Some of the better reporting and interviews with George Osborne yesterday did try and put the strongish 2014Q2 output growth in context. Yet the much stronger growth in UK employment continues to be greeted by many as unqualified good news - even by some who should know better. So, rather than trying to be satirical, let me attempt to be as clear as I can. Those who already understand the problem can skip the next three paragraphs.

By identity, strong employment growth relative to output growth means a reduction in labour productivity. In the short term when unemployment is above its ‘natural’ (non-inflationary) level, falling labour productivity is good news. It means that a given level of output is being produced by more people, so there are fewer people unemployed. This is good news because our evidence is that the costs of being unemployed are very high. Of course if more workers are producing the same amount of stuff, their real wages will fall, but that just means that the cost of a recession is being evenly spread rather than being concentrated among the unemployed.
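The identity in question is simply that labour productivity is output per worker, so productivity growth is (approximately) output growth minus employment growth. A minimal sketch, with made-up numbers:

```python
# Labour productivity is output per worker, so its growth rate follows
# from output and employment growth by identity. Numbers are made up.

def productivity_growth(output_growth, employment_growth):
    """Exact form of the identity: (1 + gY) / (1 + gE) - 1."""
    return (1 + output_growth) / (1 + employment_growth) - 1

# Output up 3% while employment is up 4%: productivity falls by
# roughly 1%, the situation described in the paragraph above.
g = productivity_growth(0.03, 0.04)
```

So whenever employment grows faster than output, productivity must fall; no further economics is involved at this step.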

Now let’s move on until unemployment has fallen to its natural rate. It is what happens next that is crucial. If labour productivity starts increasing rapidly, such that we make up all or nearly all of the ground lost over the last five years, that will be fantastic. Rapid productivity growth will bring rapid growth in real wages, meaning that much of the unprecedented fall in real wages we have seen in recent years is reversed. After a decade or so, UK living standards will end up somewhere around where they would have been if there had been no recession. The UK ‘productivity puzzle’ will have been a short term affair that economists can mull over at their leisure. Analysis will not look kindly on the policies that allowed output to be so low for so long, but - hysteresis effects aside - that will be history.

The alternative is that labour productivity does not make up lost ground. If this happens, the average UK citizen will be 15-20% poorer forever following the Great Recession. Living standards in the UK, which before the recession appeared to be growing at least as fast as those in other major established economies, will have fallen back substantially relative to citizens in the US and Europe. This is the alternative that most forecasters, including the OBR (see chart reproduced here), are assuming will happen. 

So the absence of labour productivity growth is good in the short term, but is potentially disastrous in the long term. The problem is that the absence of growth in labour productivity since the recession is unprecedented (see chart below): nothing like this has happened in living memory. The reason to be concerned is that the rapid growth in productivity required to catch up the ground already lost is also unprecedented for the UK, which is why most economists assume it will not happen. Which brings me to another puzzle.



As long as I can remember, UK governments have been obsessed by long term productivity growth, and its level relative to the US, France and Germany. They have put considerable effort into understanding what influences this growth, and what policies can help increase it. This was true when UK labour productivity was steadily increasing at a slightly slower rate than in other countries, or increasing at a slightly faster rate. Given this, you would imagine that the UK government would be frantic to know what was currently going on. Why has UK productivity stalled, why are we falling behind our competitors at such a fast rate?

GDP per hour worked: source OECD

Instead this government seems strangely indifferent. If they have an explanation for the absence of UK productivity growth, I have not seen it. You generally need to understand something before you know what to do about it. Instead the Prime Minister and Chancellor would seem to prefer not to talk about it, because it ‘feeds into’ the opposition’s complaints about low wages. This really is irresponsible. Is it simple arrogance - they know what is good for the economy, even if they do not understand it? Or is it indifference - we do not care too much about long term UK prosperity, as long as you keep voting for us? Or is it just too embarrassing to admit that the most calamitous period for UK living standards since WWII has happened on their watch?

Thursday, 24 July 2014

Synthesis!? David Beckworth's Insurance Policy

Could it be that New Keynesians and Market Monetarists can converge on a common policy proposal? I really like David Beckworth’s Insurance proposal against ‘incompetent’ monetary policy. Here it is.

1) Target the level of nominal GDP (NGDP)

2) “the Fed and Treasury sign an agreement that should a liquidity trap emerge anyhow [say due to central bank incompetence] and knock NGDP off its targeted path, they would then quickly work together to implement a helicopter drop. The Fed would provide the funding and the Treasury Department would provide the logistical support to deliver the funds to households. Once NGDP returned to its targeted path the helicopter drop would end and the Fed would implement policy using normal open market operations. If the public understood this plan, it would further stabilize NGDP expectations and make it unlikely a helicopter drop would ever be needed.”

In fact I like it so much that Jonathan Portes and I proposed something very like it in our recent paper. There we acknowledge that outside the Zero Lower Bound (ZLB), monetary policy does the stabilisation. But we also suggest that if the central bank thinks there is more than a 50% probability that they will hit the ZLB, they get together with the national fiscal council (in the US case, the CBO) to propose to the government a fiscal package that is designed to allow interest rates to rise above the ZLB.

There we did not specify what monetary policy should be, but speaking just for myself I have endorsed using the level of NGDP as an intermediate target for monetary policy, so there is no real disagreement there. A helicopter drop is a fiscal stimulus involving tax cuts plus Quantitative Easing (QE). Again we did not specify that the central bank had to undertake QE as part of its proposed package, but I think we both assumed that it would (outside the Eurozone, where for the moment we can just say it should). I think a central bank could suggest that an income tax cut might not be the most effective form of fiscal stimulus (compared to public investment, for example), but let’s not spoil the party by arguing over that.

Now this does not mean that Market Monetarists and New Keynesians suddenly agree about everything. A key difference is that for David this is an insurance against incompetence by the central bank, whereas Keynesians are as likely to view hitting the ZLB as unavoidable if the shock is big enough. However this difference is not critical, as New Keynesians are more than happy to try and improve how monetary policy works. The reason I wrote this post was not because of these differences in how we understand the world. It was because I thought New Keynesians and Market Monetarists could be much closer on policy than at least some let on. I now think this even more. 



Wednesday, 23 July 2014

Macroeconomic innumeracy

Anthony Seldon is perhaps best known for his biographies of recent UK Prime Ministers. He had a column in the FT recently, which suggested that the Prime Minister’s team had done rather better than popular perception might suggest. Two sentences caught my attention: “Credit for sticking to the so-called Plan A on deficit reduction must be tempered by the government’s reluctance to cut more vigorously” and “Downing Street insiders can claim to have managed to steer … the recovery of a very battered economy”.

The first sentence suggests that the government stuck to its original 2010 deficit reduction plan, but it should have cut spending by more than this plan. I disagree with the opinion in the second part of the sentence, but that is not the issue here. The problem is that the factual statement in the first part of the sentence is very hard to justify. The numbers suggest otherwise, as Steven Toft sets out here. The second sentence also indicates no acquaintance with the numbers. As the well known (I thought) NIESR chart shows, this has been the slowest UK recovery in a century - including those in the 1920s and 1930s. The financial crisis certainly battered the UK, but it hit the US pretty hard too! Yet average growth 2011-13 in the US was 2.2%, in the UK 1%. The idea that macroeconomic mismanagement left the UK economy in a peculiar mess before the financial crisis is a politically generated myth which is also divorced from the data, as I have argued on a number of occasions.

In one sense it is unfair to single Anthony Seldon out in this respect, because I hear similar mistakes all the time from UK political commentators who profess to be, and may honestly believe they are, objective when it comes to macroeconomic reporting. I suspect the problem is threefold. First, the common feature of these mistakes is that they are repeated endlessly by the government and its supporters. Second, there is group self-affirmation - what Krugman calls ‘Very Serious People’ talk to each other more often than they talk to people acquainted with the data. Third, when some of this group do look for economic expertise, they often talk to ‘experts’ in the City or read the Financial Times. Unfortunately, both sources can and do have their own agendas.

Yet in another sense it is not unfair, because Seldon is a historian, and historians stress the importance of accessing primary sources. The main positive point I want to make is that political commentators need to check the data if they want to avoid making macroeconomic statements that are factually incorrect.