A day after the Fed's latest rate hike, it is appropriate to discuss Taylor rules. Named after a famous 1993 paper (1.7Mb PDF) by John B Taylor, these are reaction functions where the monetary policy instrument (i.e. interest rates) responds to inflation and the output gap. While simple, more often than not they describe the path of official rates rather well.
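The rule from the 1993 paper is simple enough to state in a few lines. Here is a minimal sketch in Python using Taylor's original parameter values: a 2% neutral real rate, a 2% inflation target, and weights of 0.5 on both the inflation gap and the output gap (the function name is my own):

```python
def taylor_rate(inflation, output_gap, neutral_real_rate=2.0, inflation_target=2.0):
    """Taylor (1993) rule: the nominal policy rate implied by current
    inflation and the output gap, all in percentage points."""
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Taylor's own illustration: 4% inflation with a zero output gap
# implies a federal funds rate of 7%.
print(taylor_rate(4.0, 0.0))  # 7.0
```

Note that with inflation at target and a closed output gap, the rule returns the neutral nominal rate of 4% (2% real plus 2% inflation).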
There are several well-known limitations of using Taylor rules. The first is that rigid commitment to an instrument rule, by leaving no room for judgement or discretion, may encourage unhelpful market speculation or prevent a bank from dealing promptly with a crisis. Fed Chairman Alan Greenspan made this point in his speech to the American Economic Association in January 2004, noting that:
...the prescriptions of formal rules can, in fact, serve as helpful adjuncts to policy, as many of the proponents of these rules have suggested. But at crucial points, like those in our recent policy history - the stock market crash of 1987, the crises of 1997-98, and the events that followed September 2001 - simple rules will be inadequate as either descriptions or prescriptions for policy.
Second, the Taylor rule doesn't work very well in low inflation or deflationary economies such as Japan (see Kuttner & Posen 2004, The difficulty of discerning what’s too tight: Taylor rules and Japanese monetary policy, PDF).
The third problem is that Taylor rules based on real-time data can differ significantly from those using historical data, due to revisions. If initial output gap estimates are substantially revised, for example, then so is the interest rate the rule implies. As Athanasios Orphanides argued in an influential 1998 Federal Reserve paper, Monetary Policy Rules Based on Real-Time Data (later published in the American Economic Review, September 2001):
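To see how revisions feed through the rule, consider a hypothetical case (the numbers here are purely illustrative, not drawn from any actual vintage of data): the real-time estimate puts the output gap at -1%, but the revised data later show +1%. With Taylor's 0.5 weight on the gap, the two-point revision moves the implied rate by a full percentage point:

```python
def taylor_rate(inflation, output_gap, neutral=2.0, target=2.0):
    # Taylor (1993) coefficients: 0.5 on each gap
    return neutral + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

inflation = 2.0
real_time_gap = -1.0   # initial estimate: economy below potential
revised_gap = 1.0      # after revision: economy above potential

rt = taylor_rate(inflation, real_time_gap)   # rate implied in real time
rev = taylor_rate(inflation, revised_gap)    # rate implied by revised data
print(rt, rev)  # 3.5 4.5
```

A policymaker following the rule mechanically would have set rates a point too low, judged against the data as they now stand - exactly the Orphanides point.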
Taylor's rule has received considerable attention in large part because he demonstrated that the simple rule described the actual behavior of the federal funds rate rather surprisingly well. But as is well known, the actual variables required for implementation of such a rule - potential output, nominal output, and real output - are not known with any accuracy until much later. That is, the rule does not describe a policy that the Federal Reserve could have actually followed.
Given this uncertainty, central banks tend to be cautious in their interpretation of economic data, and to smooth the path of interest rates. There is now quite a large literature about smoothing; see for example Mehra 2001 and Castelnuovo 2003.
For a Taylor rule junkie like me, though, too much is never enough. So I was fascinated by a new IMF working paper by Alina Carare and Robert Tchaidze, The Use and Abuse of Taylor Rules: How Precisely Can We Estimate Them? The paper usefully surveys the literature, documenting "potential abuses" and complications in the use of Taylor rules. It also highlights "inconsistencies in estimating simple monetary policy rules and their implications for policy advice" by simulating an array of different variants on the Taylor rule through a macroeconomic model. They found that most fitted quite well:
We simulate a macroeconomic model with a backward reaction function similar to Taylor (1993). We estimate different versions of a policy rule, using these simulated data. Under certain circumstances, estimations document an illusionary presence of a lagged interest rate, or of forward-looking behavior. Our results are consistent with the fact that several authors found very different versions of monetary policy rules, all fitting the U.S. data well.
But the authors conclude these results should not be taken at face value:
These results demonstrate that there may be a high degree of statistical illusion, through which an impression of a more sophisticated monetary policy than it actually is, is found.
Don't say you haven't been warned.