If you’ve tried to buy a car or a home recently—or have even just been to the grocery store—I’m sure you’re aware how much prices have jumped over the past year. John Taylor certainly has an opinion on the topic.
Taylor is a professor at Stanford University. While not a household name, he’s a leader in economic circles. Before Jerome Powell was appointed Fed chair in 2018, Taylor was a candidate for that spot. According to one analysis, Taylor’s name is the one most frequently referenced in Fed policymakers’ meetings. That’s because he’s the creator of what’s known as the Taylor rule. And despite being a frequent critic of Fed policy, he and his rule are well respected. So it’s worth taking a closer look.
The purpose of Taylor’s rule is to provide a framework for interest rates—the federal funds rate, specifically. That’s the interest rate the Fed most directly controls. It’s often referred to as the “benchmark rate” and is important because all other interest rates—from mortgages and car loans to savings account rates and bond yields—are influenced by it. When people talk about the Fed raising or lowering interest rates, this is the rate they’re referring to. So it’s enormously important.
For simplicity, I’ll describe the Taylor rule in words. If you want to see the formula itself, you can find it on the Atlanta Fed’s website. But in general terms, this is how it works: It assumes, as a starting point, that inflation and GDP growth should both be about 2% in normal times. Then, it dictates that the federal funds rate should be adjusted upward if the economy is running above those targets or downward if below. These inputs are referred to as the “inflation gap” and the “output gap.” The formula’s underlying logic is based on the Fed’s official mandate: to maintain “maximum employment, stable prices, and moderate long-term interest rates.”
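For readers who’d rather see it sketched out here, below is a minimal version of the original 1993 rule written as a small Python function. The 2% figures for the inflation target and the neutral real rate, and the half weights on the two gaps, follow Taylor’s original formulation; the sample inputs at the bottom are invented purely for illustration.

```python
def taylor_rule(inflation, output_gap,
                neutral_real_rate=2.0, inflation_target=2.0,
                inflation_weight=0.5, output_weight=0.5):
    """Federal funds rate prescribed by the original (1993) Taylor rule.

    All arguments are in percentage points. The output gap is the percent
    deviation of real GDP from its estimated potential.
    """
    inflation_gap = inflation - inflation_target
    return (neutral_real_rate + inflation       # baseline: neutral real rate plus current inflation
            + inflation_weight * inflation_gap  # nudge up or down for inflation above or below target
            + output_weight * output_gap)       # nudge up or down for output above or below potential

# Inflation at its 2% target and output right at potential: a "neutral" 4% rate.
print(taylor_rule(inflation=2.0, output_gap=0.0))  # 4.0

# Inflation at 8% with output 1% above potential: the rule calls for 13.5%.
print(taylor_rule(inflation=8.0, output_gap=1.0))  # 13.5
```

Notice that, because inflation appears both in the baseline and in the inflation gap, the prescribed rate rises more than one-for-one with inflation. That design choice is what keeps the rule ahead of rising prices rather than chasing them.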
This, in general, is what the Fed always does. It raises rates when the economy is running too hot, and it lowers them when things are lagging. But Taylor’s framework is different because it offers a formulaic approach to setting rates. That’s in contrast to the Fed’s current process, which is committee-driven. In Taylor’s view, a rule-driven approach would be preferable for two reasons. First, it would keep Fed policymakers disciplined by removing the subjective element that’s a key ingredient in their decision making. Second, a formulaic approach would offer the general public better visibility on where rates are headed. That’s important because in policymaking, a key goal is to avoid surprises that might cause panic.
In fairness, the Fed does try to telegraph its thinking and its intentions. In press releases and in the chair’s Congressional testimony, the Fed provides “forward guidance” to communicate its expectations for the economy and for interest rates. Forward guidance refers to the carefully scripted phrases used by Fed officials, such as “a balanced approach” and “considerable time.” The Fed’s forward guidance is so carefully watched that news outlets literally parse every word.
The Fed also issues guidance in quantitative form. Four times a year, the Fed’s rate-setting committee polls its members, asking where they see the benchmark rate headed. It then publishes the results in a document called the Summary of Economic Projections.
Taken together, these communications serve a key purpose: They allow the market to adjust gradually to future policy changes. An example is playing out in real time right now. So far this year, the Fed has raised its rate by a total of three-quarters of a percentage point. However, yields on many Treasury bonds have risen by considerably more. That might seem inconsistent until you look at the forward guidance. In its press release, the committee previewed “ongoing increases.” And its most recent Summary of Economic Projections specifically pointed to another full percentage point of increases in the back half of this year. In short, the Fed does a reasonable job communicating its thinking to help the market adjust gradually.
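To see why longer-term Treasury yields can move by more than the Fed’s own rate, it helps to remember the textbook logic that a multi-year yield roughly reflects the average short-term rate investors expect over that horizon. The sketch below leans on that simplification; the rate paths are hypothetical numbers, not forecasts or actual market data.

```python
def rough_yield(expected_short_rates):
    """Approximate a multi-year yield as the average of expected short-term
    rates over the bond's life (a deliberately simplified, textbook view)."""
    return sum(expected_short_rates) / len(expected_short_rates)

# No guidance: markets expect today's 1% benchmark rate to persist for three years.
print(rough_yield([1.0, 1.0, 1.0]))  # 1.00

# Guidance signals further hikes: the same 1% rate today, but a higher expected
# path, so the three-year yield jumps even though the Fed itself has barely moved.
print(rough_yield([1.0, 2.5, 3.0]))  # ~2.17
```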
For that reason, Taylor is less concerned with the Fed’s communications. He is much more concerned with its underlying decision-making process. On too many occasions, he argues, the Fed has left rates at unnecessarily low levels for years at a time. This has had the effect of inflating more than one asset bubble—in housing and in technology stocks, among others. Most recently, Taylor says, the Fed was too complacent about rising inflation, repeatedly calling it “transitory.” If it had been following a more rules-based approach, he argues, rates would have risen much earlier. Instead, the Fed had to play catch-up this year when it finally acknowledged that the heightened inflation was not transitory and was, in fact, getting worse. As Taylor put it recently, “they are strikingly behind.”
Fed officials, though, are quick to push back against Taylor’s formulaic prescription. Boston Fed president Eric Rosengren cited 2007 as an example. At the time, the Taylor rule wouldn’t have registered a problem, but Fed officials were able to detect other signs pointing to a crisis, allowing them to respond more quickly.
Former Fed chair Ben Bernanke has made similar arguments. In a 2015 paper, written after his term ended, Bernanke argued that economic policymaking is far too complex to be delegated to a simple formula. Among the complications: While the inflation gap is straightforward to measure, Bernanke points out that there are several ways to measure the output gap, each of which would yield a different number. In short, Bernanke says, it wouldn’t make sense to rely rigidly on a formula when at least one of its inputs is subjective.
There are other issues with the Taylor rule. In addition to the challenge described above, there’s the question of how to weight the two factors in the formula. In Taylor’s original formula, he weighted the inflation and output gaps equally. But arguments could be made for weighting them differently. After all, unemployment and inflation are both scourges. It’s debatable which is worse.
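To make both objections concrete, here’s a small sketch, reusing the same hypothetical rule as above, that shows how the prescription shifts when you swap in a different output-gap estimate (Bernanke’s point) or put more weight on the output gap than on inflation (the weighting question). The specific numbers are invented for illustration.

```python
def taylor_rule(inflation, output_gap, inflation_weight=0.5, output_weight=0.5,
                neutral_real_rate=2.0, inflation_target=2.0):
    """Same hypothetical Taylor-rule sketch as above, with adjustable weights."""
    return (neutral_real_rate + inflation
            + inflation_weight * (inflation - inflation_target)
            + output_weight * output_gap)

inflation = 5.0  # a hypothetical inflation reading, in percent

# Bernanke's point: different methods give different output-gap estimates,
# and each estimate prescribes a different rate.
for gap in (-1.0, 0.0, 2.0):
    print(f"output gap {gap:+.1f}% -> prescribed rate {taylor_rule(inflation, gap):.1f}%")
# output gap -1.0% -> prescribed rate 8.0%
# output gap +0.0% -> prescribed rate 8.5%
# output gap +2.0% -> prescribed rate 9.5%

# The weighting question: doubling the weight on the output gap moves the
# prescription by a full percentage point in this example (10.5% vs. 9.5%).
print(taylor_rule(inflation, 2.0, output_weight=1.0))  # 10.5
```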
Who’s right in this debate? To be sure, Taylor makes a valid argument. Many agree that the Fed, going back at least to the 1990s, has been too permissive. Rates were held at just 1% well after the economy had regained its footing following the 2000 downturn. And the Fed held rates near zero for years after the 2008 financial crisis, waiting until the very end of 2015 to begin raising them. And though Bernanke strenuously disagrees with Taylor’s criticism, he also validates it by taking the time to write a rebuttal.
At the same time, Bernanke and fellow policymakers have a point when they argue that a formula—any formula—would be necessarily imperfect because the inputs are subjective. Indeed, the Fed’s website describes several alternative formulas, each of which has its own reasonable basis. Also, as Eric Rosengren noted, formulas can’t see around corners.
As an individual investor, what should you conclude from this debate? In my view, the Fed’s Summary of Economic Projections says it all. It’s a dry document, but if you read between the lines, it tells us a lot. For starters, it includes not just the average opinion of committee members but also the range of views. Also, each update includes the committee’s prior view, making it easy to see how its forecasts have changed over time. The upshot: Just as the stock market is unpredictable, so too is government policy. As this document reveals, even policymakers themselves—probably the best-informed economic observers anywhere—regularly disagree with one another and regularly change their views.
That’s why an investor’s best bet, in my view, is to always maintain a balanced portfolio. At any given time, some investment will always feel out of step with current trends. And yet, that itself may be the best litmus test of a well-diversified portfolio. If there’s always at least something that doesn’t feel quite right, then your portfolio might be just right.